I can’t deal with iTunes anymore. It’s finally become too terrible to use on a regular basis. This, along with the impending failure of the hard drive in my beloved iPod classic, led to my purchase of a new MP3 player.1 My choice was the Sony Walkman NW-A35, which has Bluetooth and plays FLACs and accepts a microSD card – it does all the things you’d want an MP3 player to do nowadays. It also has a headphone jack.
My two gripes with it:
It uses a weird proprietary cable for charging and data transfer.2
The software utilities provided for transferring files and playlists to it are, not to put too fine a point on it, fucking useless.
One of these problems is easily fixed, though, since the internal storage and the SD card both show up as normal mass storage devices when you plug it into a computer. Here’s where the real fun begins: my desktop music player/library of choice post-iTunes is Clementine, an excellent open-source piece of software that does everything I want it to do. The playlist files it generates, however, are based on absolute paths to my music library. Even more frustratingly, for some unknown reason it refuses to actually copy files to the Walkman on my Mac. Instead of troubleshooting this issue, I wrote a quick Bash script that copies files to an arbitrary destination while preserving the artist>album folder hierarchy, and then creates a new M3U playlist file with relative paths suitable for use in my MP3 player. I suspect it’ll work for other MP3 players too, although I don’t have any to test with.
You can find the script I wrote, m3umangle, at this GitHub repository. Pull requests are accepted and appreciated.
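In outline, the script does something like this. (This is a simplified sketch of the approach, not the actual m3umangle code — the function name and argument handling here are illustrative.)

```shell
#!/usr/bin/env bash
set -euo pipefail

# m3u_copy SRC_PLAYLIST DEST
# Copies each track listed in SRC_PLAYLIST (absolute paths) into
# DEST/artist/album/, then writes DEST/<playlist name> containing
# relative paths the MP3 player can resolve.
m3u_copy() {
    local src_playlist="$1" dest="$2"
    local out_playlist="$dest/$(basename "$src_playlist")"
    : > "$out_playlist"                       # start a fresh playlist
    local track artist album
    while IFS= read -r track; do
        [[ -z "$track" || "$track" == \#* ]] && continue  # skip blanks/comments
        # Assume a library laid out as .../artist/album/track
        artist="$(basename "$(dirname "$(dirname "$track")")")"
        album="$(basename "$(dirname "$track")")"
        mkdir -p "$dest/$artist/$album"
        cp "$track" "$dest/$artist/$album/"
        printf '%s\n' "$artist/$album/$(basename "$track")" >> "$out_playlist"
    done < "$src_playlist"
}
```

Usage would look something like `m3u_copy ~/Playlists/roadtrip.m3u /Volumes/WALKMAN/MUSIC`.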
If you’re using the excellent netdata for server monitoring and want to stick the data in a database for long-term storage and pretty graphs, here’s a setup that works well. If you’re feeling fancy, you can also set up retention policies in InfluxDB – this post assumes you’ve already got a working InfluxDB + Grafana stack set up somewhere. Kudos to this blog post, which is referenced by the netdata documentation.
Add this section to your netdata.conf file, replacing $INFLUXDB and $PORT with the address and port of your InfluxDB server (and the other $VARIABLES with values that suit you):
[backend]
enabled = yes
data source = average
type = opentsdb
destination = tcp:$INFLUXDB:$PORT
prefix = $PREFIX
hostname = $HOSTNAME
update every = 10
buffer on failures = 10
timeout ms = 20000
host tags = $TAG
# send names instead of ids = yes
# send charts matching = *
Over on your InfluxDB server, make a database for all this data to end up in, and make sure you’ve got an OpenTSDB listener service enabled in influxdb.conf:
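The relevant section looks something like this – the database name and port here are placeholders, so match them to your own setup (and to the $PORT you used in netdata.conf):

```toml
[[opentsdb]]
  enabled = true
  bind-address = ":4242"   # the port netdata sends to
  database = "netdata"     # the database you created above
```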
Once you’ve restarted both services, netdata should be happily filling your InfluxDB server with data. You can grab an example Grafana dashboard in JSON format here – just be sure to replace $PREFIX with the prefix you set in the netdata config file, and $HOSTNAME with the hostname. It should start showing data immediately.
After bashing my head against the metaphorical wall for a week or so, I have finally managed to install Linux on a 27” Retina iMac with Skylake. This is what I’ve learned, and it might help you if you’re trying to do the same thing.
The usual procedure applies here, if you’ve installed Linux on an Intel Mac before. This method doesn’t require you to install any additional software like rEFInd, although you certainly can if you want to. It might be easier if you plan on triple-booting with Windows as well.1
Bottom line, this is going to work best with Linux kernel 4.6 or later. This is due both to the Skylake support in that kernel and to the AMDGPU open-source driver, which works very nicely with the R9 graphics cards that Retina iMacs ship with. The easiest way to accomplish this is to use a distribution that’s already shipping with kernel 4.6 or greater. I’ve had success with Arch and Fedora with the following:
Prepare a USB flash drive using the distribution’s live ISO.
Hold down the Option key when booting the Mac, and select your flash drive.
Edit the GRUB entry for the live installation environment to add the following kernel parameters (explained in the footnotes below): no_timer_check irqpoll radeon.modeset=1
You may notice that Ctrl-X doesn’t work in GRUB. I don’t know why that is, and would love to know the reason. Press F10 to boot from the GRUB editing screen – some distributions mention F10 in their instructions, and some don’t. It should work universally, though.
Install Linux as usual to any empty non-partitioned space on your hard drive.3 It’s okay (and required, if you’re not using rEFInd) to make a small EFI partition for Linux to install the bootloader to. In the future, when switching between operating systems, you can hold down the Option key when booting to pick from your available partitions.
Reboot from the installer into your fresh Linux installation. You’ll need to edit the GRUB entry using the parameters from step 3 again (for the last time, I swear!)
Once you’ve booted into Linux, edit your GRUB configuration files to include the parameters from step 3 by default. Regenerate the GRUB files using your distro’s tools, and reboot one last time to make sure it works. If it doesn’t work, you can always edit the entry at boot time again to get back into the system.
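On most distributions that means editing /etc/default/grub and then regenerating grub.cfg – something like this sketch (the exact regeneration command varies by distro):

```shell
# /etc/default/grub — append the parameters from step 3 to the default cmdline:
GRUB_CMDLINE_LINUX_DEFAULT="quiet no_timer_check irqpoll radeon.modeset=1"

# Then regenerate the GRUB config, e.g.:
#   Debian/Ubuntu: sudo update-grub
#   Arch:          sudo grub-mkconfig -o /boot/grub/grub.cfg
#   Fedora:        sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```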
Distributions with old kernels
The following notes are especially useful for people who want to install Ubuntu – but the basic concept is the same for any distribution you like that doesn’t yet come with kernel 4.6 or newer. Basically, you’re doing the same thing as above, except with some different kernel parameters to get the OS to boot. Once the OS is installed, you’ll be installing kernel 4.6 or greater, and then using the parameters from step 3 above.
Edit the GRUB entry with parameters acpi=off nomodeset when booting from the installation ISO. These are bare-bones parameters that only show 1 core and will deliver low performance, but this isn’t an issue for the installation process. Full performance will be restored later.
After installation and rebooting, install kernel 4.6. Most distributions provide some way of obtaining mainline Linux kernels that are pre-packaged for the distro’s package manager. For example, you can grab 4.6.3 (the newest kernel at time of writing) from Ubuntu’s mainline kernel PPA and install the .deb packages with dpkg. Once it’s installed, edit the GRUB config with those parameters from step 3 above, and you should be all set.
Screw you, Canonical
Unity won’t work without the GPU drivers installed. This means that the normal Ubuntu installer ISO will not work. Point blank. Don’t bother. Unity uses Compiz as its rendering engine, and it won’t load unless it’s happy with the GPU driver situation. This means the installer itself won’t load either. Good job, guys.
To install Ubuntu, grab the Xubuntu installer. (Any desktop environment that doesn’t use Compiz by default will work.) After you’ve installed the OS, feel free to install the DE of your choice.
As of today (2016-07-11), this information should be correct. Future kernel versions will hopefully remove the need for some of these settings. Until that time, good luck!
It’s generally recommended to keep a small partition on your drive with Mac OS left installed on it, since firmware updates are provided only via Mac executables.
After experimentation, no_timer_check was the secret sauce that got the iMac booting with all 4 CPU cores. Using nolapic as a kernel flag also boots successfully, but with only 1 core visible to the OS.
The irqpoll option is necessary due to some observed keyboard weirdness. Finally, using radeon.modeset=1 allows the internal DisplayPort system that drives the LCD to negotiate display resolution and other such details. Setting it to 0 is useful for troubleshooting purposes, and will force a lower resolution.
You did shrink your Mac OS partition to make free space for Linux before you started, right?
Every December, vintage subway trains run in New York City. I grabbed some pictures of them.
In retrospect, I should’ve tried to get more pictures of the outside of the train. The platforms were a little bit crowded to try and get good clear shots, though. More pictures are on my Flickr for your perusal.
This site1 uses Jekyll to generate static pages, which I manage using GitLab. I was using a hacked-together local Git hook to regenerate the site and copy the files to my webserver’s root directory after each commit, but I wanted a solution that both worked no matter where I was committing and pushing changes from and didn’t involve PHP.2
The obvious solution is a Web hook. I found a couple of PHP scripts that probably would have worked, but I really wasn’t in the mood to klutz around with my kinda-working PHP configuration. I then found git-fish, a Node.js web hook listener. Luckily, my quick-and-dirty bash script still works fine. Here’s a blank example you can use:3
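Something along these lines – the repository path, branch, and webroot are placeholders you’ll want to swap for your own:

```shell
#!/usr/bin/env bash
# Rebuild the Jekyll site and publish it when git-fish reports a push.
set -euo pipefail

REPO_DIR="/home/me/site"    # local clone of the GitLab repo
WEB_ROOT="/var/www/html"    # webserver document root

cd "$REPO_DIR"
git pull origin master              # fetch the commit that triggered the hook
jekyll build --destination _site    # regenerate the static pages
rsync -a --delete _site/ "$WEB_ROOT/"
```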
Once you’ve done this, configure git-fish to listen on a certain port (and make sure that port’s open on the firewall), have it call the script when it receives an event, and you should be done!