The Internet kinda sucks right now. It’s always sucked, in different ways, but the current era we live in feels particularly galling. This piece was inspired by two things that happened on the 29th of May:
Secondly, I had a conversation with a friend about blogs, of all things, and the current state of the Internet. During the course of this, I pointed out that it’s probably never been easier to own a little corner of the Internet and put whatever you want on there, and that “it’s yours for the taking”. I’m a sucker for a well-used idiom, but I’ve been mulling it over, and it’s not quite as accurate as I’d like.
I’ll come back to the idiom later. Until then, please enjoy this long rambling collection of my thoughts on the matter.
Algorithms Rule Everything Around Us (If You Let Them)
The World Wide Web as we know it was originally conceived as a participatory medium. For many, this promise wasn’t fully realised until the advent of social media platforms. These platforms are also the source of the Internet’s current structural flaws and foundational problems. At the same time, the very same platforms are enablers for much of the good that comes out of this stupid network of computers. We’d probably be better off as a society if Facebook was broken up into its constituent parts, but we’d be worse off if the functionality was lost forever.
The rise of algorithmic timelines is a menace. On some sites, it just can’t be avoided – Instagram is very unlikely to bring back the chronologically-ordered feed of posts. For your general web browsing, however, it is possible to escape the algorithmic feed of whatever Facebook wants you to read: just go visit websites. Foster Kamer wrote an excellent piece last year about this very topic.
Internet users of a certain vintage, your humble author included, remember the glory days of Google Reader fondly. We mourn its passing, but it’s still possible to get that experience and create your own newsfeed, from just the websites you’re interested in. RSS readers are still alive and well for computers, phones, and tablets. I’ve been using an open-source project called CommaFeed to bring back that Reader feeling, and if you’ve got a bit of server space, locally or out on the Internet, it’s a great service.
If you care about your online presence, you must own it.
Everything on the Internet feels permanent, right until the moment it doesn’t. It’s not the solution for everyone, but if you really care about the things you create or how you appear, hosting your own website is the best way to preserve it. Web services come and go. Medium didn’t exist 5 years ago, and in 5 years it may not exist anymore. The Internet Archive, blessed and competent as they are, is not a valid backup or recovery strategy for catastrophic platform existence failure.
The Point I’m Trying To Get To Here
A better, kinder, and more open Internet is possible. It’s within our grasp – but we have to make it. This is the same guiding principle behind free/open-source software development: everyone’s able to contribute and make improvements according to their needs. Wikipedia would never work without these principles of openness and creativity, and its success is a testament to the fact that it can work.
With a few exceptions, blogs started to die out around the time Facebook really took off. The primary metric by which a website’s success is judged has become the length of time spent on the site, even as the content itself keeps getting shorter. Catchy videos took the place of long text diatribes, and gargantuan multi-hundred-megabyte web pages became the norm. Website bloat is a known and serious issue, and nearly all of it can be attributed to useless bullshit.1
You might assume, at this point, that I’m feeling pretty bleak and hopeless about the future of the Web. This is a safe assumption to make, but I’m not fully despondent yet. There is still good out there, even if it’s a little harder to find than it perhaps once was. Even better, you can make meaningful changes to your Internet experience and start rediscovering the joy in an afternoon or two.
Five steps to a better Internet:
Run a good ad blocker. You might even notice that your computer’s running better without the weight of all those ads.
Support people who make things you like. Patreon, flaws and all, is probably one of the best things to have happened for people who create things online. Some of the best journalism out there is behind a paywall. In the current political climate, supporting a free press has never been more important.2
Be intentional. Visit websites and services you find interesting, and make a point of consciously going there instead of being led somewhere by the whims of an algorithm-based recommendation. While you’re at it, stop reading comment sections. With very rare exceptions, a public forum for people to tack their thoughts onto the end of other people’s content will inevitably result in unpleasantness.3
Consider renting a server. Virtual servers can be had for $5 a month, and although they’re not powerful, that’s more than enough to host a website like this one. A small server is an excellent starting point, but once you’ve discovered the possibilities opened up by having your own server, you may end up wanting an upgrade to something more powerful. This isn’t a prerequisite, but you may find it useful if you decide to…
Make something! Record a song, learn to code, take some photos, write a shitty blog like this one. If you feel like something’s missing from the Internet, you could be the one to put it there.
No great work or human endeavour was ever accomplished without at least a little effort. The computing revolution offered hope that the effort required would be drastically reduced and made accessible to everyone. The dream is murky, but it’s still out there. I wrote this pompous-ass manifesto because I felt inspired to finally do something meaningful with this blog I set up. My hope for myself is that I’ll keep writing and keep sharing it.
In conclusion, here’s that idiom from earlier that I’ve been thinking about:
The Internet is yours for the taking, but it is also yours for the making.
At the time of writing, European Union privacy regulations are helping me prove my point. USA Today created a version of their website that lacks advertiser tracking scripts, and it reduced their front page's loading time from 45 seconds to under 5.
This assumes, of course, that your local newspaper hasn't been purchased by a hedge fund or some other nefarious force. Spend your money wisely.
Long and angry Twitter reply threads are definitely included here. Twitter is a double-edged sword: I'm personally fond of it and it can be fantastic, but it's entirely too easy to become embroiled in pointless drama.
You may recall (or have searched for) this old post, where I documented the process needed to get Linux booting on a Skylake-based iMac. Since that post, a few things have changed, and it’s about time for an updated post now that I’ve had a few minutes to try some things out.
Luckily, unlike last time, most recent versions of popular Linux distributions come with shiny new kernels. I’ve tested this with Fedora 27 and Ubuntu 17.10 – I fully expect Arch, or any distro with a default kernel of 4.13 or greater, to work similarly. Here’s the process:
Optional: Use Disk Utility to shrink the Mac OS partition and make some unallocated free space for Linux. You don’t have to do this, but it may come in handy if Apple releases a firmware update. I’d also recommend leaving a small Mac OS partition if you’re using Ubuntu – see the footnote below.1
Prepare a USB flash drive using the installation ISO of your choice.
Hold down the Option key while booting up your Mac, and select the flash drive. It may show up as EFI Boot.
Press e at the installer’s GRUB menu to edit the boot parameters. You’ll need to add this to the end of the line that usually ends with quiet splash or similar:
Press F10 to continue booting up. Only one CPU core will be visible to the system as a result of these kernel flags – this is temporary, however. Full performance will be restored later.2
Continue the installation process – if you’re using the whole SSD and not planning to dual-boot Mac OS, feel free to let the Linux installer do what it wants with the whole drive. If you are dual-booting, you may need to configure the partition layout manually. Be sure to make a small (512MB or so) partition for /boot, where the bootloader will be installed.
After installation completes, reboot into the new Linux installation. At the GRUB screen, press e and add the following kernel flags:
irqpoll no_timer_check nomodeset
Press F10 to continue booting.
After installation, edit the GRUB default configuration to include the kernel flags from step 5 by default. This varies based on distribution – the defaults are typically located in /etc/default/grub. Do note that the Fedora installer includes any kernel flags set in the installation ISO by default, so you’ll need to remove them from the file.
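As a sketch, the edit usually amounts to appending the flags to the default kernel command line and regenerating the config (the exact file contents and regeneration command vary by distribution):

```shell
# /etc/default/grub (excerpt): append the step-5 flags to the default
# kernel command line so every boot picks them up.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash irqpoll no_timer_check nomodeset"

# Then regenerate the GRUB config:
#   Ubuntu: sudo update-grub
#   Fedora: sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```

On Fedora, this is also where you’d delete any leftover flags the installer carried over from the ISO.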
In my testing, this setup persists and remains stable after installation and updates. Fedora 27, for example, was updated from kernel 4.13.9 to 4.14.14 as part of the standard system update process, and the kernel flags were retained and continue to work.
In recent versions of Ubuntu, the default behaviour is to not show the GRUB menu at all if Ubuntu is the only detected operating system installed. A dual-boot configuration will enable the menu by default. You can also attempt to modify the system's GRUB settings from a live USB, but I haven't had the opportunity to test this to any significant degree.
I don't know why this doesn't work using the kernel flags that are used after the system is installed. I can only chalk it up to some weird difference in the live ISO environment.
I can’t deal with iTunes anymore. It’s finally become too terrible to use on a regular basis. This, along with the impending failure of the hard drive in my beloved iPod classic, led to my purchase of a new MP3 player.1 My choice was the Sony Walkman NW-A35, which has Bluetooth and plays FLACs and accepts a microSD card – it does all the things you’d want an MP3 player to do nowadays. It also has a headphone jack.
My two gripes with it:
It uses a weird proprietary cable for charging and data transfer.2
The software utilities provided for transferring files and playlists to it are, not to put too fine a point on it, fucking useless.
One of these problems is easily fixed though, since the internal storage and the SD card both show up as normal mass storage devices when you plug it into a computer. Here’s where the real fun begins: my desktop music player/library of choice post-iTunes is Clementine, an excellent open-source piece of software that does everything I want it to do. The playlist files it generates, however, use absolute paths into my music library. Even more frustratingly, for some unknown reason it refuses to actually copy files to the Walkman on my Mac. Instead of troubleshooting that issue, I wrote a quick Bash script that copies files to an arbitrary destination while preserving the artist>album folder hierarchy, and then creates a new M3U playlist file with relative paths suitable for use in my MP3 player. I suspect it’ll work for other MP3 players too, although I don’t have any to test.
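The core of that approach boils down to something like the following sketch. To be clear, this is a simplified reimplementation, not the actual script, and it assumes your library is laid out as artist/album/track:

```shell
#!/usr/bin/env bash
# Simplified sketch (not the real m3umangle): copy every track listed in
# an absolute-path M3U playlist into a destination directory, preserving
# the artist/album directory pair, and write a new playlist whose paths
# are relative to that destination.
copy_playlist() {
  local src_playlist="$1" dest="$2"
  local out="$dest/$(basename "$src_playlist")"
  : > "$out"
  while IFS= read -r track; do
    [ -f "$track" ] || continue   # skip blank lines, comments, missing files
    local album artist
    album="$(basename "$(dirname "$track")")"
    artist="$(basename "$(dirname "$(dirname "$track")")")"
    mkdir -p "$dest/$artist/$album"
    cp "$track" "$dest/$artist/$album/"
    printf '%s\n' "$artist/$album/$(basename "$track")" >> "$out"
  done < "$src_playlist"
}
```

Invoked as, say, `copy_playlist ~/playlists/road-trip.m3u /Volumes/WALKMAN/MUSIC`, it leaves both the copied files and a device-friendly playlist on the player.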
You can find the script I wrote, m3umangle, at this GitHub repository. Pull requests are accepted and appreciated.
If you’re using the excellent netdata for server monitoring and want to stick the data in a database for long-term storage and pretty graphs, this generally works. If you’re feeling fancy, you can set up retention policies in InfluxDB – this post assumes you’ve already got a working InfluxDB + Grafana stack set up somewhere. Kudos to this blog post, which is referenced by the netdata documentation.
Add this section to your netdata.conf file, replacing the $-prefixed placeholders ($INFLUXDB, $PORT, and friends) with values appropriate for your setup:
[backend]
    enabled = yes
    data source = average
    type = opentsdb
    destination = tcp:$INFLUXDB:$PORT
    prefix = $PREFIX
    hostname = $HOSTNAME
    host tags = $TAG
    update every = 10
    buffer on failures = 10
    timeout ms = 20000
    # send names instead of ids = yes
    # send charts matching = *
Over on your InfluxDB server, make a database for all this data to end up in, and make sure you’ve got an OpenTSDB listening service in influxdb.conf:
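For reference, the relevant section of an InfluxDB 1.x config looks something like this. The database name and port here are assumptions; the port must match the destination you set in netdata.conf:

```toml
# influxdb.conf -- accept OpenTSDB-protocol writes from netdata
[[opentsdb]]
  enabled = true
  bind-address = ":4242"   # must match $PORT in the netdata destination
  database = "netdata"     # create this database before starting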
Once you’ve restarted both services, netdata should be happily filling your InfluxDB server with data. You can grab an example Grafana dashboard in JSON format here – just be sure to replace $PREFIX with the prefix you set in the netdata config file, and $HOSTNAME with the hostname. It should start showing data immediately.
This post has been superseded by an updated version, available here.
After bashing my head against the metaphorical wall for a week or so, I have finally managed to install Linux on a 27” Retina iMac with Skylake. This is what I’ve learned, and it might help you if you’re trying to do the same thing.
The usual procedure applies here, if you’ve installed Linux on an Intel Mac before. This method doesn’t require you to install any additional software like rEFInd, although you certainly can if you want to. It might be easier if you plan on triple-booting with Windows as well.1
Bottom line, this is going to work best with Linux kernel 4.6 or later. This is due to both the Skylake support that exists in that kernel, and the AMDGPU open-source driver, which works very nicely with the R9 graphics cards that Retina iMacs ship with. The easiest way to accomplish this is to use a distribution that’s already shipping with kernel 4.6 or greater. I’ve had success with Arch and Fedora using the following process:
Prepare a USB flash drive using the distribution’s live ISO.
Hold down the Option key when booting the Mac, and select your flash drive.
Edit the GRUB entry for the live installation environment with the following kernel parameters:
You may notice that Ctrl-X doesn’t work in GRUB. I don’t know why that is, and would love to know the reason. Press F10 to boot from the GRUB editing screen - some distributions mention F10 in their instructions, and some don’t. It should work universally, though.
Install Linux as usual to any empty non-partitioned space on your hard drive.3 It’s okay (and required, if you’re not using rEFInd) to make a small EFI partition for Linux to install the bootloader to. In the future, when switching between operating systems, you can hold down the Option key when booting to pick from your available partitions.
Reboot from the installer into your fresh Linux installation. You’ll need to edit the GRUB entry using the parameter from step 3 again (for the last time, I swear!)
Once you’ve booted into Linux, edit your GRUB configuration files to include the parameters from step 3 by default. Regenerate the GRUB files using your distro’s tools, and reboot one last time to make sure it works. If it doesn’t work, you can always edit the entry at boot time again to get back into the system.
Distributions with old kernels
The following notes are especially useful for people who want to install Ubuntu – but the basic concept is the same for any distribution you like that doesn’t yet come with kernel 4.6 or newer. Basically, you’re doing the same thing as above, except with some different kernel parameters to get the OS to boot. Once the OS is installed, you’ll be installing kernel 4.6 or greater, and then using the parameters from step 3 above.
Edit the GRUB entry with parameters acpi=off nomodeset when booting from the installation ISO. These are bare-bones parameters that only show 1 core and will deliver low performance, but this isn’t an issue for the installation process. Full performance will be restored later.
After installation and rebooting, install kernel 4.6. Most distributions provide some way of obtaining mainline Linux kernels that are pre-packaged for the distro’s package manager. For example, you can grab 4.6.3 (the newest kernel at time of writing) from Ubuntu’s mainline kernel PPA and install it with apt. Once it’s installed, edit the GRUB config with those parameters from step 3 above, and you should be all set.
Screw you, Canonical
Unity won’t work without the GPU drivers installed. This means that the normal Ubuntu installer ISO will not work. Point blank. Don’t bother. Unity uses Compiz as its rendering engine, and it won’t load unless it’s happy with the GPU driver situation. This means the installer itself won’t load either. Good job, guys.
To install Ubuntu, grab the Xubuntu installer. (Any desktop environment that doesn’t use Compiz by default will work.) After you’ve installed the OS, feel free to install the DE of your choice.
As of today (2016-07-11), this information should be correct. Future kernel versions will hopefully remove the need for some of these settings. Until that time, good luck!
It's generally recommended to keep a small partition on your drive with Mac OS left installed on it, since firmware updates are provided only via Mac executables.
After experimentation, no_timer_check was the secret sauce that got the iMac booting with all 4 CPU cores. Using nolapic as a kernel flag also boots successfully, but with only 1 core visible to the OS.
The irqpoll option is necessary due to some observed keyboard weirdness. Finally, using radeon.modeset=1 allows the internal DisplayPort system that drives the LCD to negotiate display resolution and other such details. Setting it to 0 is useful for troubleshooting purposes, and will force a lower resolution.
You did shrink your Mac OS partition to make free space for Linux before you started, right?
Every December, vintage subway trains run in New York City. I grabbed some pictures of them.
In retrospect, I should’ve tried to get more pictures of the outside of the train. The platforms were a little bit crowded to try and get good clear shots, though. More pictures are on my Flickr for your perusal.
This site1 uses Jekyll to generate static pages, which I manage using GitLab. I was using a hacked-together local Git hook to regenerate the site and copy the files to my webserver’s root directory after each commit, but I wanted a solution that both worked no matter where I was committing and pushing changes from and didn’t involve PHP.2
The obvious solution is a Web hook. I found a couple of PHP scripts that probably would have worked, but I really wasn’t in the mood to futz around with my kinda-working PHP configuration. I then found git-fish, a Node.js web hook listener. Luckily, my quick-and-dirty bash script still works fine. Here’s a blank example you can use:3
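A minimal sketch of such a deploy script might look like this. The paths and the plain `jekyll build` invocation are assumptions to adapt for your own server; the idea is just to pull the commit that fired the hook and rebuild into the webroot:

```shell
#!/usr/bin/env bash
# Hypothetical deploy hook for git-fish: pull the pushed commit,
# then regenerate the static site straight into the webroot.
deploy_site() {
  local site_dir="$1" web_root="$2"
  cd "$site_dir" || return 1
  git pull --ff-only                      # fetch the commit that fired the hook
  jekyll build --destination "$web_root"  # rebuild the static pages
}
```

Wrapped in a script, git-fish would call it on each push event, e.g. `deploy_site ~/site /var/www/html`.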
Once you’ve done this, configure git-fish to listen on a certain port (and make sure that port’s open on the firewall), have it call the script when it receives an event, and you should be done!