I was always under the assumption that the constant onward march of technology would lead to things getting LESS expensive.
Washing the dishes isn’t fun. It especially sucks when they’ve just had tomato sauce in them. The microwave’s right there, and it’s much faster than waiting for a big ol’ pot of water to boil. Here’s how I cook pasta for one person.
The cook times on the back of the pasta box are dead wrong. I like it al dente, and if I followed those times I’d have sad and limp noodles.
Before I go any further, you really should go read John Siracusa’s magnum opus on how to cook pasta. If you’re cooking for 2 or more people, just follow his guidelines instead. I’ve cribbed from this liberally for the microwave method I use.
No, really, go read it. This’ll still be here when you’re done.
You can buy plastic gizmos at the store, or you’ll see them advertised on TV, that claim to be perfect for cooking pasta in the microwave. They probably work, but…there’s no secret to them. I use a large Pyrex1 measuring cup and it works fine. The key part of those devices has nothing to do with the device itself: it’s just that they tell you to cook the pasta for 2 to 3 minutes longer than the box says to. We’ve already established that the box times are bullshit, but since I usually knock 2 to 3 minutes off that time to get a good estimate of how long to actually cook the pasta for…
…just cook the pasta in the microwave for as long as it says on the box. Really.
Pour some pasta into a microwave-safe vessel with plenty of extra room at the top. I use a 4-cup Pyrex measuring cup. Salt the living daylights out of your pasta, then add cold water from the tap. There should be enough water to cover the pasta after it expands, and ideally your vessel is large enough that boiling water won’t escape over the sides. Then, throw it in the microwave and set the timer to 75% of the box’s recommended cook time. (If it’s a range of times, set it to the lower option). I like to do this so there’s a chance to check on the pasta before it goes beyond the point of no return. Most of the time, the remaining 25% will get you right there.
Warm bowls are really nice. It’s a little thing, but every little thing helps. This is another tip I learned from Siracusa’s piece: put the pasta bowl(s) under your strainer, and let that hot pasta water sit in the bowl until it’s ready to accept the pasta. You may find this easier if you get a colander with arms that sits on the edge of the sink so you don’t have to hold it – I certainly do. Once you’ve strained the pasta, pour it back into your warm just-microwaved vessel and add whatever sauce you’re going to add. Since I assume you’re also cooking for one, I’m not going to go on about the superiority of homemade pasta sauce here. I’ve never made it. Just crack that jar open and pour some in, mix it up well, then empty your serving/eating bowl (careful, it’s hot and full of pasta water!) and refill it with pasta and sauce. Grate2 some Parmesan cheese on top while you’re at it. If you’re clever, you had some garlic bread cooking away in the toaster oven while you were doing all this.
A uniquely 2018 sense of serendipity occurs when you’re listening to an old Spotify playlist and a song that disappeared from streaming months ago suddenly shows back up.
Why do I feel pleased by this, and not annoyed like I did when it disappeared in the first place?
The Internet kinda sucks right now. It’s always sucked, in different ways, but the current era we live in feels particularly galling. This piece was inspired by two things that happened on the 29th of May:
I’ll come back for the idiom later. Until then, please enjoy this long rambling collection of my thoughts on the matter.
The World Wide Web as we know it was originally conceived as a participatory medium. For many, this promise wasn’t fully realised until the advent of social media platforms. These platforms are also the source of the Internet’s current structural flaws and foundational problems. At the same time, the very same platforms enable much of the good that comes out of this stupid network of computers. We’d probably be better off as a society if Facebook was broken up into its constituent parts, but we’d be worse off if the functionality was lost forever.
The rise of algorithmic timelines is a menace. On some sites, it just can’t be avoided – Instagram is very unlikely to bring back the chronologically-ordered feed of posts. For your general web browsing, however, it is possible to escape the algorithmic feed of whatever Facebook wants you to read: just go visit websites. Foster Kamer wrote an excellent piece last year about this very topic.
Internet users of a certain vintage, your humble author included, remember the glory days of Google Reader fondly. We mourn its passing, but it’s still possible to get that experience and create your own newsfeed, from just the websites you’re interested in. RSS readers are still alive and well for computers, phones, and tablets. I’ve been using an open-source project called CommaFeed to bring back that Reader feeling, and if you’ve got a bit of server space, locally or out on the Internet, it’s a great service.
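If you want to try self-hosting it, CommaFeed publishes a Docker image, which makes for a quick start. A rough sketch — the image name, port, and flags here are my assumptions, so check the CommaFeed docs for the real instructions:

```shell
# Run CommaFeed in the background on port 8082 (assumed default).
docker run -d --name commafeed -p 8082:8082 athou/commafeed
```

Once it’s up, point a browser at the port you chose and start adding feeds.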
This isn’t the first time someone’s written this kind of thought. Marco Arment said it better all the way back in 2011:
If you care about your online presence, you must own it.
Everything on the Internet feels permanent, right until the moment it doesn’t. It’s not the solution for everyone, but if you really care about the things you create or how you appear, hosting your own website is the best way to preserve it. Web services come and go. Medium didn’t exist 5 years ago, and in 5 years it may not exist anymore. The Internet Archive, blessed and competent as they are, is not a valid backup or recovery strategy for catastrophic platform existence failure.
A better, kinder, and more open Internet is possible. It’s within our grasp – but we have to make it. This is the same guiding principle behind free/open-source software development: everyone’s able to contribute and make improvements according to their needs. Wikipedia would never work without these principles of openness and creativity, and its success is a testament to the fact that it can work.
With a few exceptions, blogs started to die out around the time Facebook really started to gain in popularity. The primary metric by which a website’s success is judged has become the length of time spent on the site, but the length of the content keeps getting smaller. Catchy videos took the place of long text diatribes, and gargantuan multi-hundred-megabyte web pages became the norm. Website bloat is a known and serious issue, and nearly all of it can be attributed to useless bullshit.1
You might assume, at this point, that I’m feeling pretty bleak and hopeless about the future of the Web. This is a safe assumption to make, but I’m not fully despondent yet. There is still good out there, even if it’s a little harder to find than it perhaps once was. Even better, you can make meaningful changes to your Internet experience and start rediscovering the joy in an afternoon or two.
No great work or human endeavour was accomplished without at least a little effort. The computing revolution offered hope that the level of effort required would be drastically reduced and made accessible to everyone. The dream is murky, but it’s still out there. I wrote this pompous-ass manifesto because I felt inspired to finally do something meaningful with this blog I set up. My hope for myself is that I’ll keep writing and keep sharing it.
In conclusion, here’s that idiom from earlier that I’ve been thinking about:
You may recall (or have searched for) this old post, where I documented the process needed to get Linux booting on a Skylake-based iMac. Since that post, a few things have changed, and it’s about time for an updated post now that I’ve had a few minutes to try some things out.
Luckily, unlike last time, most recent versions of popular Linux distributions come with shiny new kernels. I’ve tested this with Fedora 27 and Ubuntu 17.10 – I fully expect Arch, or any distro with a default kernel of 4.13 or greater, to work similarly. Here’s the process:
Optional: Use Disk Utility to shrink the Mac OS partition and make some unallocated free space for Linux. You don’t have to do this, but it may come in handy if Apple releases a firmware update. I’d also recommend leaving a small Mac OS partition if you’re using Ubuntu – see the footnote below.1
Prepare a USB flash drive using the installation ISO of your choice.
Hold down the Option key while booting up your Mac, and select the flash drive. It may show up as “EFI Boot”. Press e at the installer’s GRUB menu to edit the boot parameters. You’ll need to add this to the end of the line that usually ends with quiet splash or similar:
Press F10 to continue booting up. Only one CPU core will be visible to the system as a result of these kernel flags – this is temporary, however. Full performance will be restored later.2
Continue the installation process – if you’re using the whole SSD and not planning to dual-boot Mac OS, feel free to let the Linux installer do what it wants with the whole drive. If you are dual-booting, you may need to configure the partition layout manually. Be sure to make a small (512MB or so) partition for /boot, where the bootloader will be installed.
After installation completes, reboot into the new Linux installation. At the GRUB screen, press e and add the following kernel flags:
irqpoll no_timer_check nomodeset
Press F10 to continue booting.
After installation, edit the GRUB default configuration to include the kernel flags from step 5 by default. This varies based on distribution – the defaults are typically located in /etc/default/grub. Do note that the Fedora installer includes any kernel flags set in the installation ISO by default, so you’ll need to remove them from the file.
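For example, on Ubuntu the relevant line in /etc/default/grub would end up looking something like this (a sketch – your existing defaults may differ):

```shell
# /etc/default/grub — append the flags from step 5 to the default cmdline
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash irqpoll no_timer_check nomodeset"
```

After editing, regenerate the config with sudo update-grub on Ubuntu, or with grub2-mkconfig (pointing -o at your distro’s grub.cfg location) on Fedora.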
In my testing, this setup persists and remains stable after installation and updates. Fedora 27, for example, was updated from kernel 4.13.9 to 4.14.14 as part of the standard system update process, and the kernel flags were retained and continue to work.
I can’t deal with iTunes anymore. It’s finally become too terrible to use on a regular basis. This, along with the impending failure of the hard drive in my beloved iPod classic, led to my purchase of a new MP3 player.1 My choice was the Sony Walkman NW-A35, which has Bluetooth and plays FLACs and accepts a microSD card – it does all the things you’d want an MP3 player to do nowadays. It also has a headphone jack.
My two gripes with it:
One of these problems is easily fixed though, since the internal storage and the SD card both show up as normal mass storage devices when you plug it into a computer. Here’s where the real fun begins: my desktop music player/library of choice post-iTunes is Clementine, an excellent open-source piece of software that does everything I want it to do. The playlist files it generates, however, are based on absolute paths to my music library. Even more frustratingly, for some unknown reason on my Mac it refuses to actually copy files to the Walkman. Instead of troubleshooting this issue, I wrote a quick Bash script to copy files to an arbitrary destination while preserving the artist>album folder hierarchy, and then create a new M3U playlist file with relative paths suitable for use in my MP3 player. I suspect it’ll work for other MP3 players too, although I don’t have any to test.
You can find the script I wrote, m3umangle, at this GitHub repository. Pull requests are accepted and appreciated.
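If you’re curious how it works, the gist is easy to sketch. This is not the actual m3umangle code – the function name and arguments below are made up for illustration – but it shows the core trick: rebuild each track’s relative path from its artist and album directory names.

```shell
#!/usr/bin/env bash
# Illustrative sketch, not the real m3umangle. Reads an M3U full of absolute
# paths, copies each track into a destination while keeping the artist/album
# hierarchy, and writes a new M3U with relative paths for the player.

copy_playlist() {
    local playlist="$1" dest="$2"
    local out="$dest/$(basename "$playlist")"
    mkdir -p "$dest"
    : > "$out"   # start with an empty output playlist

    local track album_dir artist_dir rel
    while IFS= read -r track; do
        # Skip blank lines and M3U comments/directives
        [[ -z "$track" || "$track" == \#* ]] && continue
        album_dir="$(dirname "$track")"       # .../Artist/Album
        artist_dir="$(dirname "$album_dir")"  # .../Artist
        rel="$(basename "$artist_dir")/$(basename "$album_dir")/$(basename "$track")"
        mkdir -p "$dest/$(dirname "$rel")"
        cp "$track" "$dest/$rel"
        printf '%s\n' "$rel" >> "$out"        # relative path for the player
    done < "$playlist"
}
```

Point it at a Clementine-exported playlist and the mounted Walkman, and everything lands in the right folders with a playlist the device can actually resolve.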
If you’re using the excellent netdata for server monitoring and want to stick the data in a database for long-term storage and pretty graphs, this generally works. If you’re feeling fancy, you can set up retention policies in InfluxDB – this post assumes you’ve already got a working InfluxDB + Grafana stack set up somewhere. Kudos to this blog post, which is referenced by the netdata documentation.
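For reference, a retention policy in InfluxDB 1.x is a one-liner. This sketch assumes a database named opentsdb (as created below) and keeps 30 days of data:

```shell
influx -execute 'CREATE RETENTION POLICY "one_month" ON "opentsdb" DURATION 30d REPLICATION 1 DEFAULT'
```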
Add this section to your netdata.conf file, replacing $INFLUXDB with a more relevant IP address:
[backend]
    host tags = $TAG
    enabled = yes
    data source = average
    type = opentsdb
    destination = tcp:$INFLUXDB:$PORT
    prefix = $PREFIX
    hostname = $HOSTNAME
    update every = 10
    buffer on failures = 10
    timeout ms = 20000
    # send names instead of ids = yes
    # send charts matching = *
Over on your InfluxDB server, make a database for all this data to end up in, and make sure you’ve got an OpenTSDB listening service enabled in your InfluxDB config file:
[[opentsdb]]
    enabled = true
    bind-address = ":$PORT"
    database = "opentsdb"
Once you’ve restarted both services, netdata should be happily filling your InfluxDB server with data. You can grab an example Grafana dashboard in JSON format here – just be sure to replace $PREFIX with the prefix you set in the netdata config file, and $HOSTNAME with the hostname. It should start showing data immediately.
This post has been superseded by an updated version, available here.
After bashing my head against the metaphorical wall for a week or so, I have finally managed to install Linux on a 27” Retina iMac with Skylake. This is what I’ve learned, and it might help you if you’re trying to do the same thing.
The usual procedure applies here, if you’ve installed Linux on an Intel Mac before. This method doesn’t require you to install any additional software like rEFInd, although you certainly can if you want to. It might be easier if you plan on triple-booting with Windows as well.1
Bottom line, this is going to work best with Linux kernel 4.6 or later. This is due to both Skylake support that exists in that kernel, and the AMDGPU open-source driver, which works very nicely with the R9 graphics cards that retina iMacs ship with. The easiest way to accomplish this is to use a distribution that’s already shipping with kernel 4.6 or greater. I’ve had success with Arch and Fedora with the following:
Hold down the Option key when booting the Mac, and select your flash drive.
Edit the GRUB entry for the live installation environment with the following kernel parameters:
irqpoll radeon.modeset=1 no_timer_check
You may notice that Ctrl-X doesn’t work in GRUB. I don’t know why that is, and would love to know the reason. Press F10 to boot from the GRUB editing screen - some distributions mention F10 in their instructions, and some don’t. It should work universally, though.
Hold down the Option key when booting to pick from your available partitions.
The following notes are especially useful for people who want to install Ubuntu – but the basic concept is the same for any distribution you like that doesn’t yet come with kernel 4.6 or newer. Basically, you’re doing the same thing as above, except with some different kernel parameters to get the OS to boot. Once the OS is installed, you’ll be installing kernel 4.6 or greater, and then using the parameters from step 3 above.
Edit the GRUB entry with parameters acpi=off nomodeset when booting from the installation ISO. These are bare-bones parameters that only expose one CPU core and deliver low performance, but this isn’t an issue for the installation process. Full performance will be restored later.
After installation and rebooting, install kernel 4.6. Most distributions provide some way of obtaining mainline Linux kernels that are pre-packaged for the distro’s package manager. For example, you can grab 4.6.3 (the newest kernel at time of writing) from Ubuntu’s mainline kernel PPA and install it with apt. Once it’s installed, edit the GRUB config with those parameters from step 3 above, and you should be all set.
Unity won’t work without the GPU drivers installed. This means that the normal Ubuntu installer ISO will not work. Point blank. Don’t bother. Unity uses Compiz as its rendering engine, and it won’t load unless it’s happy with the GPU driver situation. This means the installer itself won’t load either. Good job, guys.
To install Ubuntu, grab the Xubuntu installer. (Any desktop environment that doesn’t use Compiz by default will work.) After you’ve installed the OS, feel free to install the DE of your choice.
As of today (2016-07-11), this information should be correct. Future kernel versions will hopefully remove the need for some of these settings. Until that time, good luck!
no_timer_check was the secret sauce that got the iMac booting with all 4 CPU cores. Using nolapic as a kernel flag also boots successfully, but with only 1 core visible to the OS.
The irqpoll option is necessary due to some observed keyboard weirdness. Finally, using radeon.modeset=1 allows the internal DisplayPort system that drives the LCD to negotiate display resolution and other such details. Setting it to 0 is useful for troubleshooting purposes, and will force a lower resolution.
In retrospect, I should’ve tried to get more pictures of the outside of the train. The platforms were a little bit crowded to try to get good clear shots, though. More pictures are on my Flickr for your perusal.
This site1 uses Jekyll to generate static pages, which I manage using GitLab. I was using a hacked-together local Git hook to regenerate the site and copy the files to my webserver’s root directory after each commit, but I wanted a solution that both worked no matter where I was committing and pushing changes from and didn’t involve PHP.2
The obvious solution is a Web hook. I found a couple of PHP scripts that probably would have worked, but I really wasn’t in the mood to klutz around with my kinda-working PHP configuration. I then found git-fish, a Node.js web hook listener. Luckily, my quick-and-dirty bash script still works fine. Here’s a blank example you can use:3
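Something along these lines – the paths, branch, and build steps here are placeholders, so adjust them for your own setup:

```shell
#!/usr/bin/env bash
# Rebuild the site and publish it whenever git-fish fires.
# All paths and the branch name are placeholders — adjust for your setup.
set -euo pipefail

REPO_DIR="$HOME/site"        # local clone of the repository
WEB_ROOT="/var/www/html"     # webserver document root

cd "$REPO_DIR"
git pull origin master                    # grab the freshly-pushed commits
bundle exec jekyll build                  # regenerate the static site into _site/
rsync -a --delete _site/ "$WEB_ROOT"/     # sync the result into the webroot
```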
Once you’ve done this, configure git-fish to listen on a certain port (and make sure that port’s open on the firewall), have it call the script when it receives an event, and you should be done!