I eat a lot of junk food. Here are my thoughts on some new things I tried recently.
If you’re unfamiliar with what I consider to be a uniquely Mid-Atlantic/Pennsylvanian phenomenon (namely, gas station convenience stores having half-decent delis inside serving actual food), here’s the rundown:
- Order food from a touchscreen kiosk
- Wait a bit after you’ve paid
- Enjoy a reasonably tasty sandwich, salad, or hot dog that hasn’t spent 6 hours on a roller grill
The ordering process is where things started getting a bit funky with these sliders. I wanted cheese, and the touchscreen offered a few different options (both white and yellow American, cheddar, provolone). I picked cheddar, and after I paid, the folks at the counter asked me whether I wanted shredded or sliced cheddar. It seemed like an odd question, and I’m going to chalk it up to unfamiliarity with a new menu item.
The sliders come in the same box they use for hot dogs.
This amused me more than the sliders themselves did.
How do they taste? Fine. They’re okay. The beef patty reminds me uncomfortably of the hamburgers Sodexo serves in college dorm cafeterias. It’s as generic a burger as you could ever hope for. One of the available condiments is a horseradish mayonnaise, which is pretty good. I found this more interesting than the sliders themselves.
Just go to White Castle. They’re also open 24/7.
Hershey’s Candy Corn Creme Bar
Full disclosure: I like candy corn. Many people don’t, and that’s fine! If you don’t like candy corn, though, you’re not gonna like this candy bar. You might not like it even if you do like candy corn. I couldn’t finish this. Whatever the “candy corn creme” is made of, it’s cloying to the point of painfulness. I had one square of it, then another to confirm my findings… that was enough. The smell hits you first – crack the wrapper open and it smells strongly of sugar, a painful wave of saccharine artificiality.
the internet is definitely the new library of alexandria
we don’t know that we’re the librarians. shelves are burning down every day, and nobody knows why. sometimes a shelf we didn’t even know we had burns down.
I was always under the impression that the constant onward march of technology would lead to things getting LESS expensive.
Washing the dishes isn’t fun. It especially sucks when they’ve just had tomato sauce in them. The microwave’s right there, and it’s much faster than waiting for a big ol’ pot of water to boil. Here’s how I cook pasta for one person.
The cook times on the back of the pasta box are dead wrong. I like it al dente, and if I followed those times I’d have sad and limp noodles.
Before I go any further, you really should go read John Siracusa’s magnum opus on how to cook pasta. If you’re cooking for 2 or more people, just follow his guidelines instead. I’ve cribbed from this liberally for the microwave method I use.
No, really, go read it. This’ll still be here when you’re done.
In Which I Contradict Myself
You can buy plastic gizmos at the store, or see them advertised on TV, that claim to be perfect for cooking pasta in the microwave. They probably work, but…there’s no secret to them. I use a large Pyrex measuring cup and it works fine. The key part of those devices has nothing to do with the device itself: it’s just that they tell you to cook the pasta for 2 to 3 minutes longer than the box says to. We’ve already established that the box times are bullshit, but since I usually knock 2 to 3 minutes off that time to get a good estimate of how long to actually cook the pasta for…
…just cook the pasta in the microwave for as long as it says on the box. Really.
Pour some pasta into a microwave-safe vessel with plenty of extra room at the top. I use a 4-cup Pyrex measuring cup. Salt the living daylights out of your pasta, then add cold water from the tap. There should be enough water to cover the pasta after it expands, and ideally your vessel is large enough that boiling water won’t escape over the sides. Then, throw it in the microwave and set the timer to 75% of the box’s recommended cook time. (If it’s a range of times, set it to the lower option). I like to do this so there’s a chance to check on the pasta before it goes beyond the point of no return. Most of the time, the remaining 25% will get you right there.
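If you’d rather let the computer do the 75% math, here’s a throwaway snippet (microwave_check_time is made up for illustration, obviously not a real utility):

```shell
# Toy helper: given the box's cook time in minutes, print the 75%
# "go check on it" time as M:SS.
microwave_check_time() {
  local box_min="$1"
  local secs=$(( box_min * 60 * 3 / 4 ))
  printf '%d:%02d\n' $(( secs / 60 )) $(( secs % 60 ))
}

microwave_check_time 11   # a box that says 11 minutes -> check at 8:15
```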
Don’t Waste That Pasta Water
Warm bowls are really nice. It’s a little thing, but every little thing helps. This is another tip I learned from Siracusa’s piece: put the pasta bowl(s) under your strainer, and let that hot pasta water sit in the bowl until it’s ready to accept the pasta. You may find this easier if you get a colander with arms that sits on the edge of the sink so you don’t have to hold it – I certainly do.

Once you’ve strained the pasta, pour it back into your warm just-microwaved vessel and add whatever sauce you’re going to add. Since I assume you’re also cooking for one, I’m not going to go on about the superiority of homemade pasta sauce here. I’ve never made it. Just crack that jar open and pour some in, mix it up well, then empty your serving/eating bowl (careful, it’s hot and full of pasta water!) and refill it with pasta and sauce. Grate some Parmesan cheese on top while you’re at it. If you’re clever, you had some garlic bread cooking away in the toaster oven while you were doing all this.
A uniquely 2018 sense of serendipity occurs when you’re listening to an old Spotify playlist and a song that disappeared from streaming months ago suddenly shows back up.
Why do I feel pleased by this, and not annoyed like I did when it disappeared in the first place?
A Manifesto On And For The Internet
The Internet kinda sucks right now. It’s always sucked, in different ways, but the current era we live in feels particularly galling. This piece was inspired by two things that happened on the 29th of May:
- Firstly, an episode of Chris Hayes’ podcast, Why Is This Happening?, came out with a fascinating discussion about why the Internet currently sucks, featuring Tim Wu. It’s excellent listening, and I also can’t recommend Wu’s books enough.
- Secondly, I had a conversation with a friend about blogs, of all things, and the current state of the Internet. During the course of it, I pointed out that it’s probably never been easier to own a little corner of the Internet and put whatever you want on there, and that “it’s yours for the taking”. I’m a sucker for a well-used idiom, but I’ve been mulling it over, and it’s not quite as accurate as I’d like.
I’ll come back for the idiom later. Until then, please enjoy this long rambling collection of my thoughts on the matter.
Algorithms Rule Everything Around Us (If You Let Them)
The World Wide Web as we know it was originally conceived as a participatory medium. For many, this promise wasn’t fully realised until the advent of social media platforms. These platforms are also the source of the Internet’s current structural flaws and foundational problems. At the same time, those very same platforms enable much of the good that comes out of this stupid network of computers. We’d probably be better off as a society if Facebook were broken up into its constituent parts, but we’d be worse off if the functionality were lost forever.
The rise of algorithmic timelines is a menace. On some sites, it just can’t be avoided – Instagram is very unlikely to bring back the chronologically-ordered feed of posts. For your general web browsing, however, it is possible to escape the algorithmic feed of whatever Facebook wants you to read: just go visit websites. Foster Kamer wrote an excellent piece last year about this very topic.
Internet users of a certain vintage, your humble author included, remember the glory days of Google Reader fondly. We mourn its passing, but it’s still possible to get that experience and create your own newsfeed from just the websites you’re interested in. RSS readers are still alive and well for computers, phones, and tablets. I’ve been using an open-source project called CommaFeed to bring back that Reader feeling, and if you’ve got a bit of server space, locally or out on the Internet, it’s a great service.
This Too Shall Pass
This isn’t the first time someone’s written this kind of thought. Marco Arment said it better all the way back in 2011:
If you care about your online presence, you must own it.
Everything on the Internet feels permanent, right until the moment it doesn’t. It’s not the solution for everyone, but if you really care about the things you create or how you appear, hosting your own website is the best way to preserve it. Web services come and go. Medium didn’t exist 5 years ago, and in 5 years it may not exist anymore. The Internet Archive, blessed and competent as they are, is not a valid backup or recovery strategy for catastrophic platform existence failure.
The Point I’m Trying To Get To Here
A better, kinder, and more open Internet is possible. It’s within our grasp – but we have to make it. This is the same guiding principle behind free/open-source software development: everyone’s able to contribute and make improvements according to their needs. Wikipedia would never work without these principles of openness and creativity, and its success is a testament to the fact that it can work.
With a few exceptions, blogs started to die out around the time Facebook really began to gain in popularity. The primary metric by which a website’s success is judged has become the length of time spent on the site, even as the content itself keeps getting shorter. Catchy videos took the place of long text diatribes, and gargantuan multi-hundred-megabyte web pages became the norm. Website bloat is a known and serious issue, and nearly all of it can be attributed to useless bullshit.
You might assume, at this point, that I’m feeling pretty bleak and hopeless about the future of the Web. This is a safe assumption to make, but I’m not fully despondent yet. There is still good out there, even if it’s a little harder to find than it perhaps once was. Even better, you can make meaningful changes to your Internet experience and start rediscovering the joy in an afternoon or two.
Five steps to a better Internet:
- Run a good ad blocker. You might even notice that your computer’s running better without the weight of all those ads.
- Support people who make things you like. Patreon, flaws and all, is probably one of the best things to have happened for people who create things online. Some of the best journalism out there is behind a paywall. In the current political climate, supporting a free press has never been more important.
- Be intentional. Visit websites and services you find interesting, and make a point to consciously go there instead of being led somewhere by the whims of an algorithm-based recommendation. While you’re at it, stop reading comment sections. With very rare exceptions, a public forum for people to tack their thoughts onto the end of other people’s content will inevitably result in unpleasantness.
- Consider renting a server. Virtual servers can be had for $5 a month, and although they’re not powerful, that’s more than enough to host a website like this one. A small server is an excellent starting point, but once you’ve discovered the possibilities opened up by having your own server, you may end up wanting an upgrade to something more powerful. This isn’t a prerequisite, but you may find it useful if you decide to…
- Make something! Record a song, learn to code, take some photos, write a shitty blog like this one. If you feel like something’s missing from the Internet, you could be the one to put it there.
No great work or human endeavour was ever accomplished without at least a little effort. The computing revolution offered hope that the level of effort required would be drastically reduced and made accessible to everyone. The dream is murky, but it’s still out there. I wrote this pompous-ass manifesto because I felt inspired to finally do something meaningful with this blog I set up. My hope for myself is that I’ll keep writing and keep sharing it.
In conclusion, here’s that idiom from earlier that I’ve been thinking about:
The Internet is yours for the taking, but it is also yours for the making.
You may recall (or have searched for) this old post, where I documented the process needed to get Linux booting on a Skylake-based iMac. Since that post, a few things have changed, and it’s about time for an updated post now that I’ve had a few minutes to try some things out.
Luckily, unlike last time, most recent versions of popular Linux distributions come with shiny new kernels. I’ve tested this with Fedora 27 and Ubuntu 17.10 – I fully expect Arch, or any distro with a default kernel of 4.13 or greater, to work similarly. Here’s the process:
Optional: Use Disk Utility to shrink the Mac OS partition and make some unallocated free space for Linux. You don’t have to do this, but it may come in handy if Apple releases a firmware update. I’d also recommend leaving a small Mac OS partition if you’re using Ubuntu – see the footnote below.

1. Prepare a USB flash drive using the installation ISO of your choice.
2. Hold down the Option key while booting up your Mac, and select the flash drive. It may show up as “EFI Boot”. Press e at the installer’s GRUB menu to edit the boot parameters. You’ll need to add this to the end of the line that usually ends with quiet splash or similar:
acpi=off nomodeset
3. Press F10 to continue booting up. Only one CPU core will be visible to the system as a result of these kernel flags – this is temporary, however. Full performance will be restored later.
4. Continue the installation process – if you’re using the whole SSD and not planning to dual-boot Mac OS, feel free to let the Linux installer do what it wants with the whole drive. If you are dual-booting, you may need to configure the partition layout manually. Be sure to make a small (512MB or so) partition for /boot, where the bootloader will be installed.
5. After installation completes, reboot into the new Linux installation. At the GRUB screen, press e and add the following kernel flags:
irqpoll no_timer_check nomodeset
6. Press F10 to continue booting.
7. After installation, edit the GRUB default configuration to include the kernel flags from step 5 by default. This varies based on distribution – the defaults are typically located in /etc/default/grub. Do note that the Fedora installer includes any kernel flags set in the installation ISO by default, so you’ll need to remove them from the file.
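On Ubuntu-flavored systems, the end result looks something like this (the exact contents of your file will differ, and Fedora’s grub.cfg lives at a different path):

```shell
# /etc/default/grub (excerpt) – append the flags to the default command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash irqpoll no_timer_check nomodeset"

# Then regenerate the GRUB config:
#   sudo update-grub                                 # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg      # Fedora
```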
In my testing, this setup persists and remains stable after installation and updates. Fedora 27, for example, was updated from kernel 4.13.9 to 4.14.14 as part of the standard system update process, and the kernel flags were retained and continue to work.
I can’t deal with iTunes anymore. It’s finally become too terrible to use on a regular basis. This, along with the impending failure of the hard drive in my beloved iPod classic, led to my purchase of a new MP3 player. My choice was the Sony Walkman NW-A35, which has Bluetooth and plays FLACs and accepts a microSD card – it does all the things you’d want an MP3 player to do nowadays. It also has a headphone jack.
My two gripes with it:
- It uses a weird proprietary cable for charging and data transfer.
- The software utilities provided for transferring files and playlists to it are, not to put too fine a point on it, fucking useless.
One of these problems is easily fixed, though, since the internal storage and the SD card both show up as normal mass storage devices when you plug it into a computer. Here’s where the real fun begins: my desktop music player/library of choice post-iTunes is Clementine, an excellent open-source piece of software that does everything I want it to do. The playlist files it generates, however, are based on absolute paths to my music library. Even more frustratingly, for some unknown reason it refuses to actually copy files to the Walkman on my Mac. Instead of troubleshooting this issue, I wrote a quick Bash script that copies files to an arbitrary destination while preserving the artist>album folder hierarchy, then creates a new M3U playlist file with relative paths suitable for use in my MP3 player. I suspect it’ll work for other MP3 players too, although I don’t have any to test.
You can find the script I wrote, m3umangle, at this GitHub repository. Pull requests are accepted and appreciated.
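For the curious, the core of the idea fits in a few lines of Bash. This is a simplified sketch, not the actual m3umangle code – the function name and interface here are made up:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Copy every track listed in an absolute-path M3U playlist into
# dest/artist/album/, then write a playlist next to them that uses
# relative paths instead. (mangle_playlist is a made-up name for
# illustration, not m3umangle's real interface.)
mangle_playlist() {
  local src_m3u="$1" dest="$2"
  local out_m3u="$dest/$(basename "$src_m3u")"
  : > "$out_m3u"
  local track album artist rel
  while IFS= read -r track; do
    # Skip blank lines and #EXTM3U/#EXTINF metadata lines
    [[ -z "$track" || "$track" == \#* ]] && continue
    album="$(basename "$(dirname "$track")")"
    artist="$(basename "$(dirname "$(dirname "$track")")")"
    rel="$artist/$album/$(basename "$track")"
    mkdir -p "$dest/$artist/$album"
    cp "$track" "$dest/$rel"
    printf '%s\n' "$rel" >> "$out_m3u"
  done < "$src_m3u"
}
```

Point something like this at a Clementine-exported playlist and the Walkman’s mount point, and the relative paths in the resulting M3U resolve correctly on the player.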
If you’re using the excellent netdata for server monitoring and want to stick the data in a database for long-term storage and pretty graphs, here’s a setup that generally works. If you’re feeling fancy, you can set up retention policies in InfluxDB – this post assumes you’ve already got a working InfluxDB + Grafana stack set up somewhere. Kudos to this blog post, which is referenced by the netdata documentation.
Add this section to your netdata.conf file, replacing $INFLUXDB with a more relevant IP address:

[backend]
host tags = $TAG
enabled = yes
data source = average
type = opentsdb
destination = tcp:$INFLUXDB:$PORT
prefix = $PREFIX
hostname = $HOSTNAME
update every = 10
buffer on failures = 10
timeout ms = 20000
# send names instead of ids = yes
# send charts matching = *
Over on your InfluxDB server, make a database for all this data to end up in, and make sure you’ve got an OpenTSDB listening service enabled in influxdb.conf:

[[opentsdb]]
enabled = true
bind-address = ":$PORT"
database = "opentsdb"
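Creating the database itself is a one-liner (this assumes the InfluxDB 1.x influx CLI; the name just has to match the one in your config):

```shell
# Create the target database on the InfluxDB server
influx -execute 'CREATE DATABASE opentsdb'
```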
Once you’ve restarted both services, netdata should be happily filling your InfluxDB server with data. You can grab an example Grafana dashboard in JSON format here – just be sure to replace $PREFIX with the prefix you set in the netdata config file, and $HOSTNAME with the hostname. It should start showing data immediately.
This post has been superseded by an updated version, available here.
After bashing my head against the metaphorical wall for a week or so, I have finally managed to install Linux on a 27” Retina iMac with Skylake. This is what I’ve learned, and it might help you if you’re trying to do the same thing.
The usual procedure applies here, if you’ve installed Linux on an Intel Mac before. This method doesn’t require you to install any additional software like rEFInd, although you certainly can if you want to. It might be easier if you plan on triple-booting with Windows as well.
Bottom line, this is going to work best with Linux kernel 4.6 or later. This is due to both Skylake support that exists in that kernel, and the AMDGPU open-source driver, which works very nicely with the R9 graphics cards that retina iMacs ship with. The easiest way to accomplish this is to use a distribution that’s already shipping with kernel 4.6 or greater. I’ve had success with Arch and Fedora with the following:
- Prepare a USB flash drive using the distribution’s live ISO.
- Hold down the Option key when booting the Mac, and select your flash drive.
- Edit the GRUB entry for the live installation environment with the following kernel parameters:
irqpoll radeon.modeset=1 no_timer_check
You may notice that Ctrl-X doesn’t work in GRUB. I don’t know why that is, and would love to know the reason. Press F10 to boot from the GRUB editing screen - some distributions mention F10 in their instructions, and some don’t. It should work universally, though.
- Install Linux as usual to any empty non-partitioned space on your hard drive. It’s okay (and required, if you’re not using rEFInd) to make a small EFI partition for Linux to install the bootloader to. In the future, when switching between operating systems, you can hold down the Option key when booting to pick from your available partitions.
- Reboot from the installer into your fresh Linux installation. You’ll need to edit the GRUB entry using the parameters from step 3 again (for the last time, I swear!)
- Once you’ve booted into Linux, edit your GRUB configuration files to include the parameters from step 3 by default. Regenerate the GRUB files using your distro’s tools, and reboot one last time to make sure it works. If it doesn’t work, you can always edit the entry at boot time again to get back into the system.
Distributions with old kernels
The following notes are especially useful for people who want to install Ubuntu – but the basic concept is the same for any distribution you like that doesn’t yet come with kernel 4.6 or newer. Basically, you’re doing the same thing as above, except with some different kernel parameters to get the OS to boot. Once the OS is installed, you’ll be installing kernel 4.6 or greater, and then using the parameters from step 3 above.
Edit the GRUB entry with parameters acpi=off nomodeset when booting from the installation ISO. These are bare-bones parameters that only show 1 core and will deliver low performance, but this isn’t an issue for the installation process. Full performance will be restored later.
After installation and rebooting, install kernel 4.6. Most distributions provide some way of obtaining mainline Linux kernels that are pre-packaged for the distro’s package manager. For example, you can grab 4.6.3 (the newest kernel at time of writing) from Ubuntu’s mainline kernel PPA and install it with apt. Once it’s installed, edit the GRUB config with those parameters from step 3 above, and you should be all set.
Screw you, Canonical
Unity won’t work without the GPU drivers installed. This means that the normal Ubuntu installer ISO will not work. Point blank. Don’t bother. Unity uses Compiz as its rendering engine, and it won’t load unless it’s happy with the GPU driver situation. This means the installer itself won’t load either. Good job, guys.
To install Ubuntu, grab the Xubuntu installer. (Any desktop environment that doesn’t use Compiz by default will work.) After you’ve installed the OS, feel free to install the DE of your choice.
As of today (2016-07-11), this information should be correct. Future kernel versions will hopefully remove the need for some of these settings. Until that time, good luck!