Home Network

I did some home networking! Four years ago. I installed some networking equipment in a closet, ran a bunch of Ethernet cable, and ceiling-mounted a Wi-Fi access point. The closet box looks like this:

Closet networking cabinet with the door closed. A photo of a large structured media enclosure filled with various pieces of networking equipment.

And here’s an annotated version:

Closet networking cabinet with the door open and contents labelled.

But let’s back up a bit…

Why???

Two primary reasons.

The first is that our router was in our living room and I wanted it to be less conspicuous.

The second is that I wanted to run Ethernet cable because I don’t like Wi-Fi. Before this house I lived in semi-dense apartments and condos for fifteen years and sometimes had Wi-Fi problems. Nothing terrible, just an occasional annoyance. Things like Wi-Fi speakers or a Nest camera dropping offline, presumably due to lots of access points using the same limited spectrum (using a phone app to view Wi-Fi networks would show a handful of networks with strong signals).

Also, it’s maybe silly but I still have a Linux desktop computer and it’s nice to not have to spend any effort making Wi-Fi work in Linux.

What About Mesh Networks or Range Extenders?

Mesh networks are all the rage these days. I don’t have experience with them. I suspect they work well. They do use additional bandwidth, though, because data is transmitted over multiple hops. And yeah, hopefully transmitted at lower power and that should reduce interference. But there’s still overall more data transmission.

Basically nothing beats a physical Ethernet connection. And any traffic you can remove from the wireless network improves the experience for the remaining wireless devices because there’s less contention.

My house has a fairly accessible crawlspace and attic, so running cable isn’t terrible. And I’m planning on living here for a while so it seemed worth the effort.

So What Exactly Did I Do?

The picture above shows a structured media enclosure (or structured media center). It’s a big box that you can mount stuff in. This one is sized to fit between studs on 16″ centers. I cut a big rectangular hole in the drywall in a downstairs closet and screwed it to the studs. I had an electrician wire up two duplex power receptacles. The cabinet contains the fiber to Ethernet media converter (called an optical network terminal), our router, an Ethernet switch, a SmartThings hub, and an Ethernet punch-down block.

I ran our incoming fiber cable to the cabinet and I ran four Cat 6A Ethernet cables from the cabinet to the office and four to the upstairs (TV in bonus room, an access point, and two spares). I luckily found an existing conduit running between the attic and the crawlspace that only had a few coax cables in it. I was able to reuse this for Ethernet cable. I loosely suspended the cable with velcro ties nailed to studs. I drilled holes up through the floor bottom plate or down through the ceiling top plate into the specific stud cavity where I wanted the Ethernet jack. I used Great Stuff fireblock foam to fill the holes after running the cable.

There Were So Many Little Decisions!

Making these decisions took time, research, and planning.

The Structured Media Enclosure Itself

The two big name brands are Leviton and Legrand. There are other brands but it’s harder to find info about them online. I went with Leviton and I don’t remember why. It’s fine. I suspect Legrand is fine, too. Even with Leviton it was hard for me to get a good idea in my head about how well it would work.

One worry I had was about installing the box when we already had drywall in place. The box comes with tabs that stick out on the sides and you’re supposed to screw them straight into studs before installing drywall (so the tabs go behind the drywall). But eventually I found one of their YouTube videos that demonstrates scoring the tabs with a knife and snapping them off, then screwing into the studs through the sides of the box. This allows the box to sit at the right depth in the wall. So that’s great. Though I wish this info was easier to find. Pet peeve: YouTube how-to videos that could have been a text article with images.

Another worry I had was how to attach things in the box. Both Leviton and Legrand sell mounting brackets and in both cases the mounting brackets are quite expensive for what they are. Like $20 or $30 each, for a little plastic or metal bracket. They’re pretty basic so their profit margin is likely high for these things. It’s kinda justified in that their sales volume is probably low and they have to recoup their development cost and overhead… but it’s still a hard pill to swallow. Here are Leviton’s mounting options.

I got kind of a gigantic box—42″ tall. It seemed bigger than I needed but also I had room for it and I thought the extra space might come in handy. This was a good choice! I found it hard to arrange things compactly. Partially because I tried hard to keep it tidy and tried to keep power cables separated from Ethernet. But also it was just hard to mount things side by side. The flexibility of Legrand’s universal mounting plate might be nice here. But I still have a decent amount of space and I could cram more stuff in there if I wanted.

Putting the SmartThings hub in the closet isn’t great for reception but it seems to mostly work fine. Maybe because I have a Z-Wave range extender/repeater plugged into an outlet outside the closet (Aeotec Range Extender 7, possibly discontinued). I did at least get the enclosure that’s made out of plastic instead of metal.

Ethernet Cable

There’s like a billion different kinds of Ethernet cables. I went with Cat 6A, which seems to offer the most bandwidth while still being a widespread standard. I went with plain old unshielded twisted pair (UTP) because foil shielding seemed unnecessary for my use case. I wanted to use pre-terminated cables because terminating is a pain and I was worried it would be less reliable if I did it myself. But I ended up buying unterminated cable on a spool because the longest terminated Cat 6A cable I could find was 100ft and I estimated that I needed a cable longer than that for the TV. Plus it was hard to estimate accurately and therefore hard to know what lengths to buy. The spool was solid core, which is what I wanted. I suspect most spools are solid core, since spools are probably mostly used for permanent installations. My worries were unnecessary—terminating wasn’t too bad and I think I was able to do a good job. And pulling the connectors+boots of pre-terminated cable through my conduit would have been harder.

I bought “riser-rated” (“CMR”) cable, but any cable rated for in-wall use is fine: CM, CL, CMR, or CMP. I think this page gives a good enough explanation while still being brief.

I chose a blue sheath because in my mind blue gives the biggest hint that “this is data” or “this is Ethernet.” But maybe that’s just me.

Connectors, Faceplates, and Boxes

And that led to the decision to use a punch-down block and punch-down wall connectors rather than terminating with an RJ45 connector and plugging directly into a switch or into a pass-through wall connector. It seemed like the more professional route and I’m happy with these decisions. Here’s what the punch-down block looks like from the front:

A photo of a Cat 6 RJ-45 punch-down block mounted in a structured media enclosure with eight blue ethernet cables connected.

To access the back, where the cables attach, you unhook the punch-down block from the structured media enclosure and turn it over.

For wall connections I went with keystone faceplates. The alternatives are Decora/decorator faceplates or faceplates with fixed Ethernet ports, but keystone seems like the most common, I like the way they look, and I like the ability to mix and match connections. It was hard to find keystone jacks marketed as Cat 6A. And actually I think my punch-down block is only marketed as Cat 6. I think the “A” in “6A” probably does make a difference for the cable in long cable runs, but in practice I suspect it matters much less for the jacks. Also even Cat 6 is overkill for my needs. But, you know, if you’re going through the effort to put cable in walls then you should use the best cable.

A photo of a wall plate with two keystone slots, each with a blue RJ-45 jack for connecting an ethernet cable. A photo of a wall plate behind a TV wall-mount bracket with three keystone slots: One blue RJ-45 jack for connecting an ethernet cable, one HDMI jack, and one coax jack. A photo of a wall plate behind a stereo receiver with three keystone slots: One blue RJ-45 jack for connecting an ethernet cable, one HDMI jack, and one coax jack.

The second photo is behind the TV and the third photo is behind the stereo receiver below the TV. The HDMI connection runs between them. The coax behind the TV connects to an attic-mounted HDTV antenna. The coax behind the receiver connects to an attic-mounted FM radio antenna.

I used low voltage mounting brackets on interior walls. I used a normal electrical box on the one exterior wall to give some separation from the insulation.

Router

For networking gear I chose to use my own router rather than the one from my ISP because I’m wary of router firmware. I worry that router manufacturers aren’t careful enough and their firmware can be buggy and hackable. We have Google Fiber and I’d guess that Google’s router firmware is above average, but I still thought I could do better. Ubiquiti and Meraki (owned by Cisco) are companies that give me an impression of a high level of security. Maybe Aruba, too. I went with Ubiquiti because it was the personal and professional choice of some smart former coworkers.

Narrowing down which Ubiquiti devices to buy took some time. I went with a UniFi Security Gateway. Configuration takes more work than with more mainstream consumer-focused routers. Those routers typically host the admin website directly on the router. With UniFi the management website is a separate thing. Ubiquiti sells a separate hardware device called a Cloud Key that’s basically a mini computer that hosts the admin website, but that’s one more thing to cram into the structured media enclosure and also it’s $179. Another option is to use a hosted solution, but it seems ridiculous for me to pay $15 or $30 per month for my home network. And using a hosted solution is inherently less secure. Instead I opted to run the configuration service on my own hardware. At first I ran it natively on my Apple laptop but I’ve since switched to running it in a Docker container on my Linux desktop. I wrote up the steps on Super User if you want more info.
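
If you want to try the Docker route, the gist is a single long-running container with a persistent data volume. Here’s a rough sketch written as a Salt state (Salt is the configuration-as-code tool I use for my servers). The image, ports, and data path are assumptions for illustration (jacobalberty/unifi is a community-maintained image, 8443 is the controller web UI, and 8080 is the device inform port by default), so treat it as a starting point rather than my exact setup:

# Sketch: UniFi Network controller as a Docker container, managed by Salt.
# The image, volume path, and ports are illustrative assumptions.
unifi-controller:
  docker_container.running:
    - image: jacobalberty/unifi:latest
    - binds:
      - /srv/unifi:/unifi:rw        # persistent controller data
    - port_bindings:
      - 8443:8443                   # controller web UI (HTTPS)
      - 8080:8080                   # device inform/adoption
    - restart_policy: always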

It’s been a few years since I did all this work and nowadays there might be better options. For example I think Ubiquiti sells some gateways that include the admin website. The Dream Router and Dream Machine both look like good options if you’re not trying to mount it in a structured media enclosure. And the Dream Machine Pro and Dream Machine Special Edition might be good if you’re mounting in a rack. More info here and here.

Network Configuration

UniFi allows configuration of multiple logical networks, and UniFi access points can broadcast multiple SSIDs simultaneously. I take advantage of this by having one network where devices are allowed to talk to each other (e.g. our computers and printer) and another, “untrusted” network where devices are not permitted to talk to each other. I love this. I relegate things like my thermostat and car to this network. They don’t need to talk to anything else on my network and having them isolated means they’re not an attack vector into my personal computers.

Access Points

A photo of the ceiling of a residential house showing a Ubiquiti Unifi nanoHD Wi-Fi access point attached to the ceiling.

Another decision was how many access points and where to put them. This depends entirely on the layout of the house and my solution isn’t great. Our house has a big open room in the middle with a hall/balcony/catwalk upstairs that’s open to the downstairs. I ended up putting one access point in the middle of this hall and it’s probably sufficient to cover the whole house. I also put an access point in the office with the intent of improving reception for the nearby rooms, which have some walls between them and the upstairs hall access point. But this second access point is probably excessive.

A note about access point placement: It’s better to put them in the middleish rather than at the extreme ends because signal degrades quickly over distance. See this Ars Technica article for more.

I wish Ubiquiti had a smaller product lineup. When I purchased my APs years ago there were thirteen different access points to choose from! That’s not counting mesh or APs that mount to a 1 gang wall box. It was hard to decide between them. It’s down to eight now, apparently, but I still have a hard time believing there are justifiable differences between all of them.

Mounting the AP to the ceiling was hard and I’m not completely satisfied with the solution. See “Mistakes/Learnings” below for more detail.

Switch

UniFi access points are powered with PoE (Power over Ethernet). That’s kind of nice and easy, but it meant I needed a switch that is capable of supplying power over ethernet. I bought a non-Ubiquiti, unmanaged PoE switch. It’s fine but I wish I had spent more money and gotten a single, bigger, managed PoE switch from Ubiquiti. See “Mistakes/Learnings” below for why.

Mistakes/Learnings

  • I should have bought a single, bigger, managed, PoE switch for the structured media enclosure. Or maybe a smaller PoE switch but a bigger managed switch? Using a managed switch would allow putting hard-wired Ethernet devices on the untrusted network. This is nice for IoT (Internet of Things) devices that might be poorly written (e.g. a TV) and a potential attack vector for people to gain access to your home network. And a single switch would have made the enclosure less cluttered.
  • I maybe should have used a rectangular wall box for the ceiling-mounted access point. I used a round old-work ceiling box. My impression is round boxes are the standard for electrical (lights) in ceilings. It seemed like a natural choice to me since access points are round. Putting a rectangular box in the ceiling felt wrong (though I’ve since realized that rectangular boxes might be more common than round boxes for smoke detectors). But for whatever reason it seems like Ubiquiti doesn’t expect round boxes to be used. They sell an access point mounting bracket but it doesn’t support attaching to a round ceiling box. Another problem is that the nanoHD AP is so small that you can see the blue edges of the ceiling box. I don’t remember exactly how I solved this problem, but I think it involved attaching the AP’s included mounting bracket to a blank cover for the round box and cutting a hole in both to be able to feed the Ethernet cable through. The cleanest fix for this problem is probably to replace the round box with a small rectangular box. That’s a significant amount of effort since it requires drywall repair.
  • I should have bought surge receptacles with USB outlets for the structured media enclosure. I could have avoided a separate power adapter for the Google fiber⇔Ethernet converter.
  • I accidentally cut into the drywall a little bit where a stud was when trying to place a box in the office for Ethernet jacks.
  • I overestimated how much Ethernet cable I would need and also made my lengths too long. Also maybe the fiber cable.
  • I used pestblock Great Stuff foam to seal one or two of the holes I drilled for running cables. I should have used fireblock foam for all of them. I don’t know if fireblock is required by code, but it’s something I worried about later. I mean, it basically doesn’t matter, but if you’re picking one I’d go with fireblock.
  • When cutting holes for old work boxes, make the hole on the small side. It’s easy to make it bigger but hard to make it smaller.

Full Product List

  • Leviton 49605-42P – 42″ Wireless Structured Media Enclosure with Vented Hinged Door, Plastic, White
  • Leviton 49605-42T – Trim Ring Accessory for 42″ Structured Media Center, Plastic, White. This is a fascia that covers the gap between the outside of the enclosure and the drywall.
  • Leviton 49605-GRM – Grommet Accessory Pack for Structured Media Centers, includes (3) 1″ and (2) 2″ rubber grommets. Works with all enclosure knockouts. I think the enclosure doesn’t come with these—they must be purchased separately.
  • Leviton 49605-AFR – Cable Routing Ring with (2) Push Pins. For coiling fiber. This thing is great.
  • Leviton 47605-ACS – J-Box Surge Protective Kit (one duplex blue receptacle). Two of these mounted in the bottom of the enclosure. I had an electrician wire them into an existing circuit.
  • Leviton 47612-DBK – Data Plastic Bracket. These are the main brackets I used for mounting devices. Mostly the mounting holes on the backs of things lined up with the holes in these brackets. I did have to drill holes in one of the brackets for one of the devices. And I had to find some screws and nuts at a local hardware store that were an appropriate size.
  • Leviton 47612-UBK – Universal Shelf Bracket used with Structured Media Center (discontinued). The SmartThings hub is sitting on this. Leviton sells a plastic shelf that could serve the same purpose.
  • Leviton 49605-AST – Saddle Tie Kit with VELCRO®, includes (5) Saddle Ties, 5′ of VELCRO® SoftCinch, Black. I think I used these to strap down power adapters. They’re convenient but look messy.
  • Leviton 47615-NYL – Push-Lock Pins for Structured Media Centers (Bag of 20). Because I broke one of the ones that came with a mounting bracket.
  • Monoprice Entegrade Series 1000FT Cat6A Plus 650MHz UTP Solid, Riser-Rated (CMR), 23AWG, Bulk Bare Copper Ethernet Network Cable, 10G, Blue – Ethernet cable. 1,000 ft was more than I needed, but 500 ft probably would not have been enough. But it did allow me to be generous with my cable lengths in case anyone ever wants to move them, and I still have some cable left over.
  • Monoprice product #7013 – 1-Gang Low Voltage Mounting Bracket
  • Monoprice product #6727 – Wall Plate for Keystone, 2 Hole – White
  • Monoprice product #6729 – Wall Plate for Keystone, 3 Hole – White
  • Monoprice ethernet patch cables – I bought a bunch of these, mostly for the enclosure. Also some white cables for the office because I didn’t have any and they look nice. I bought “Cat6A Ethernet Patch Cable – Snagless RJ45, 550Mhz, STP, Pure Bare Copper Wire, 10G, 26AWG” which appear to be discontinued. STP wasn’t important… I think maybe the only UTP cable they had was “slim” and I wanted to avoid that because it sounds worse.
  • Ubiquiti Unifi Security Gateway (USG) – Discontinued. Maybe the Gateway Lite is the spiritual successor to the USG? I’m not sure how similar they are, or if the Gateway Lite has notches on the back for wall-mounting.
  • Ubiquiti UAP-nanoHD – UniFi nanoHD access point
  • Netgear GS108PP – 8-Port Gigabit Ethernet Unmanaged PoE Switch

New Bike Saddle

I had been using a Selle Italia SLK Gel Flow saddle on my road bike almost since I bought the bike in 2011. And it was fine. No major problems. Reasonably comfortable. Fairly light. It was a gift from a friend and former coworker who wasn’t satisfied with it.

Here’s the old saddle:
Photo of a Selle Italia SLK Gel Flow bike saddle with Vanox rails viewed from a front angle.

But I thought I could do better. Pedaling on a trainer helped me realize that rotating my body forward to crouch low in the drops caused some discomfort and I thought a larger pressure relief channel would help. Also the surface of the old saddle was more grippy than I wanted. And I was willing to spend more for a saddle that was lighter than the old saddle.

I looked at a ton of options online, read reviews, and watched some bike fit videos (like this one).

I eventually settled on the Selle Italia SLR Boost Kit Carbonio Superflow, size S3. It was popular and highly rated. I was worried about buying a saddle before trying it, but at some point I realized that it was very similar to my existing saddle, and the differences all seemed like improvements. That was good enough for me to be willing to risk it so in January I pulled the trigger.

Photo of a Selle Italia SLR Boost Kit Carbonio Superflow bike saddle in size S3 viewed from a front angle.

And it’s great! Better in all the ways I hoped it would be. A bigger improvement in comfort than I was hoping for. Even sitting on it upright is more comfortable—I don’t know why. Maybe the cushion has more give? It’s also a bit less rounded so the contact area might be bigger, which would distribute my weight more evenly over a larger surface area. Anyway, so I took a bit of a chance and it worked out great.

Here are a few comparison photos:
Photos of two Selle Italia bike saddles side by side, viewed from the side. An SLK Gel Flow with Vanox rails (on the bottom) and an SLR Boost Kit Carbonio Superflow size S3 (on the top).

Photo of two Selle Italia bike saddles side by side, viewed from the top. An SLK Gel Flow with Vanox rails (on the left) and an SLR Boost Kit Carbonio Superflow size S3 (on the right).

Photo of two Selle Italia bike saddles side by side, viewed from the back and resting on a wooden stool. An SLK Gel Flow with Vanox rails (on the right) and an SLR Boost Kit Carbonio Superflow size S3 (on the left).

Photo of two Selle Italia bike saddles side by side, viewed from the bottom. An SLK Gel Flow with Vanox rails (on the top) and an SLR Boost Kit Carbonio Superflow size S3 (on the bottom).

Photo of a Selle Italia SLK Gel Flow bike saddle with Vanox rails shown on a scale showing a weight of 233 grams.

Photo of a Selle Italia SLR Boost Kit Carbonio Superflow bike saddle in size S3 shown on a scale showing a weight of 125 grams.


Old La Honda Is On Zwift This Month

There’s a hill that I loved to ride up when I lived in California. The road is Old La Honda in the Santa Cruz mountains west of Palo Alto. Here’s the Strava segment. It’s 2.98mi with 1,255ft elevation gain at an average grade of 8.0%. My PR is 23:22, in 2015. I’d guess that I’ve ridden it maybe 40 times. Maybe more!

It’s picturesque: A quiet road that winds through redwoods. There’s a good picture in this article.

Anyway, Zwift has a thing where they simulate real climbs from around the world. Old La Honda is this month’s climb of the month. There’s no actual video footage or even rendered scenery, which is unfortunate because that’s the best part. It looks like this:

Screenshot of the Zwift Mac application, showing rendered graphics of a cyclist riding on a road with power, heart rate, and time stats at the top. The road is solid orange and there is no scenery.

But if you’re away from the Bay Area and you miss the climb, it might be worth checking it out. You’ll just have to imagine in your mind the towering trees, the switchbacks, the hill sometimes dropping off on one side, the old water towers, occasional deer, and Upenuf Rd.

My Zwift Old La Honda time is 27:19. It’s better than I expected because I’m definitely in worse biking shape than 8 years ago. It makes me wonder if the trainer is a little softer than reality.


I Bought a Bike Trainer

For the past 20 years I’ve biked or run about once a week, usually on the weekend. But recently it’s been harder to make time for it. I think my weekends are busier than they used to be.

So I bought a smart bike trainer at the end of September so I can more easily exercise at night or on rainy days if needed. Here’s my setup:

Photo of my bike mounted on the bike trainer in my garage. There's a black mat underneath, shelves on the wall with a fan on one of the shelves, and two 5 gallon Lowe's buckets stacked with a laptop on top.

Yes, it’s very garage-y. The thing that makes it “smart” is that you can connect to it with your phone, laptop, or bike computer and control the resistance. Either with Bluetooth or ANT+, which is a wireless standard for connecting fitness equipment.

So far I’ve been using it with Zwift, which is “a massively multiplayer online cycling training program.” It shows rendered roads and scenery and simulates riding, with flats and hills. Here’s a screenshot I took earlier tonight:

Screenshot of the Zwift Mac application, showing rendered graphics of a cyclist riding on a road with power, heart rate, and time stats at the top.

The trainer is great. The feel isn’t exactly the same as real riding but it’s fine. I still love being outdoors and this isn’t a substitute, but I do feel like I’m getting a workout and I think I’ll be able to make myself keep using it.

Zwift is fine. So far I mostly don’t care about the multiplayer/social aspect. Rendered video is fine but I might actually prefer real video of pretty roads. There are a few other apps that I’ll probably try at some point (ROUVY, FulGaz, Kinomap, and BKOOL all look interesting), but I’m not in a hurry because my trainer came with a 1 year Zwift subscription.

Details About The Trainer

The specific trainer I got is the Zwift Hub. It’s a repackaged JetBlack VOLT V2. It looks like they don’t sell the “Zwift Hub” anymore—only the slightly different “Zwift Hub One.” The difference is that the Zwift Hub One has a single cog with some sort of virtual shifting thing, whereas the older Zwift Hub uses a full cassette and you shift with your bike’s normal shift levers. It looks like the Wahoo KICKR CORE has come down in price to $599.99 (which is what I paid for the Zwift Hub) and also includes a cassette and 1 year Zwift subscription, so that’s equivalent.


Phone Mount vs Dedicated Bike Computer for Road Biking

I don’t know why they’re called computers. “Speedometer” seems like a more useful descriptor. Anyway. From 2003 to 2012 I used a super basic bike computer with just speed, distance, and maybe cadence. Very much like this one:

A photo of an old Sigma Sport bike computer attached to bike handlebars showing speed and trip time.

From 2012 to 2023 I used a Garmin Edge 800. It’s way fancier: Gives turn by turn navigation, records the GPS track, has a barometric altimeter, supports wireless speed and cadence sensors (my Trek has a sensor embedded in the left chainstay), etc. And it’s been great.

Photo of a Garmin Edge 800 bike computer attached to my bike handlebars.

Unfortunately the touchscreen no longer detects touches in the bottom left corner. This is a problem because that’s where the “back” button is. So I can’t exit menus. I can still record rides but looking at the map and using turn by turn navigation are basically no-go’s.

I’ve done a few longer rides without mapping and it’s inconvenient. It means I have to stop and take out my phone to figure out where to go. And even then I still miss turns sometimes. It was time to upgrade.

What Did I Try?

Phones are pretty amazing these days and phone mounts are way cheaper than GPS bike computers ($20 instead of >$100) so I gave it a shot. I got this one. For this style mount it’s pretty much as good as it could be. Easy to attach. Sturdy. Holds the phone securely.

Photo of the phone mount attached to my bike handlebars.

But after one ride I don’t like it and I’m planning on switching back to a dedicated computer. There are a few reasons. Roughly listed from minor to important:

  • To use my speed/cadence sensor I would need to buy a newer version of the DuoTrap chainstay-mounted sensor that supports Bluetooth (mine is older and only broadcasts using ANT+). I worry that the newer DuoTrap sensor might have worse battery life since it broadcasts using both protocols.
  • The phone mount is large and I think it looks goofy.
  • My phone’s touchscreen goes right up to the edge of the face. I think the mount was interfering with the touchscreen a little. But it was hard to tell. It did make it hard to tap near the corners, though.
  • I had to change my display timeout from 2 minutes to 30 minutes to keep the display on. That’s not secure. Maybe I can find a good app that keeps the screen on while in use, which would make this moot.
  • It shakes more than the Garmin, which makes it harder to read quickly.
  • Many of my touches were misinterpreted as drags. In part due to the shakiness of the mount but mostly due to general road bumpiness. I think phone touchscreen sensitivity isn’t really calibrated for this.
  • I couldn’t find an app I liked. The Garmin displays only the things I care about in a super minimal layout. It’s great. I tried the Strava app but I think it doesn’t do turn by turn navigation. And it doesn’t show speed and distance when the map is showing (my old Edge 800 didn’t either, but I think some newer GPS bike computers can). Maybe there is an app that does a good job with this? I didn’t immediately find one after a little searching and I don’t feel like spending a ton of time installing different apps to find a good one.

What Will I Do Now?

I haven’t bought it yet but the Garmin Edge 1040 is my current top pick. The Hammerhead Karoo 2 is also an option but the fact that it runs Android is actually kind of a turn off for me. It seems like too big of an OS. And I think the battery life is worse, which doesn’t matter now but could matter 10 years from now when the battery has degraded.

I don’t ride much in the winter and it looks like Garmin introduces a new version roughly every two years, which might be next spring. So I’ll wait until spring and buy an Edge 1040 or possibly a newer model then.


Mastodon and ActivityPub

I’ve started using Mastodon. It’s an open alternative to Twitter. “Open” meaning there’s no central controlling entity. Tech-savvy people can run their own server and servers can talk to each other (referred to as federation). If you’re interested in joining you can find a server to join, sign up, and follow me: @mark@kingant.net.

Why???

Part of me has always disliked posting on Facebook and Twitter. They’re walled gardens. I’d rather self host. I like keeping a record of my posts. Twitter allows downloading an archive of your posts and Facebook might, too, but I’d rather control the system of record myself. Also I don’t love being used by a company so they can make money from ads. Wise man once say, “when you’re not paying for the product you are the product.”

And then Elon Musk being a jerk [1] drove me to make a change.

The open/federated nature is appealing to me. It’s similar to how email works. And XMPP.

ActivityPub

ActivityPub is the important thing. There are other ActivityPub services that aren’t Mastodon. For example Pixelfed (an Instagram clone) and PeerTube (which is tailored for video sharing). It’s possible for them to be written such that they can interact with each other. For example on Mastodon I can follow a Pixelfed user. It’s great.

And actually I think it’s weird that these services are treated as separate things. Mastodon is just an ActivityPub service that looks like Twitter and Pixelfed is just an ActivityPub service that looks like Instagram. I’m not even sure the functionality is very different. It makes me wonder if they could be written as different UIs atop a common server. But whatever, I’m not super familiar with it. Maybe there are good reasons for them to be separate.

Running My Own Server

And because I’m a huge nerd I’m running my own server. I did it partially as a learning exercise. But also now my ID can be my email address (“@mark@kingant.net“) instead of something like “@markdoliner@someone-elses-server.example.com”

It took some work. The developers haven’t prioritized ease of installation (which is perfectly reasonable). It runs as three processes: the web/API service, a streaming API service, and a background job queue worker. It also requires PostgreSQL and Redis. As mentioned previously, I use Salt to manage my cloud servers. Initially I tried running everything directly on an EC2 instance, but Mastodon is written in Ruby and installing and managing Ruby and building Mastodon via Salt rules is a lot of work.

There are official Docker images so I looked into that instead. The docker-compose.yml file was immensely helpful. At first I tried running containers in Amazon Elastic Container Service (ECS). But configuring it is way more elaborate than you would imagine. And I’m not sure but it might have required paying for separate EC2 instances per container, though maybe it’s possible to avoid that if you go a step further and use Amazon Elastic Kubernetes Service (EKS). But of course that’s even more effort to configure.

What I did instead is run the containers myself on a single EC2 instance. Four containers, specifically: web/API, streaming, background job queue worker, and Redis. I’m running PostgreSQL directly on the host because it’s super easy. Obviously I could have used RDS for PostgreSQL and that’s certainly a nice managed option, but it’s also more expensive. I wanted to run Redis on the host but it’s hard to configure it to allow access from the Mastodon containers but disallow access from outside networks. Though even with PostgreSQL I ended up configuring it to accept connections from the world (disallowed via AWS security group of course, and PostgreSQL is configured such that users can only authenticate from localhost or the Docker network). So this feels like a decent compromise. The maintenance overhead is manageable and it’s fairly cheap. I’m paying around $3 to $4 per month ($83 up front for a t4g.micro 3 year reservation plus $1 to $2 per month for EBS storage).
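
For reference, the host PostgreSQL side of that compromise boils down to two settings. Here’s a sketch written as Salt states; the PostgreSQL version in the paths, the Docker subnet, and the database/user name are assumptions, so check your own install and docker network inspect for the real values.

# Listen on all interfaces (outside access is still blocked by the AWS
# security group). The version in the path (14) is an assumption.
postgresql-listen-everywhere:
  file.managed:
    - name: /etc/postgresql/14/main/conf.d/listen.conf
    - contents: "listen_addresses = '*'"

# Allow the mastodon DB user to authenticate from the Docker network only.
# The subnet is an assumption; find the real one with
# "docker network inspect mastodon_internal".
postgresql-allow-mastodon-from-docker:
  file.append:
    - name: /etc/postgresql/14/main/pg_hba.conf
    - text: host mastodon mastodon 172.18.0.0/16 scram-sha-256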

I don’t want to make my Salt config repo public but here are the states for the Docker containers in case it helps anyone:

include:
  - mastodon/docker
  - mastodon/user

# A Docker network for containers to access the world.
mastodon_external:
  docker_network.present:
    - require:
      - sls: mastodon/docker

# A Docker network for intra-container communication.
mastodon_internal:
  docker_network.present:
    - internal: True
    - require:
      - sls: mastodon/docker

# Create directory for Mastodon. We leave ownership as the default
# (root) because the mastodon user shouldn't need to create files here
# and using root is a little more restrictive. It prevents the Mastodon
# processes (running in Docker as the mastodon user) from causing
# mischief (modifying the static web files, for example).
/srv/mastodon.kingant.net:
  file.directory: []

# Create private directory for Mastodon Redis data.
/srv/mastodon.kingant.net/redis:
  file.directory:
    - user: mastodon
    - group: mastodon
    - mode: 700
    - require:
      - sls: mastodon/user

# Redis Docker container.
# We're running it as a Docker container rather than on the host mostly
# because we would need to change the host Redis's config to bind to the
# Docker network IP (by default it only binds to localhost) and that
# requires modifying the config file from the package (there is no
# conf.d directory). That means we would stop getting automatic config
# file updates on new package versions, which is unfortunate.
# Of course if we ever want to change any other config setting then
# we'll have the same problem with the Docker container. Though it's maybe
# still slightly cleaner having Redis accessible only over the
# mastodon_internal network, because Redis isn't big on data isolation.
# The options are:
# 1. Use "namespaces." Meaning "prefix your keys with a namespace string
#    of your choice, maybe with a trailing  colon."
# 2. Use a single server but have different apps use different DBs
#    within Redis. This is a thing... but seems problematic because
#    the DBs are numbered sequentially so what happens if you remove one
#    in the middle?
# 3. Use separate servers. This probably makes the most sense (and
#    provides the strongest isolation).
redis:
  docker_container.running:
    - image: redis:7-alpine
    - binds:
      - /srv/mastodon.kingant.net/redis:/data:rw
    # Having a healthcheck isn't important for Redis but it existed in
    # Mastodon's example docker-compose file so I included it here. The
    # times are in nanoseconds (currently 5 seconds).
    - healthcheck: {test: ["CMD", "redis-cli", "ping"], retries: 10}
    - networks:
      - mastodon_internal
    - restart_policy: always
    # The UID and GID are hardcoded here and in user.present in
    # mastodon/user.sls because it's really hard to look them up. See
    # https://github.com/saltstack/salt/issues/63287#issuecomment-1377500491
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/redis

# Create directory for Mastodon config file.
/srv/mastodon.kingant.net/config:
  file.directory: []

# Install the Mastodon config file. It's just a bunch of environment
# variables that get mounted in Docker containers and then sourced.
# Note: Secrets must be added to this file by hand. See the "solution
# for storing secrets" comment in TODO.md.
/srv/mastodon.kingant.net/config/env_vars:
  file.managed:
    - source: salt://mastodon/files/srv/mastodon.kingant.net/config/env_vars
    - user: mastodon
    - group: mastodon
    - mode: 600
    # Do not modify the file if it already exists. This allows Salt to
    # create the initial version of the file while being careful not to
    # overwrite it once the secrets have been added.
    - replace: False
    - require:
      - file: /srv/mastodon.kingant.net/config

# Docker container for Mastodon web.
mastodon-web:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && rm -f /mastodon/tmp/pids/server.pid && bundle exec rails s -p 3000"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
      - /srv/mastodon.kingant.net/www/system:/opt/mastodon/public/system:rw
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "wget -q --spider --proxy=off http://localhost:3000/health || exit 1"], retries: 10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - port_bindings:
      - 3000:3000
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars

# Docker container for Mastodon streaming.
mastodon-streaming:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && node ./streaming"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "wget -q --spider --proxy=off http://localhost:4000/api/v1/streaming/health || exit 1"], retries: 10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - port_bindings:
      - 4000:4000
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars

# Docker container for Mastodon sidekiq.
mastodon-sidekiq:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && bundle exec sidekiq"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
      - /srv/mastodon.kingant.net/www/system:/opt/mastodon/public/system:rw
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "ps aux | grep '[s]idekiq\ 6' || false"], retries: 10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars

Future Thoughts

I’d love to see more providers offering ActivityPub-as-a-service. There are some already, but they’re a bit pricey. And it feels like an immature product space. I could see companies wanting to use their own domain for their support and marketing accounts. e.g. “@support@delta.com” instead of something like “@deltasupport@mastodon.social”

I’d love to see a lighter-weight alternative to Mastodon that is targeted toward small installations like mine. Like removing Redis as a requirement. Maybe getting rid of on-disk storage of files and just using the DB. Doing a better job of automatically cleaning up old, cached media. Maybe this is Pleroma. Or maybe one of the other options.

Footnotes

  1. Ohhhh I think I don’t want to try to gather references. The Twitter Under Elon Musk Wikipedia page lists some things. This possibly-paywalled Vanity Fair article covers a few pre-Twitter things. There’s this exchange on Twitter where a guy was trying to figure out if he had been fired, and he sounds like a pretty good guy! And Elon was super rude: Thread one, thread two, and thread three. Labeling NPR as “state-affiliated media,” the same term used for propaganda accounts in Russia and China.

Let’s Encrypt is Fantastic

Apparently I haven’t posted this anywhere so I just want to go on record: Let’s Encrypt is amazing and fantastic. In five years the percentage of web pages loaded by Firefox using HTTPS went from 30% to 80%. That’s huge! A really impressive accomplishment [1]. And ACME, the protocol for validating domain ownership, is also great. A massive improvement over the old process of getting certificates. And all those certificates are only valid for 90 days, instead of the old standard of a year or more! So good.

A Quibble

I don’t like how Certbot is configured. To keep things in perspective let me say that this is just a minor usability complaint. It doesn’t stop Let’s Encrypt from being fantastic.

Certbot is a tool for obtaining certificates from Let’s Encrypt for HTTPS and other services. The configuration file is written automatically when the certbot command is run. Users are discouraged [2] from modifying the configuration file directly. I can’t recall any other tool with this behavior. I don’t get it. Like, why? Sysadmins are used to writing config files; why is it important for this to be different? Maybe it’s supposed to be easier for users? But is it?

I guess usually this behavior is fine and not a problem. I find it inconvenient because I manage my two cloud servers using Salt (a configuration-as-code tool). It’s easy to place a config file on the server and apply changes to it. But since that’s discouraged I settled on running the certbot command but only if the config file does not exist. This is what I did, in case it helps anyone:

{% macro configure_certbot(primary_domain, additional_domains = none, require_file = none) %}
{#
  @param require_file We use Certbot's "webroot" authentication plugin.
         Certbot puts a file in our web server's root directory. This
         means we need a working web server. This parameter should be a
         Salt state that ensures Nginx is up and running for this
         domain.
#}
{# Don't attempt this in VirtualBox because it will always fail. #}
{% if grains['virtual'] != 'VirtualBox' %}
run-certbot-{{ primary_domain }}:
  cmd.run:
    - name: "certbot certonly
        --non-interactive
        --quiet
        --agree-tos
        --email mark@kingant.net
        --no-eff-email
        --webroot
        --webroot-path /srv/{{ primary_domain }}/www
        --domains {{ primary_domain }}{{ ',' if additional_domains else '' }}{{ additional_domains|join(',') if additional_domains else '' }}
        --deploy-hook 'install --owner=root     --group=root     --mode=444 --preserve-timestamps ${RENEWED_LINEAGE}/fullchain.pem /etc/nginx/{{ primary_domain }}_cert.pem &&
                       install --owner=www-data --group=www-data --mode=400 --preserve-timestamps ${RENEWED_LINEAGE}/privkey.pem   /etc/nginx/{{ primary_domain }}_cert.key &&
                       service nginx reload'"
    # Certbot saves the above arguments into a conf file and
    # automatically renews the certificate as needed so we only need to
    # run this command once. This would need to be done differently if
    # we ever want to change the renewal config via Salt. The Certbot
    # docs[1] describe using --force-renewal to force the renewal conf
    # file to be updated. So we could figure out a way to do that once
    # via Salt. Or we could manage the renewal conf file directly via
    # Salt.
    #
    # [1] https://eff-certbot.readthedocs.io/en/stable/using.html#modifying-the-renewal-configuration-of-existing-certificates
    - unless:
      - fun: file.file_exists
        path: /etc/letsencrypt/renewal/{{ primary_domain }}.conf
    - require:
      - pkg: certbot
      - file: {{ require_file }}
{% endif %}
{% endmacro %}

{{ configure_certbot('kingant.net', additional_domains=['www.kingant.net'], require_file='/etc/nginx/sites-enabled/kingant.net') }}

It’s a bit messy and I don’t know what I’ll do when I need to change the config. If you want certbot to update the config file you have to use --force-renewal, but I certainly wouldn’t want to do that every time I apply my configuration state to my servers. I think I’ll have to either run certbot --force-renewal by hand (fine, but loses the benefits of configuration-as-code), or have Salt manage the config file (discouraged by the official docs). Either option is fine, it just feels like a dumb problem to have.

I’m not the only one who has been inconvenienced by this. A quick search turned up this question thread and this feature request ticket.

Anyway, Let’s Encrypt really is fantastic! This one usability complaint for my atypical usage pattern is super minor.

Footnotes

  1. Yeah sure, Let’s Encrypt isn’t solely responsible. There had been a push to encrypt more sites post-Snowden anyway (e.g. Cloudflare in 2014). And there’s no way to know how big of an impact Let’s Encrypt had. Buuuuut, I bet it was huge. And yeah, it wasn’t Let’s Encrypt by themselves, hosts like Squarespace, DigitalOcean, and WP Engine have also done their part.
  2. “it is also possible to manually modify the certificate’s renewal configuration file, but this is discouraged since it can easily break Certbot’s ability to renew your certificates.”

Using Salt to Manage Server Configuration

Background

I’ve been using Salt (for clarity and searchability it’s also sometimes referred to as Salt Project or Salt Stack) to manage the configuration of my web server since 2014. It’s similar to Puppet, Chef (I guess they call it “Progress Chef” now), and Ansible.

At Meebo we used Puppet to manage our server config. This was like maybe 2008 through 2012. It was ok. I don’t remember the specifics but I felt like it could have been better. I don’t remember if there were fundamental problems or if I just felt that it was hard to use.

Anyway, when we chose the configuration management tech at Honor in 2014 we looked for a better option. We made a list of the leading contenders. It was a toss up between Salt and Ansible. They both seemed great. I don’t remember why we chose Salt. Maybe it seemed a little easier to use?

I Like It

I started using it for my personal web server around the same time. I’ve been happy with it. The main benefit is that it’s easier to update to a newer version of Ubuntu LTS every 2 or 4 years. My process is basically:

  1. Use Vagrant to run the new Ubuntu version locally. Tweak the config as needed to get things working (update package names, change service files from SysV init to systemd, etc.), and test.
  2. Provision a new EC2 instance with the new Ubuntu version. Apply the Salt states to the new server.
  3. Migrate data from the old server to the new server and update the Elastic IP to point to the new server.
  4. Verify that everything is good then terminate the old server.

It’s an upfront time investment to write the Salt rules, but it makes the above migration fairly easy. I run it in a “masterless” configuration, so I’m not actually running the Salt server. Rather, I have my server-config repo checked out on my EC2 instance.
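
In case it’s useful, masterless mode is mostly just a couple of settings in the minion config plus running salt-call locally. A sketch (the checkout paths are whatever directory your config repo lives in; the ones here are made up):

# /etc/salt/minion.d/masterless.conf: run without a Salt master.
file_client: local
file_roots:
  base:
    - /srv/server-config/salt      # checked-out Salt states (illustrative path)
pillar_roots:
  base:
    - /srv/server-config/pillar    # pillar data (illustrative path)

# Then apply the full configuration with:
#   sudo salt-call --local state.apply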

Weaknesses

Salt does have weaknesses. Since the beginning I’ve felt that their documentation could be more clear. It’s hard for me to be objective since I’ve been using it for so long, but here are a few minor gripes:

  • Some of the terminology is unclear. For example, I think the things in this list are typically referred to as “states,” but the top of that page calls them “state modules” even though there is a different set of things that are called modules. Additionally the rules that I write to dictate how my server should be configured are also referred to as states, I think? And it’s not clear what modules are or when or how you would use them. There are often modules with the same name as a state but with different functions.
  • This page about Salt Formulas has a ton of advice about good conventions for writing Salt states. That’s great, but why is it on this page? Shouldn’t it be in the “Using Salt” section of the documentation instead of here, in the documentation intended for people developing Salt itself?
  • Navigating the Salt documentation is hard. Additionally there’s at least one item that’s cut off at the bottom of the right hand nav menu on this page. The last item I see is “Windows” but I know there is at least a “Developing Salt” section. Possibly also Release Notes, Venafi Tools for Salt, and Glossary.
  • The term “high state” feels unnatural to me. I think it has some meaning and if you understand how Salt works then maybe there’s a moment of clarity where the pieces fit together. But mostly it feels jargony.
  • It’s hard to have a state key off the result of an arbitrary command that runs mid-way through applying your configuration. There’s a thing called “Slots” that tries to solve this problem but it’s hard to use (there’s a small example of the syntax after this list).
  • Also, speaking of Slots, why is it called “Slots”? And why is the syntax so awkward? Also I found the documentation hard to read. Partially because it feels jargony. Also there’s some awkward repetition (“Slots extend the state syntax and allows you to do things right before the state function is executed. So you can make a decision in the last moment right before a state is executed.”) and clunky grammar (“There are some specifics in the syntax coming from the execution functions nature and a desire to simplify the user experience. First one is that you don’t need to quote the strings passed to the slots functions. The second one is that all arguments handled as strings.”).
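
For the curious, here’s roughly what the Slots syntax looks like (the state and file name here are made up):

# The __slot__ value is resolved by calling the named execution module
# function right before the state runs, so the state can use the result of
# a command executed mid-way through applying the configuration.
write-build-id:
  file.managed:
    - name: /tmp/build-id
    - contents: __slot__:salt:cmd.run(date +%s)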

Summary

So I’m happy with it and intend to keep using it. I suspect other options have their own flaws. I have a vague impression that maybe Ansible is a little more widely-used, which is useful to consider.

Also the modern approach to running your own applications in the cloud is:

  • Build Docker containers for your applications.
  • Either deploy your containers directly to a cloud provider like Amazon Elastic Container Service, or deploy them using Kubernetes.

So there’s less of a need to do configuration management like this on hosts. But it’s probably still valuable for anything you don’t want to run in a container (mail server? DB server? VPN server?).
