Old La Honda Is On Zwift This Month

There’s a hill that I loved to ride up when I lived in California. The road is Old La Honda, in the Santa Cruz Mountains west of Palo Alto. Here’s the Strava segment. It’s 2.98 miles with 1,255 feet of elevation gain at an average grade of 8.0%. My PR is 23:22, set in 2015. I’d guess that I’ve ridden it maybe 40 times. Maybe more!

It’s picturesque: A quiet road that winds through redwoods. There’s a good picture in this article.

Anyway, Zwift has a thing where they simulate real climbs from around the world. Old La Honda is this month’s climb of the month. There’s no actual video footage or even rendered scenery, which is unfortunate because that’s the best part. It looks like this:

Screenshot of the Zwift Mac application, showing rendered graphics of a cyclist riding on a road with power, heart rate, and time stats at the top. The road is solid orange and there is no scenery.

But if you’re away from the Bay Area and you miss the climb, it might be worth checking out. You’ll just have to imagine the towering trees, the switchbacks, the hill sometimes dropping off to one side, the old water towers, the occasional deer, and Upenuf Rd.

My Zwift Old La Honda time is 27:19. That’s better than I expected, because I’m definitely in worse biking shape than I was 8 years ago. It makes me wonder if the trainer is a little softer than reality.


I Bought a Bike Trainer

For the past 20 years I’ve biked or run about once a week, usually on the weekend. But recently it’s been harder to make time for it. I think my weekends are busier than they used to be.

So at the end of September I bought a smart bike trainer, which lets me exercise at night or on rainy days if needed. Here’s my setup:

Photo of my bike mounted on the bike trainer in my garage. There's a black mat underneath, shelves on the wall with a fan on one of the shelves, and two 5 gallon Lowe's buckets stacked with a laptop on top.

Yes, it’s very garage-y. The thing that makes it “smart” is that you can connect to it with your phone, laptop, or bike computer and control the resistance, either over Bluetooth or ANT+ (a wireless standard for connecting fitness equipment).
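If you’re curious what “controlling the resistance” looks like in practice: most smart trainers (including, I believe, this one) implement the standard Bluetooth Fitness Machine Service (FTMS). Here’s a rough, untested sketch using the Python bleak library. The device address is a placeholder, and the op codes are from my reading of the FTMS spec:

import asyncio
from bleak import BleakClient

# FTMS Control Point characteristic (16-bit UUID 0x2AD9).
FTMS_CONTROL_POINT = "00002ad9-0000-1000-8000-00805f9b34fb"

async def set_resistance(address: str, level: int) -> None:
    async with BleakClient(address) as client:
        # Op code 0x00: Request Control.
        await client.write_gatt_char(FTMS_CONTROL_POINT, bytes([0x00]), response=True)
        # Op code 0x04: Set Target Resistance Level.
        await client.write_gatt_char(FTMS_CONTROL_POINT, bytes([0x04, level]), response=True)

# Placeholder address; find yours with a BLE scanner.
asyncio.run(set_resistance("AA:BB:CC:DD:EE:FF", 20))

Apps like Zwift do this kind of thing continuously, adjusting the target as the simulated grade changes.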

So far I’ve been using it with Zwift, which is “a massively multiplayer online cycling training program.” It shows rendered roads and scenery and simulates riding, with flats and hills. Here’s a screenshot I took earlier tonight:

Screenshot of the Zwift Mac application, showing rendered graphics of a cyclist riding on a road with power, heart rate, and time stats at the top.

The trainer is great. The feel isn’t exactly the same as real riding but it’s fine. I still love being outdoors and this isn’t a substitute, but I do feel like I’m getting a workout and I think I’ll be able to make myself keep using it.

Zwift is fine. So far I mostly don’t care about the multiplayer/social aspect. Rendered video is fine but I might actually prefer real video of pretty roads. There are a few other apps that I’ll probably try at some point (ROUVY, FulGaz, Kinomap, and BKOOL all look interesting), but I’m not in a hurry because my trainer came with a 1 year Zwift subscription.

Details About The Trainer

The specific trainer I got is the Zwift Hub, which is a repackaged JetBlack VOLT V2. It looks like they don’t sell the “Zwift Hub” anymore, only the slightly different “Zwift Hub One.” The difference is that the Zwift Hub One has a single cog with some sort of virtual shifting, whereas the older Zwift Hub uses a full cassette and you shift with your bike’s normal shift levers. It looks like the Wahoo KICKR CORE has come down in price to $599.99 (which is what I paid for the Zwift Hub) and also includes a cassette and a 1-year Zwift subscription, so it’s an equivalent deal.


Phone Mount vs Dedicated Bike Computer for Road Biking

I don’t know why they’re called computers. “Speedometer” seems like a more useful descriptor. Anyway. From 2003 to 2012 I used a super basic bike computer with just speed, distance, and maybe cadence. Very much like this one:

A photo of an old Sigma Sport bike computer attached to bike handlebars showing speed and trip time.

From 2012 to 2023 I used a Garmin Edge 800. It’s way fancier: Gives turn-by-turn navigation, records the GPS track, has a barometric altimeter, supports wireless speed and cadence sensors (my Trek has a sensor embedded in the left chainstay), etc. And it’s been great.

Photo of a Garmin Edge 800 bike computer attached to my bike handlebars.

Unfortunately the touchscreen no longer detects touches in the bottom left corner. This is a problem because that’s where the “back” button is. So I can’t exit menus. I can still record rides, but looking at the map and using turn-by-turn navigation are basically no-gos.

I’ve done a few longer rides without mapping and it’s inconvenient. It means I have to stop and take out my phone to figure out where to go. And even then I still miss turns sometimes. It was time to upgrade.

What Did I Try?

Phones are pretty amazing these days, and phone mounts are way cheaper than GPS bike computers ($20 instead of >$100), so I gave it a shot. I got this one. For this style of mount it’s pretty much as good as it could be. Easy to attach. Sturdy. Holds the phone securely.

Photo of the phone mount attached to my bike handlebars.

But after one ride I don’t like it and I’m planning on switching back to a dedicated computer. There are a few reasons. Roughly listed from minor to important:

– To use my speed/cadence sensor I would need to buy a newer version of the DuoTrap chainstay-mounted sensor that supports Bluetooth (mine is older and only broadcasts using ANT+). I worry that the newer DuoTrap sensor might have worse battery life since it broadcasts using both protocols.
– The phone mount is large and I think it looks goofy.
– My phone’s touchscreen goes right up to the edge of the face. I think the mount was interfering with the touchscreen a little. But it was hard to tell. It did make it hard to tap near the corners, though.
– I had to change my display timeout from 2 minutes to 30 minutes to keep the display on. That’s not secure. Maybe I can find a good app that disables the screen lock while the app is in use, which would make this moot.
– It shakes more than the Garmin, which makes it harder to read quickly.
– Many of my touches were misinterpreted as drags. In part due to the shakiness of the mount but mostly due to general road bumpiness. I think phone touchscreen sensitivity isn’t really calibrated for this.
– I couldn’t find an app I liked. The Garmin displays only the things I care about in a super minimal layout. It’s great. I tried the Strava app but I think it doesn’t do turn-by-turn navigation. And it doesn’t show speed and distance while the map is showing (my old Edge 800 didn’t either, but I think some newer GPS bike computers can). Maybe there’s an app that does a good job with this? I didn’t immediately find one after a little searching, and I don’t feel like spending a ton of time installing different apps to find a good one.

What Will I Do Now?

I haven’t bought it yet but the Garmin Edge 1040 is my current top pick. The Hammerhead Karoo 2 is also an option, but the fact that it runs Android is actually kind of a turn-off for me. It seems like too big of an OS. And I think the battery life is worse, which doesn’t matter now but could matter 10 years from now when the battery has degraded.

I don’t ride much in the winter and it looks like Garmin introduces a new version roughly every two years, which might be next spring. So I’ll wait until spring and buy an Edge 1040 or possibly a newer model then.


Mastodon and ActivityPub

I’ve started using Mastodon. It’s an open alternative to Twitter, “open” meaning there’s no central controlling entity. Tech-savvy people can run their own server, and servers can talk to each other (referred to as federation). If you’re interested, you can find a server to join, sign up, and follow me: @mark@kingant.net.

Why???

Part of me has always disliked posting on Facebook and Twitter. They’re walled gardens. I’d rather self-host. I like keeping a record of my posts. Twitter allows downloading an archive of your posts, and Facebook might too, but I’d rather control the system of record myself. Also I don’t love being used by a company so they can make money from ads. Wise man once say, “when you’re not paying for the product you are the product.”

And then Elon Musk being a jerk1 drove me to make a change.

The open/federated nature is appealing to me. It’s similar to how email works. And XMPP.

ActivityPub

ActivityPub is the important thing. There are other ActivityPub services besides Mastodon. For example Pixelfed (an Instagram clone) and PeerTube (which is tailored for video sharing). Because they all speak the same protocol, they can interact with each other. For example, on Mastodon I can follow a Pixelfed user. It’s great.
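Under the hood they’re all exchanging the same kinds of Activity Streams objects. A hand-written sketch (not copied from a real server) of roughly what a post looks like when it’s delivered to followers:

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://kingant.net/users/mark",
  "to": ["https://www.w3.org/ns/activitystreams#Public"],
  "object": {
    "type": "Note",
    "content": "<p>Hello from my own server!</p>",
    "attachment": [
      {
        "type": "Document",
        "mediaType": "image/jpeg",
        "url": "https://kingant.net/media/example.jpg"
      }
    ]
  }
}

As far as I can tell, a Mastodon toot and a Pixelfed photo post are both a Create activity wrapping a Note; Pixelfed’s just always has attachments.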

And actually I think it’s weird that these services are treated as separate things. Mastodon is just an ActivityPub service that looks like Twitter and Pixelfed is just an ActivityPub service that looks like Instagram. I’m not even sure the functionality is very different. It makes me wonder if they could be written as different UIs atop a common server. But whatever, I’m not super familiar with it. Maybe there are good reasons for them to be separate.

Running My Own Server

And because I’m a huge nerd I’m running my own server. I did it partially as a learning exercise. But also now my ID can be my email address (“@mark@kingant.net”) instead of something like “@markdoliner@someone-elses-server.example.com”.

It took some work. The developers haven’t prioritized ease of installation (which is perfectly reasonable). It runs as three processes: The web/API service, a streaming service, and a background job queue worker. It also requires PostgreSQL and Redis. As mentioned previously, I use Salt to manage my cloud servers. Initially I tried running everything directly on an EC2 instance, but Mastodon is written in Ruby, and installing and managing Ruby and building Mastodon via Salt rules is a lot of work.

There are official Docker images so I looked into that instead. The docker-compose.yml file was immensely helpful. At first I tried running containers in Amazon Elastic Container Service (ECS). But configuring it is way more elaborate than you would imagine. And I’m not sure but it might have required paying for separate EC2 instances per container, though maybe it’s possible to avoid that if you go a step further and use Amazon Elastic Kubernetes Service (EKS). But of course that’s even more effort to configure.

What I did instead is run the containers myself on a single EC2 instance. Four containers, specifically: web/API, streaming, background job queue worker, and Redis. I’m running PostgreSQL directly on the host because it’s super easy. Obviously I could have used RDS for PostgreSQL, and that’s certainly a nice managed option, but it’s also more expensive. I wanted to run Redis on the host too, but it’s hard to configure it to allow access from the Mastodon containers while disallowing access from outside networks. (Even with PostgreSQL I ended up configuring it to accept connections from the world; that’s blocked by an AWS security group, of course, and PostgreSQL is configured such that users can only authenticate from localhost or the Docker network.) So this feels like a decent compromise. The maintenance overhead is manageable and it’s fairly cheap: I’m paying around $3 to $4 per month ($83 up front for a 3-year t4g.micro reservation plus $1 to $2 per month for EBS storage).
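For the curious, the PostgreSQL side of that compromise boils down to a couple of pg_hba.conf rules, roughly like these (the subnet is illustrative; use whatever your Docker network actually is):

# TYPE  DATABASE  USER      ADDRESS         METHOD
host    mastodon  mastodon  172.18.0.0/16   scram-sha-256
host    all       all       0.0.0.0/0       reject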

I don’t want to make my Salt config repo public but here are the states for the Docker containers in case it helps anyone:

include:
  - mastodon/docker
  - mastodon/user

# A Docker network for containers to access the world.
mastodon_external:
  docker_network.present:
    - require:
      - sls: mastodon/docker

# A Docker network for intra-container communication.
mastodon_internal:
  docker_network.present:
    - internal: True
    - require:
      - sls: mastodon/docker

# Create directory for Mastodon. We leave ownership as the default
# (root) because the mastodon user shouldn't need to create files here
# and using root is a little more restrictive. It prevents the Mastodon
# processes (running in Docker as the mastodon user) from causing
# mischief (e.g. modifying the static web files).
/srv/mastodon.kingant.net:
  file.directory: []

# Create private directory for Mastodon Redis data.
/srv/mastodon.kingant.net/redis:
  file.directory:
    - user: mastodon
    - group: mastodon
    - mode: 700
    - require:
      - sls: mastodon/user

# Redis Docker container.
# We're running it as a Docker container rather than on the host mostly
# because we would need to change the host Redis's config to bind to the
# Docker network IP (by default it only binds to localhost) and that
# requires modifying the config file from the package (there is no
# conf.d directory). That means we would stop getting automatic config
# file updates on new package versions, which is unfortunate.
# Of course if we ever want to change any other config setting then
# we'll have the same problem with the Docker container. Though it's maybe
# still slightly cleaner having Redis accessible only over the
# mastodon_internal network, because Redis isn't big on data isolation.
# The options are:
# 1. Use "namespaces." Meaning "prefix your keys with a namespace string
#    of your choice, maybe with a trailing colon."
# 2. Use a single server but have different apps use different DBs
#    within Redis. This is a thing... but seems problematic because
#    the DBs are numbered sequentially so what happens if you remove one
#    in the middle?
# 3. Use separate servers. This probably makes the most sense (and
#    provides the strongest isolation).
redis:
  docker_container.running:
    - image: redis:7-alpine
    - binds:
      - /srv/mastodon.kingant.net/redis:/data:rw
    # Having a healthcheck isn't important for Redis but it existed in
    # Mastodon's example docker-compose file so I included it here. The
    # times are in nanoseconds (currently 5 seconds).
    - healthcheck: {test: ["CMD", "redis-cli", "ping"], retries=10}
    - networks:
      - mastodon_internal
    - restart_policy: always
    # The UID and GID are hardcoded here and in user.present in
    # mastodon/user.sls because it's really hard to look them up. See
    # https://github.com/saltstack/salt/issues/63287#issuecomment-1377500491
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/redis

# Create directory for Mastodon config file.
/srv/mastodon.kingant.net/config:
  file.directory: []

# Install the Mastodon config file. It's just a bunch of environment
# variables that get mounted in Docker containers and then sourced.
# Note: Secrets must be added to this file by hand. See the "solution
# for storing secrets" comment in TODO.md.
/srv/mastodon.kingant.net/config/env_vars:
  file.managed:
    - source: salt://mastodon/files/srv/mastodon.kingant.net/config/env_vars
    - user: mastodon
    - group: mastodon
    - mode: 600
    # Do not modify the file if it already exists. This allows Salt to
    # create the initial version of the file while being careful not to
    # overwrite it once the secrets have been added.
    - replace: False
    - require:
      - file: /srv/mastodon.kingant.net/config

# Docker container for Mastodon web.
mastodon-web:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && rm -f /mastodon/tmp/pids/server.pid && bundle exec rails s -p 3000"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
      - /srv/mastodon.kingant.net/www/system:/opt/mastodon/public/system:rw
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "wget -q --spider --proxy=off http://localhost:3000/health || exit 1"], retries=10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - port_bindings:
      - 3000:3000
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars

# Docker container for Mastodon streaming.
mastodon-streaming:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && node ./streaming"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "wget -q --spider --proxy=off http://localhost:4000/api/v1/streaming/health || exit 1"], retries=10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - port_bindings:
      - 4000:4000
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars

# Docker container for Mastodon sidekiq.
mastodon-sidekiq:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && bundle exec sidekiq"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
      - /srv/mastodon.kingant.net/www/system:/opt/mastodon/public/system:rw
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "ps aux | grep '[s]idekiq\ 6' || false"], retries=10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars

Future Thoughts

I’d love to see more providers offering ActivityPub-as-a-service. There are some already, but they’re a bit pricey. And it feels like an immature product space. I could see companies wanting to use their own domain for their support and marketing accounts, e.g. “@support@delta.com” instead of something like “@deltasupport@mastodon.social”.
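The custom-domain piece is mostly just WebFinger: when someone looks up @support@delta.com, their server fetches a well-known URL on that domain, which points at the actual actor. An illustrative sketch, not a real response:

GET https://delta.com/.well-known/webfinger?resource=acct:support@delta.com

{
  "subject": "acct:support@delta.com",
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://delta.com/users/support"
    }
  ]
}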

I’d love to see a lighter-weight alternative to Mastodon that is targeted toward small installations like mine. Like removing Redis as a requirement. Maybe getting rid of on-disk storage of files and just using the DB. Doing a better job of automatically cleaning up old, cached media. Maybe this is Pleroma. Or maybe one of the other options.

Footnotes

  1. Ohhhh I think I don’t want to try to gather references. The Twitter Under Elon Musk Wikipedia page lists some things. This possibly-paywalled Vanity Fair article covers a few pre-Twitter things. There’s this exchange on Twitter where a guy was trying to figure out if he had been fired, and he sounds like a pretty good guy! And Elon was super rude: Thread one, thread two, and thread three. Labeling NPR as “state-affiliated media,” the same term used for propaganda accounts in Russia and China.

Let’s Encrypt is Fantastic

Apparently I haven’t posted this anywhere so I just want to go on record: Let’s Encrypt is amazing and fantastic. In five years the percentage of web pages loaded by Firefox using HTTPS went from 30% to 80%. That’s huge! A really impressive accomplishment1. And ACME, the protocol for validating domain ownership, is also great. A massive improvement over the old process of getting certificates. And all those certificates are only valid for 90 days, instead of the old standard of a year or more! So good.
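For anyone unfamiliar, the most common ACME challenge, HTTP-01, is delightfully simple: the CA hands your client a token, and the client proves control of the domain by serving a “key authorization” at a well-known path. Sketching from memory:

# The CA fetches this URL on your domain:
http://example.com/.well-known/acme-challenge/<token>

# And expects the response body to be the token joined with a
# base64url-encoded hash (JWK thumbprint) of your ACME account key:
<token>.<account-key-thumbprint>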

A Quibble

I don’t like how Certbot is configured. To keep things in perspective let me say that this is just a minor usability complaint. It doesn’t stop Let’s Encrypt from being fantastic.

Certbot is a tool for obtaining certificates from Let’s Encrypt for HTTPS and other services. The configuration file is written automatically when the certbot command is run. Users are discouraged2 from modifying the configuration file directly. I can’t recall any other tool with this behavior. I don’t get it. Like, why? Sysadmins are used to writing config files, so why is it important for this to be different? Maybe it’s supposed to be easier for users? But is it?

I guess this behavior is usually fine and not a problem. I find it inconvenient because I manage my two cloud servers using Salt (a configuration-as-code tool). It’s easy to place a config file on the server and apply changes to it. But since that’s discouraged, I settled on running the certbot command, but only if the config file does not exist yet. This is what I did, in case it helps anyone:

{% macro configure_certbot(primary_domain, additional_domains = none, require_file = none) %}
{#
  @param require_file We use Certbot's "webroot" authentication plugin.
         Certbot puts a file in our web server's root directory. This
         means we need a working web server. This parameter should be a
         Salt state that ensures Nginx is up and running for this
         domain.
#}
{# Don't attempt this in VirtualBox because it will always fail. #}
{% if grains['virtual'] != 'VirtualBox' %}
run-certbot-{{ primary_domain }}:
  cmd.run:
    - name: "certbot certonly
        --non-interactive
        --quiet
        --agree-tos
        --email mark@kingant.net
        --no-eff-email
        --webroot
        --webroot-path /srv/{{ primary_domain }}/www
        --domains {{ primary_domain }}{{ ',' if additional_domains else '' }}{{ additional_domains|join(',') if additional_domains else '' }}
        --deploy-hook 'install --owner=root     --group=root     --mode=444 --preserve-timestamps ${RENEWED_LINEAGE}/fullchain.pem /etc/nginx/{{ primary_domain }}_cert.pem &&
                       install --owner=www-data --group=www-data --mode=400 --preserve-timestamps ${RENEWED_LINEAGE}/privkey.pem   /etc/nginx/{{ primary_domain }}_cert.key &&
                       service nginx reload'"
    # Certbot saves the above arguments into a conf file and
    # automatically renews the certificate as needed so we only need to
    # run this command once. This would need to be done differently if
    # we ever want to change the renewal config via Salt. The Certbot
    # docs[1] describe using --force-renewal to force the renewal conf
    # file to be updated. So we could figure out a way to do that once
    # via Salt. Or we could manage the renewal conf file directly via
    # Salt.
    #
    # [1] https://eff-certbot.readthedocs.io/en/stable/using.html#modifying-the-renewal-configuration-of-existing-certificates
    - unless:
      - fun: file.file_exists
        path: /etc/letsencrypt/renewal/{{ primary_domain }}.conf
    - require:
      - pkg: certbot
      - file: {{ require_file }}
{% endif %}
{% endmacro %}

{{ configure_certbot('kingant.net', additional_domains=['www.kingant.net'], require_file='/etc/nginx/sites-enabled/kingant.net') }}

It’s a bit messy and I don’t know what I’ll do when I need to change the config. If you want certbot to update the config file you have to use --force-renewal, but I certainly wouldn’t want to do that every time I apply my configuration state to my servers. I think I’ll have to either run certbot --force-renewal by hand (fine, but it loses the benefits of configuration-as-code), or have Salt manage the config file (discouraged by the official docs). Either option is fine; it just feels like a dumb problem to have.
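The by-hand version would presumably look something like this, reusing the flags from the macro above:

sudo certbot certonly --force-renewal \
    --webroot --webroot-path /srv/kingant.net/www \
    --domains kingant.net,www.kingant.net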

I’m not the only one who has been inconvenienced by this. A quick search turned up this question thread and this feature request ticket.

Anyway, Let’s Encrypt really is fantastic! This one usability complaint for my atypical usage pattern is super minor.

Footnotes

  1. Yeah sure, Let’s Encrypt isn’t solely responsible. There had been a push to encrypt more sites post-Snowden anyway (e.g. Cloudflare in 2014). And there’s no way to know how big of an impact Let’s Encrypt had. Buuuuut, I bet it was huge. And yeah, it wasn’t Let’s Encrypt by themselves, hosts like Squarespace, DigitalOcean, and WP Engine have also done their part.
  2. “it is also possible to manually modify the certificate’s renewal configuration file, but this is discouraged since it can easily break Certbot’s ability to renew your certificates.”

Using Salt to Manage Server Configuration

Background

I’ve been using Salt (for clarity and searchability it’s also sometimes referred to as Salt Project or Salt Stack) to manage the configuration of my web server since 2014. It’s similar to Puppet, Chef (I guess they call it “Progress Chef” now), and Ansible.

At Meebo we used Puppet to manage our server config. This was like maybe 2008 through 2012. It was ok. I don’t remember the specifics but I felt like it could have been better. I don’t remember if there were fundamental problems or if I just felt that it was hard to use.

Anyway, when we chose the configuration management tech at Honor in 2014 we looked for a better option. We made a list of the leading contenders. It was a toss-up between Salt and Ansible. They both seemed great. I don’t remember why we chose Salt. Maybe it seemed a little easier to use?

I Like It

I started using it for my personal web server around the same time. I’ve been happy with it. The main benefit is that it’s easier to update to a newer version of Ubuntu LTS every 2 or 4 years. My process is basically:

  1. Use Vagrant to run the new Ubuntu version locally. Tweak the config as needed to get things working (update package names, change service files from SysV init to systemd, etc.), and test.
  2. Provision a new EC2 instance with the new Ubuntu version. Apply the Salt states to the new server.
  3. Migrate data from the old server to the new server and update the Elastic IP to point to the new server.
  4. Verify that everything is good then terminate the old server.

It’s an upfront time investment to write the Salt rules, but it makes the above migration fairly easy. I run it in a “masterless” configuration, so I’m not actually running the Salt server. Rather, I have my server-config repo checked out on my EC2 instance.
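Masterless mode means applying the config is just a local command, run on the instance with the repo checked out. Something like this (assuming file_roots in /etc/salt/minion points at the repo):

sudo salt-call --local state.apply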

Weaknesses

Salt does have weaknesses. Since the beginning I’ve felt that their documentation could be more clear. It’s hard for me to be objective since I’ve been using it for so long, but here are a few minor gripes:

  • Some of the terminology is unclear. For example, I think the things in this list are typically referred to as “states,” but the top of that page calls them “state modules” even though there is a different set of things that are called modules. Additionally, the rules that I write to dictate how my server should be configured are also referred to as states, I think? And it’s not clear what modules are or when or how you would use them. There are often modules with the same name as a state but with different functions. (See the short example after this list.)
  • This page about Salt Formulas has a ton of advice about good conventions for writing Salt states. That’s great, but why is it on this page? Shouldn’t it be in the “Using Salt” section of the documentation instead of here, in the documentation intended for people developing Salt itself?
  • Navigating the Salt documentation is hard. Additionally there’s at least one item that’s cut off at the bottom of the right-hand nav menu on this page. The last item I see is “Windows” but I know there is at least a “Developing Salt” section. Possibly also Release Notes, Venafi Tools for Salt, and Glossary.
  • The term “high state” feels unnatural to me. I think it has some meaning and if you understand how Salt works then maybe there’s a moment of clarity where the pieces fit together. But mostly it feels jargony.
  • It’s hard to have a state key off the result of an arbitrary command that runs mid-way through applying your configuration. There’s a thing called “Slots” that tries to solve this problem but it’s hard to use.
  • Also, speaking of Slots, why is it called “Slots”? And why is the syntax so awkward? Also I found the documentation hard to read. Partially because it feels jargony. Also there’s some awkward repetition (“Slots extend the state syntax and allows you to do things right before the state function is executed. So you can make a decision in the last moment right before a state is executed.”) and clunky grammar (“There are some specifics in the syntax coming from the execution functions nature and a desire to simplify the user experience. First one is that you don’t need to quote the strings passed to the slots functions. The second one is that all arguments handled as strings.”).
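To illustrate the naming collision from the first bullet, here’s a contrived sketch. file.managed is a state, declared in an SLS file, while file.file_exists is an execution module function that is run imperatively, despite the identical “file” prefix:

# A *state*: declares desired configuration in an SLS file.
/etc/motd:
  file.managed:
    - contents: Hello from Salt

# An *execution module* function: imperative, run directly, e.g.:
#   salt-call --local file.file_exists /etc/motd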

Summary

So I’m happy with it and intend to keep using it. I suspect other options have their own flaws. I have a vague impression that Ansible is a little more widely used, which is worth considering.

Also the modern approach to running your own applications in the cloud is:

  • Build Docker containers for your applications.
  • Either deploy your containers directly to a cloud provider like Amazon Elastic Container Service, or deploy them using Kubernetes.

So there’s less of a need to do configuration management like this on hosts. But it’s probably still valuable for anything you don’t want to run in a container (mail server? DB server? VPN server?).


Firefox as a Snap package in Ubuntu 22.04

In Ubuntu 22.04, Firefox is installed from a Snap package rather than a deb like most other things, and the update experience is awful. I get this notification:

Screenshot of a notification that says "Pending update of 'firefox' snap. Close the app to avoid disruptions (11 days left)"

However, closing Firefox and reopening it does not cause it to be updated. Apparently you have to either close it and leave it closed for a few hours until the Snap service’s next periodic update (which seems to happen every 6 hours or so), or close it and run a command to cause the Snap to update.
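That command being, I believe, simply this, run while Firefox is closed:

sudo snap refresh firefox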

This is a terrible user experience. Nothing in the messaging informs me about any of the above. Also, it’s absolutely unreasonable to expect the user to leave their browser closed for 3 hours (on average) until an update happens, and expecting a user to run a command manually is a poor experience. See this Stack Exchange post for other people who were confused and annoyed by this behavior. Lots of people pointed it out in the comments on this post, too. Auto-updates should just happen, and all the user should need to do is restart the application (assuming the application isn’t able to dynamically reload the affected files).

I also don’t understand why the Snap isn’t updated while the application is running. Linux is generally able to modify files that are in use. It’s something I see as a huge advantage Linux has over Windows (for more discussion see this Stack Exchange question, its answers, and comments). It’s plausible that some applications could misbehave if their files are changed while running (maybe Firefox suffers from this?). But then I wonder what the update experience would be like if the user isn’t the administrator. Does the app get killed at some point so the update can happen?


NASA Deep Space Network (DSN)

A random thing I stumbled upon and thought was interesting: NASA runs a website where you can see the current send/receive status of each antenna in the DSN (Deep Space Network) as well as the spacecraft it’s communicating with.

The DSN is a network of antennae at three locations around the world (California, Spain, Australia) that send and receive messages from various satellites and other spacecraft. The sites are roughly 120 degrees apart in longitude to give complete coverage as the Earth rotates. The Wikipedia page has a ton more info.
