Mastodon and ActivityPub

I’ve started using Mastodon. It’s an open alternative to Twitter. “Open” meaning there’s no central controlling entity. Tech-savvy people can run their own server and servers can talk to each other (referred to as federation). If you’re interested in joining you can find a server to join, sign up, and follow me: @mark@kingant.net.

Why???

Part of me has always disliked posting on Facebook and Twitter. They’re walled gardens; I’d rather self-host. I like keeping a record of my posts. Twitter allows downloading an archive of your posts, and Facebook might, too, but I’d rather control the system of record myself. Also I don’t love being used by a company so they can make money from ads. Wise man once say, “when you’re not paying for the product you are the product.”

And then Elon Musk being a jerk1 drove me to make a change.

The open/federated nature is appealing to me. It’s similar to how email works. And XMPP.

ActivityPub

ActivityPub is the important thing. There are other ActivityPub services besides Mastodon, for example Pixelfed (an Instagram clone) and PeerTube (which is tailored for video sharing). Because they all speak the same protocol, they can interact with each other: on Mastodon I can follow a Pixelfed user. It’s great.

And actually I think it’s weird that these services are treated as separate things. Mastodon is just an ActivityPub service that looks like Twitter and Pixelfed is just an ActivityPub service that looks like Instagram. I’m not even sure the functionality is very different. It makes me wonder if they could be written as different UIs atop a common server. But whatever, I’m not super familiar with it. Maybe there are good reasons for them to be separate.
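
That interoperability rests on shared protocols. Account discovery, for example, uses WebFinger: given @mark@kingant.net, a client asks kingant.net which ActivityPub actor that account maps to. A rough sketch of the lookup (the curl flags are just my habit):

```shell
# WebFinger lookup: resolve an account name to its ActivityPub actor URL.
acct="acct:mark@kingant.net"
curl -fsS "https://kingant.net/.well-known/webfinger?resource=${acct}" \
  || echo "lookup failed (no network access here?)"
```

The JSON response includes a link of type "application/activity+json" pointing at the actor document, which is where following and posting start from.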

Running My Own Server

And because I’m a huge nerd I’m running my own server. I did it partially as a learning exercise. But also now my ID can be my email address (“@mark@kingant.net”) instead of something like “@markdoliner@someone-elses-server.example.com”.

It took some work. The developers haven’t prioritized ease of installation (which is perfectly reasonable). It runs as two processes: The web/API service and a background job queue worker. It also requires PostgreSQL and Redis. As mentioned previously, I use Salt to manage my cloud servers. Initially I tried running everything directly on an EC2 instance, but Mastodon is written in Ruby and installing and managing Ruby and building Mastodon via Salt rules is a lot of work.

There are official Docker images so I looked into that instead. The docker-compose.yml file was immensely helpful. At first I tried running containers in Amazon Elastic Container Service (ECS). But configuring it is way more elaborate than you would imagine. And I’m not sure but it might have required paying for separate EC2 instances per container, though maybe it’s possible to avoid that if you go a step further and use Amazon Elastic Kubernetes Service (EKS). But of course that’s even more effort to configure.

What I did instead is run the containers myself on a single EC2 instance. Three containers, specifically: web/API, background job queue worker, and Redis. I’m running PostgreSQL directly on the host because it’s super easy. Obviously I could have used RDS for PostgreSQL, and that’s certainly a nice managed option, but it’s also more expensive.

I wanted to run Redis on the host too, but it’s hard to configure it to allow access from the Mastodon containers while disallowing access from outside networks. Though even with PostgreSQL I ended up configuring it to accept connections from the world (blocked via AWS security group, of course, and PostgreSQL is configured such that users can only authenticate from localhost or the Docker network). So this feels like a decent compromise. The maintenance overhead is manageable and it’s fairly cheap: I’m paying around $3 to $4 per month ($83 up front for a t4g.micro 3-year reservation plus $1 to $2 per month for EBS storage).
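
For illustration, the pg_hba.conf side of that compromise might look something like this (the Docker subnet and auth method here are assumptions; check the actual subnet with “docker network inspect mastodon_internal”):

```
# TYPE  DATABASE  USER      ADDRESS         METHOD
host    mastodon  mastodon  127.0.0.1/32    scram-sha-256
host    mastodon  mastodon  172.18.0.0/16   scram-sha-256
```

Combined with listen_addresses accepting remote connections and the AWS security group blocking the port externally, only localhost and the Docker network can actually authenticate.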

I don’t want to make my Salt config repo public but here are the states for the Docker containers in case it helps anyone:

include:
  - mastodon/docker
  - mastodon/user

# A Docker network for containers to access the world.
mastodon_external:
  docker_network.present:
    - require:
      - sls: mastodon/docker

# A Docker network for intra-container communication.
mastodon_internal:
  docker_network.present:
    - internal: True
    - require:
      - sls: mastodon/docker

# Create directory for Mastodon. We leave ownership as the default
# (root) because the mastodon user shouldn't need to create files here
# and using root is a little more restrictive. It prevents the Mastodon
# processes (running in Docker as the mastodon user) from causing
# mischief (e.g. modifying the static web files).
/srv/mastodon.kingant.net:
  file.directory: []

# Create private directory for Mastodon Redis data.
/srv/mastodon.kingant.net/redis:
  file.directory:
    - user: mastodon
    - group: mastodon
    - mode: 700
    - require:
      - sls: mastodon/user

# Redis Docker container.
# We're running it as a Docker container rather than on the host mostly
# because we would need to change the host Redis's config to bind to the
# Docker network IP (by default it only binds to localhost) and that
# requires modifying the config file from the package (there is no
# conf.d directory). That means we would stop getting automatic config
# file updates on new package versions, which is unfortunate.
# Of course if we ever want to change any other config setting then
# we'll have the same problem with the Docker container. Though it's maybe
# still slightly cleaner having Redis accessible only over the
# mastodon_internal network, because Redis isn't big on data isolation.
# The options are:
# 1. Use "namespaces." Meaning "prefix your keys with a namespace string
#    of your choice, maybe with a trailing  colon."
# 2. Use a single server but have different apps use different DBs
#    within Redis. This is a thing... but seems problematic because
#    the DBs are numbered sequentially so what happens if you remove one
#    in the middle?
# 3. Use separate servers. This probably makes the most sense (and
#    provides the strongest isolation).
redis:
  docker_container.running:
    - image: redis:7-alpine
    - binds:
      - /srv/mastodon.kingant.net/redis:/data:rw
    # Having a healthcheck isn't important for Redis but it existed in
    # Mastodon's example docker-compose file so I included it here. The
    # times are in nanoseconds (currently 5 seconds).
    - healthcheck: {test: ["CMD", "redis-cli", "ping"], retries: 10}
    - networks:
      - mastodon_internal
    - restart_policy: always
    # The UID and GID are hardcoded here and in user.present in
    # mastodon/user.sls because it's really hard to look them up. See
    # https://github.com/saltstack/salt/issues/63287#issuecomment-1377500491
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/redis

# Create directory for Mastodon config file.
/srv/mastodon.kingant.net/config:
  file.directory: []

# Install the Mastodon config file. It's just a bunch of environment
# variables that get mounted in Docker containers and then sourced.
# Note: Secrets must be added to this file by hand. See the "solution
# for storing secrets" comment in TODO.md.
/srv/mastodon.kingant.net/config/env_vars:
  file.managed:
    - source: salt://mastodon/files/srv/mastodon.kingant.net/config/env_vars
    - user: mastodon
    - group: mastodon
    - mode: 600
    # Do not modify the file if it already exists. This allows Salt to
    # create the initial version of the file while being careful not to
    # overwrite it once the secrets have been added.
    - replace: False
    - require:
      - file: /srv/mastodon.kingant.net/config

# Docker container for Mastodon web.
mastodon-web:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && rm -f /mastodon/tmp/pids/server.pid && bundle exec rails s -p 3000"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
      - /srv/mastodon.kingant.net/www/system:/opt/mastodon/public/system:rw
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "wget -q --spider --proxy=off http://localhost:3000/health || exit 1"], retries: 10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - port_bindings:
      - 3000:3000
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars

# Docker container for Mastodon streaming.
mastodon-streaming:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && node ./streaming"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "wget -q --spider --proxy=off http://localhost:4000/api/v1/streaming/health || exit 1"], retries: 10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - port_bindings:
      - 4000:4000
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars

# Docker container for Mastodon sidekiq.
mastodon-sidekiq:
  docker_container.running:
    - image: ghcr.io/mastodon/mastodon:v4.1
    - command: bash -c "set -o allexport && source /etc/opt/mastodon/env_vars && set +o allexport && bundle exec sidekiq"
    - binds:
      - /srv/mastodon.kingant.net/config:/etc/opt/mastodon:ro
      - /srv/mastodon.kingant.net/www/system:/opt/mastodon/public/system:rw
    - extra_hosts:
      - host.docker.internal:host-gateway
    - healthcheck: {test: ["CMD-SHELL", "ps aux | grep '[s]idekiq 6' || false"], retries: 10}
    - networks:
      - mastodon_external
      - mastodon_internal
    - restart_policy: always
    - skip_translate: extra_hosts # Because Salt was complaining that "host-gateway" wasn't a valid IP.
    - user: 991:991
    - require:
      - sls: mastodon/docker
      - sls: mastodon/user
      - docker_network: mastodon_external
      - docker_network: mastodon_internal
      - file: /srv/mastodon.kingant.net/config/env_vars
    - watch:
      - file: /srv/mastodon.kingant.net/config/env_vars
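
One detail worth calling out: the “set -o allexport && source … && set +o allexport” incantation in the container commands is what turns the env_vars file into real environment variables, since plain KEY=value assignments in a sourced file aren’t exported by default. A minimal demo of the shell pattern (the file path and values are made up):

```shell
# Stand-in for the real env_vars file.
cat > /tmp/env_vars <<'EOF'
LOCAL_DOMAIN=kingant.net
SINGLE_USER_MODE=true
EOF

# With allexport on, every assignment made while sourcing is exported.
set -o allexport
. /tmp/env_vars
set +o allexport

# Child processes now see the variables.
sh -c 'echo "$LOCAL_DOMAIN"'   # → kingant.net
```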

Future Thoughts

I’d love to see more providers offering ActivityPub-as-a-service. There are some already, but they’re a bit pricey, and it feels like an immature product space. I could see companies wanting to use their own domain for their support and marketing accounts, e.g. “@support@delta.com” instead of something like “@deltasupport@mastodon.social”.

I’d love to see a lighter-weight alternative to Mastodon targeted toward small installations like mine. Things like removing Redis as a requirement, storing files in the DB instead of on disk, and doing a better job of automatically cleaning up old cached media. Maybe this is Pleroma. Or maybe one of the other options.

Footnotes

  1. Ohhhh I think I don’t want to try to gather references. The Twitter Under Elon Musk Wikipedia page lists some things. This possibly-paywalled Vanity Fair article covers a few pre-Twitter things. There’s this exchange on Twitter where a guy was trying to figure out if he had been fired, and he sounds like a pretty good guy! And Elon was super rude: Thread one, thread two, and thread three. Labeling NPR as “state-affiliated media,” the same term used for propaganda accounts in Russia and China.
Posted in All, Computers

Let’s Encrypt is Fantastic

Apparently I haven’t posted this anywhere so I just want to go on record: Let’s Encrypt is amazing and fantastic. In five years the percentage of web pages loaded by Firefox using HTTPS went from 30% to 80%. That’s huge! A really impressive accomplishment1. And ACME, the protocol for validating domain ownership, is also great. A massive improvement over the old process of getting certificates. And all those certificates are only valid for 90 days, instead of the old standard of a year or more! So good.

A Quibble

I don’t like how Certbot is configured. To keep things in perspective let me say that this is just a minor usability complaint. It doesn’t stop Let’s Encrypt from being fantastic.

Certbot is a tool for obtaining certificates from Let’s Encrypt for HTTPS and other services. The configuration file is written automatically when the certbot command is run, and users are discouraged2 from modifying it directly. I can’t recall any other tool that behaves this way. I don’t get it. Like, why? Sysadmins are used to writing config files; why is it important for this to be different? Maybe it’s supposed to be easier for users? But is it?

I guess this behavior is usually fine. I find it inconvenient because I manage my two cloud servers using Salt (a configuration-as-code tool). It’s easy to place a config file on a server and apply changes to it, but since that’s discouraged I settled on running the certbot command only if the config file does not exist. This is what I did, in case it helps anyone:

{% macro configure_certbot(primary_domain, additional_domains = none, require_file = none) %}
{#
  @param require_file We use Certbot's "webroot" authentication plugin.
         Certbot puts a file in our web server's root directory. This
         means we need a working web server. This parameter should be a
         Salt state that ensures Nginx is up and running for this
         domain.
#}
{# Don't attempt this in VirtualBox because it will always fail. #}
{% if grains['virtual'] != 'VirtualBox' %}
run-certbot-{{ primary_domain }}:
  cmd.run:
    - name: "certbot certonly
        --non-interactive
        --quiet
        --agree-tos
        --email mark@kingant.net
        --no-eff-email
        --webroot
        --webroot-path /srv/{{ primary_domain }}/www
        --domains {{ ([primary_domain] + (additional_domains or []))|join(',') }}
        --deploy-hook 'install --owner=root     --group=root     --mode=444 --preserve-timestamps ${RENEWED_LINEAGE}/fullchain.pem /etc/nginx/{{ primary_domain }}_cert.pem &&
                       install --owner=www-data --group=www-data --mode=400 --preserve-timestamps ${RENEWED_LINEAGE}/privkey.pem   /etc/nginx/{{ primary_domain }}_cert.key &&
                       service nginx reload'"
    # Certbot saves the above arguments into a conf file and
    # automatically renews the certificate as needed so we only need to
    # run this command once. This would need to be done differently if
    # we ever want to change the renewal config via Salt. The Certbot
    # docs[1] describe using --force-renewal to force the renewal conf
    # file to be updated. So we could figure out a way to do that once
    # via Salt. Or we could manage the renewal conf file directly via
    # Salt.
    #
    # [1] https://eff-certbot.readthedocs.io/en/stable/using.html#modifying-the-renewal-configuration-of-existing-certificates
    - unless:
      - fun: file.file_exists
        path: /etc/letsencrypt/renewal/{{ primary_domain }}.conf
    - require:
      - pkg: certbot
      - file: {{ require_file }}
{% endif %}
{% endmacro %}

{{ configure_certbot('kingant.net', additional_domains=['www.kingant.net'], require_file='/etc/nginx/sites-enabled/kingant.net') }}

It’s a bit messy and I don’t know what I’ll do when I need to change the config. If you want certbot to update the config file you have to use --force-renewal, but I certainly wouldn’t want to do that every time I apply my configuration state to my servers. I think I’ll have to either run certbot --force-renewal by hand (fine, but it loses the benefits of configuration-as-code) or have Salt manage the config file (discouraged by the official docs). Either option would work; it just feels like a dumb problem to have.
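
If I do end up doing it by hand, my understanding is it would be a one-off re-run of roughly the macro’s arguments plus --force-renewal, something like the following sketch (the domains and webroot path here are this site’s; substitute your own):

```shell
domains="kingant.net,www.kingant.net"
certbot certonly --force-renewal --webroot \
    --webroot-path /srv/kingant.net/www \
    --domains "$domains" \
  || echo "certbot run failed or certbot not installed"
```

Certbot would then save the new arguments into the renewal conf file, and subsequent automatic renewals pick them up.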

I’m not the only one who has been inconvenienced by this. A quick search turned up this question thread and this feature request ticket.

Anyway, but Let’s Encrypt really is fantastic! This one usability complaint for my atypical usage pattern is super minor.

Footnotes

  1. Yeah sure, Let’s Encrypt isn’t solely responsible. There had been a push to encrypt more sites post-Snowden anyway (e.g. Cloudflare in 2014). And there’s no way to know how big of an impact Let’s Encrypt had. Buuuuut, I bet it was huge. And yeah, it wasn’t Let’s Encrypt by themselves, hosts like Squarespace, DigitalOcean, and WP Engine have also done their part.
  2. “it is also possible to manually modify the certificate’s renewal configuration file, but this is discouraged since it can easily break Certbot’s ability to renew your certificates.”
Posted in All, Computers

Using Salt to Manage Server Configuration

Background

I’ve been using Salt (for clarity and searchability it’s also sometimes referred to as Salt Project or Salt Stack) to manage the configuration of my web server since 2014. It’s similar to Puppet, Chef (I guess they call it “Progress Chef” now), and Ansible.

At Meebo we used Puppet to manage our server config. This was like maybe 2008 through 2012. It was ok. I don’t remember the specifics but I felt like it could have been better. I don’t remember if there were fundamental problems or if I just felt that it was hard to use.

Anyway, when we chose the configuration management tech at Honor in 2014 we looked for a better option. We made a list of the leading contenders. It was a toss up between Salt and Ansible. They both seemed great. I don’t remember why we chose Salt. Maybe it seemed a little easier to use?

I Like It

I started using it for my personal web server around the same time. I’ve been happy with it. The main benefit is that it’s easier to update to a newer version of Ubuntu LTS every 2 or 4 years. My process is basically:

  1. Use Vagrant to run the new Ubuntu version locally. Tweak the config as needed to get things working (update package names, change service files from SysV init to systemd, etc.), and test.
  2. Provision a new EC2 instance with the new Ubuntu version. Apply the Salt states to the new server.
  3. Migrate data from the old server to the new server and update the Elastic IP to point to the new server.
  4. Verify that everything is good then terminate the old server.

It’s an upfront time investment to write the Salt rules, but it makes the above migration fairly easy. I run it in a “masterless” configuration, so I’m not actually running the Salt server. Rather, I have my server-config repo checked out on my EC2 instance.
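
Concretely, applying the config in masterless mode is one command run on the instance itself (the file-root path is an assumption; point it at wherever the repo is checked out, and run it as root):

```shell
# Apply all Salt states from the local checkout; no master needed.
repo_root=/srv/server-config/salt
salt-call --local --file-root="$repo_root" state.apply \
  || echo "salt-call not available here"
```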

Weaknesses

Salt does have weaknesses. Since the beginning I’ve felt that their documentation could be more clear. It’s hard for me to be objective since I’ve been using it for so long, but here are a few minor gripes:

  • Some of the terminology is unclear. For example, I think the things in this list are typically referred to as “states,” but the top of that page calls them “state modules,” even though there is a different set of things that are called modules. Additionally, the rules that I write to dictate how my server should be configured are also referred to as states, I think? And it’s not clear what modules are or when or how you would use them. There are often modules with the same name as a state but with different functions.
  • This page about Salt Formulas has a ton of advice about good conventions for writing Salt states. That’s great, but why is it on this page? Shouldn’t it be in the “Using Salt” section of the documentation instead of here, in the documentation intended for people developing Salt itself?
  • Navigating the Salt documentation is hard. Additionally there’s at least one item that’s cut off at the bottom of the right hand nav menu on this page. The last item I see is “Windows” but I know there is at least a “Developing Salt” section. Possibly also Release Notes, Venafi Tools for Salt, and Glossary.
  • The term “high state” feels unnatural to me. I think it has some meaning and if you understand how Salt works then maybe there’s a moment of clarity where the pieces fit together. But mostly it feels jargony.
  • It’s hard to have a state key off the result of an arbitrary command that runs mid-way through applying your configuration. There’s a thing called “Slots” that tries to solve this problem but it’s hard to use.
  • Also, speaking of Slots, why is it called “Slots”? And why is the syntax so awkward? Also I found the documentation hard to read. Partially because it feels jargony. Also there’s some awkward repetition (“Slots extend the state syntax and allows you to do things right before the state function is executed. So you can make a decision in the last moment right before a state is executed.”) and clunky grammar (“There are some specifics in the syntax coming from the execution functions nature and a desire to simplify the user experience. First one is that you don’t need to quote the strings passed to the slots functions. The second one is that all arguments handled as strings.”).
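
For the record, here’s roughly what Slots usage looks like, as best I can tell from the docs. The __slot__ value is evaluated just before the state runs, rather than at render time (the state and grain here are just examples):

```
# Write the minion's ID to a file, resolved at state-execution time.
write-host-id:
  file.managed:
    - name: /etc/host-id
    - contents: __slot__:salt:grains.get(id)
```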

Summary

So I’m happy with it and intend to keep using it. I suspect other options have their own flaws. I have a vague impression that Ansible is a little more widely used, which is worth considering.

Also the modern approach to running your own applications in the cloud is:

  • Build Docker containers for your applications.
  • Either deploy your containers directly to a cloud provider like Amazon Elastic Container Service, or deploy them using Kubernetes.

So there’s less of a need to do configuration management like this on hosts. But it’s probably still valuable for anything you don’t want to run in a container (mail server? DB server? VPN server?).

Posted in Computers

Firefox as a Snap package in Ubuntu 22.04

In Ubuntu 22.04 Firefox is installed as a Snap package rather than a deb like most other things, and the update experience is awful. I get this notification:

Screenshot of a notification that says "Pending update of 'firefox' snap. Close the app to avoid disruptions (11 days left)"

However, closing Firefox and reopening it does not cause it to be updated. Apparently you have to either close it and leave it closed for a few hours until the Snap service’s next periodic update (which maybe happens every 6 hours), or close it and run a command to cause the Snap to update.
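
For anyone else hitting this, the command in question is snap refresh (run as root):

```shell
app=firefox
# With the app closed, apply the pending update immediately.
snap refresh "$app" || echo "refresh not possible here"
# Show the refresh schedule and when the next automatic window opens.
snap refresh --time || echo "snapd not running here"
```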

This is a terrible user experience. Nothing in the messaging informs me of any of this. It’s also absolutely unreasonable to expect the user to leave their browser closed for 3 hours (on average) until an update happens, and expecting a user to run a command manually is a poor experience. See this Stack Exchange post for other people who were confused and annoyed by this behavior. Lots of people pointed it out in the comments on this post, too. Auto-updates should just happen, and all the user should need to do is restart the application (assuming the application isn’t able to dynamically reload the affected files).

I also don’t understand why the Snap isn’t updated while the application is running. Linux is generally able to modify files that are in use, which I see as a huge advantage Linux has over Windows (for more discussion see this Stack Exchange question, answers, and comments). It’s plausible that some applications could misbehave if their files are changed while running (maybe Firefox suffers from this?). But then I wonder what the update experience would be like if the user isn’t the administrator. Does the app get killed at some point so the update can happen?

Posted in Computers

NASA Deep Space Network (DSN)

A random thing I stumbled upon and thought was interesting: NASA runs a website where you can see the current send/receive status of each antenna in the DSN (Deep Space Network) as well as the spacecraft it’s communicating with.

The DSN is a network of antennae at three locations around the world (California, Spain, Australia) that send messages to and receive messages from various satellites and other spacecraft. The locations are roughly 120 degrees apart to give complete sky coverage. The Wikipedia page has a ton more info.

Posted in All

SlowCOVIDNC app

Update 2022-08-13: A few months ago the app started crashing and kept crashing so I uninstalled it. Looking at the comments on the Play Store, other people have had the same problem. That was on a three-and-a-half-year-old phone (a Google Pixel 2). I have since upgraded to a newer phone, so maybe the app would work now, but I haven’t tried. One thought I had is that maintaining this at the state level is more work than doing it at the national level, which is an unfortunate situation. Also I wonder if exposure notifications currently work across states, e.g. if someone from New York traveled to California.

Original Post Follows

I installed North Carolina’s SlowCOVIDNC virus exposure notification app and if you live in NC I encourage you to install it, too. You can get it from Apple App Store or Google Play.

So if you’ve been hesitating and wondering, “what do my computer programmer and security-conscious friends think about this app?” the answer is: I think you should install it.

I try harder than most people to avoid installing apps on my phone. I’m wary of apps causing problems either through incompetence or malicious intent. I think the risks in this case are low (much lower than any other random app, like a game or weather app) and the potential to reduce virus spread rate exists (and I barely even go out), so it’s worth it.

Posted in Computers

Comparing Signatures on Mail-In Ballots

Voting by mail? Did you know that 28 states use signature matching to verify ballots?

They compare the signature on your ballot with one they have on file from voter registrations, ballot applications, or the D.M.V. California apparently does this. NC apparently does not; we have to have two witness signatures instead (Edit: actually just one witness. Wasn’t it two for the primary?? It definitely appears to be just one now).

All but 4 of those states have a process to allow voters to fix a mismatched signature. This process is sometimes referred to as “curing.”

So if you’re voting by mail, you may wish to be careful with your signature. Or if you think your signature is inconsistent and you’re in a swing state (Edit: or you’re voting for any close race that you care about) and don’t have a lot of confidence in the cure process and want to be extra sure your vote is counted then you may wish to vote in person.

Wondering what your state does? There’s a color-coded map near the bottom of this NY Times article.

I had a thought that there was some federal decision stating that all states must give voters an opportunity to fix invalid ballots, but I can’t find anything indicating such. Somewhat related, this WRAL article discusses uncertainty about how NC will handle incomplete vote-by-mail ballots.

Posted in All

Two NC Races in the November Election

For those of you who might otherwise vote a straight Republican ticket in NC, I’d like to draw your attention to two races.

State Auditor

An N&O article says this about the Republican candidate: “put on probation in connection with a stalking charge, and also has been accused of refusing to obey orders from police, causing a scene at a concert and threatening a man’s family over money.” To be fair the article also says he “has not been convicted on the criminal charges.” And I guess he has a master’s degree in public administration, which is maybe relevant. But! The Democratic candidate is so much better. Beth Wood has a degree in accounting and is a CPA. She’s the incumbent (first elected in 2008) and before that worked in the state auditor’s office as well as the state treasurer’s office. So she’s experienced and qualified. And political party shouldn’t even really come into play in this office!

Sources:

Lieutenant Governor

Sure, you might like some of the things Mark Robinson says. He hits a lot of talking points Republicans care about. But he also has no experience in state politics. As a reminder, the lieutenant governor presides over the state Senate, and I’d argue that he is not qualified to do so. He makes a lot of offensive statements on Facebook, and also says weird things like “The looming pandemic I’m most worried about is SOCIALISM” and “When the TRULY innocent are murdered leftists could care less. In fact, they champion such” and “When will the Crips, Bloods, and Planned Parenthood start believing black lives matter?” He sounds unfocused and angry at everything. He does not sound like someone who would help a legislative body operate more effectively. The Democratic candidate Yvonne Lewis Holley, on the other hand, sounds fine. She has served in the NC House since 2013, so she’s familiar with the legislative process. The issues she cares about demonstrate compassion, and I’d argue that that’s a valuable virtue.

Sources:

Posted in All