Posts

  • Managing Podman containers with systemd

    Since my early days in programming, I’ve always cared about isolated development environments. Of course, this wasn’t relevant when developing C applications with no external dependencies in Code::Blocks, but it soon became a necessity when I had to deal with Python packages through virtualenv. The same happened with Ruby versions using rbenv. Later I settled on asdf to handle multiple Go/Node.js versions, which basically solved the problem for good for many programming languages and even some CLI tools that are sensitive to versioning, like kubectl.

    But dealing with multiple runtimes or packages is just one piece of the equation in the grand scheme of cleanly handling dependencies. Sometimes you have to worry about the versions of the external services a project makes use of, like a database or cache system. This, too, has been a solved problem since Docker came into the picture over 10 years ago. I remember that the Docker Compose mantra when it launched (still called Fig) was: “no more installing Postgres on your laptop!”. Things just get a bit more complicated when you don’t use the original Docker implementation, but another container management system, like Podman.

    Podman offers several advantages over Docker. It can run containers without requiring root access; it doesn’t depend on a service daemon running all the time; and it doesn’t require a paid subscription depending on the usage. It’s a simpler tool, with an almost 1:1 compatible CLI overall. The main difference is that it doesn’t seamlessly handle containers with --restart flags. I mean, of course it does restart containers when their processes are interrupted, but they won’t be brought back up after a host reboot - which tends to happen from time to time on a workstation.

    When looking into how to solve this problem, I realised that the podman generate command can create systemd service unit files. So instead of tinkering with how to integrate the two tools, figuring out the file syntax and functionality, it’s possible to just create a new systemd service for the desired container as if it were any other program/process. And the best part is that we can still do that without root, thanks to systemd user services.

    $ podman create --name redis -p 6379:6379 docker.io/redis:7
    Trying to pull docker.io/library/redis:7...
    (...)
    $ mkdir -p ~/.config/systemd/user/
    $ podman generate systemd --name redis > ~/.config/systemd/user/redis.service
    

    To increase the service’s reliability, it’s preferable to drop the PIDFile line from the configuration file. It typically looks like:

    PIDFile=/run/user/1000/containers/vfs-containers/c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb/userdata/conmon.pid
    

    This ensures that even if the container is recreated, like when updating its image, systemd won’t be referencing its older ID, as it will only care about its name. This can be done programmatically with sed:

    $ sed -i '/PIDFile/d' ~/.config/systemd/user/redis.service
    

    The generated file should be similar to:

    # container-redis.service
    # autogenerated by Podman 4.3.1
    # Sat Dec 23 17:18:01 -03 2023
    
    [Unit]
    Description=Podman container-redis.service
    Documentation=man:podman-generate-systemd(1)
    Wants=network-online.target
    After=network-online.target
    RequiresMountsFor=/run/user/1000/containers
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=on-failure
    TimeoutStopSec=70
    ExecStart=/usr/bin/podman start redis
    ExecStop=/usr/bin/podman stop  \
            -t 10 redis
    ExecStopPost=/usr/bin/podman stop  \
            -t 10 redis
    Type=forking
    
    [Install]
    WantedBy=default.target
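
    Since the unit file was modified after being generated, it doesn’t hurt to ask systemd to reload its unit definitions before starting the service:

    $ systemctl --user daemon-reload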
    

    The final step consists of starting the service and enabling it to launch on boot:

    $ systemctl --user start redis
    $ systemctl --user enable redis
    Created symlink /home/myhro/.config/systemd/user/default.target.wants/redis.service → /home/myhro/.config/systemd/user/redis.service.
    $ systemctl --user status redis
    ● redis.service - Podman container-redis.service
         Loaded: loaded (/home/myhro/.config/systemd/user/redis.service; enabled; preset: enabled)
         Active: active (running) since Sat 2023-12-23 17:25:40 -03; 13s ago
           Docs: man:podman-generate-systemd(1)
          Tasks: 17 (limit: 37077)
         Memory: 11.1M
            CPU: 60ms
         CGroup: /user.slice/user-1000.slice/[email protected]/app.slice/redis.service
                 ├─46851 /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/netns-102ff957-157c-adcb-bd4a-45e7e0d2a50f tap0
                 ├─46853 rootlessport
                 ├─46859 rootlessport-child
                 └─46869 /usr/bin/conmon --api-version 1 -c c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb -u c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb -r /usr/bin/crun -b /home/myhro/.local/share/(...)
    
    Dec 23 17:25:40 leptok redis[46869]: 1:C 23 Dec 2023 20:25:40.071 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * monotonic clock: POSIX clock_gettime
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Running mode=standalone, port=6379.
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Server initialized
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Loading RDB produced by version 7.2.3
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * RDB age 18 seconds
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * RDB memory usage when created 0.83 Mb
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Done loading RDB, keys loaded: 0, keys expired: 0.
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * DB loaded from disk: 0.000 seconds
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Ready to accept connections tcp
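
    One caveat worth noting: systemd user services normally run only while their user has an active session. For the container to really come up right after a reboot, before anyone logs in, lingering can be enabled for the user:

    $ loginctl enable-linger $USER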
    

    In summary, I quite liked how easy it was to leverage the strengths of both Podman and systemd in their integration. Being able to do that in a rootless way is definitely a huge plus. Before this, I always believed that managing Linux services was a root-only thing. But now that I think about it, I realize that when Docker was the only game in town, managing containers also required elevated privileges. I’m glad that we are moving away from this idea, piece by piece.

  • MikroTik hAP ac3 Review

    As I mentioned in the previous review, my experience with the MikroTik router that only supported wired networking encouraged me to look for a Wi-Fi one. After browsing the available models, the slogan of a particular one, the hAP ac3, caught my attention:

    Forget about endless searching for the perfect router and scrolling through an eternity of reviews and specifications! We have created a single affordable home access point that has all the features you might need for years to come.

    A highly configurable router, with gigabit wired networking and dual-band 2.4 and 5 GHz Wi-Fi at an affordable price (abroad, where it costs US$ 99, not here where it costs R$ 800)? It seemed like exactly what I was looking for.

    Findings

    After spending 3 hours configuring the wired router, I thought to myself: “Ah, now that I know how MikroTik works, it’ll be easy. 15 or 20 minutes and everything will be working.” What a mistake. The interface is literally the same, with an additional Wireless option in the menu, but even setting a password on the unprotected Wi-Fi network was challenging. I really scratched my head trying to understand how things worked and spent another 2 hours configuring it the way I wanted.

    Configuring the 5 GHz Wi-Fi transmission, in particular, was quite difficult. It has a “radar detection” system to use the higher frequencies (5.5-5.6 GHz) that takes literally 10 minutes (!) on each boot to decide which one to use, a period during which the wireless network remains unavailable. To avoid that frustration, I manually chose a lower frequency option (5.1-5.2 GHz).
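
    In RouterOS terms, pinning the frequency is a single Terminal command on the wireless interface. A minimal sketch - the wlan2 interface name and the 5180 MHz value are illustrative assumptions, not taken from my actual configuration:

    /interface wireless set wlan2 frequency=5180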

    After everything was configured, I noticed that the 5 GHz Wi-Fi signal was weaker in other rooms than it used to be with my TP-Link router. Weak enough for iOS to automatically fall back to 4G. Along with the weaker signal came a drop in connection speed. On my MacBook, it fell from 400 to 200 Mbps, and on my iPhone from 200 to 100 Mbps, both measured on Fast.com in other rooms, compared to the TP-Link router I intended to replace. Although that would be enough bandwidth to cover most of my use cases, it seemed unacceptable to give up the speed I was used to, given the price and the quality I expected from the device.

    The solution was to go back to a setup identical to the one with the wired MikroTik: connecting the TP-Link router to the new MikroTik and using only the Wi-Fi from the former. With the TP-Link in router mode, the speed loss was the same. In access point mode, I achieved the same speed as when connecting the TP-Link directly to the modem or to the wired MikroTik.

    Conclusion

    It wasn’t a very wise decision to buy a more expensive model because of Wi-Fi and ultimately not use it, but the experience was valuable. It still solves my Dual WAN support issue, albeit in a less than ideal way, and I could return the borrowed MikroTik. I couldn’t pinpoint exactly why its 5 GHz network was so much slower than the TP-Link’s, but I’ve encountered similar situations caused by software (the same happened to me with DD-WRT) in a not-so-distant past. It’s not what I expected from MikroTik, a brand whose software is precisely its selling point, but who knows. Today, if I were to set up the same system without having the TP-Link router available, I would get a simpler wired MikroTik and connect a UniFi AP to it. It would be the best of both worlds, and the cost would be virtually the same.

  • MikroTik hEX (RB750Gr3) Review

    The MikroTik hEX (RB750Gr3) is a simple wired router - one of the cheapest from the Latvian company - that surely does its job. It’s really flexible, which is both a pro and a con. It’s the router with the largest number of configuration options that I have ever seen, including the possibility of making any sort of LAN/WAN combination out of the five available ports.

    Impressions

    • One of the biggest features of MikroTik’s RouterOS is its countless configuration options, available in the GUI both through the web and via the native application WinBox. The problem is that a considerable chunk of its documentation, both in the official channels and in random tips scattered around the Internet, is focused on its CLI (called just Terminal).
    • It has a non-standard factory reset process. One has to hold its reset button, which is super thin and can’t be reached with a pen, as soon as the device is connected to the power supply. It’s not enough to just hold the reset button at an arbitrary time.
    • Its configuration options are flexible, incredibly flexible, to the point that nothing prevents the user from making a catastrophic and irreversible change. The first time I was setting up the LAN bridge options, I somehow removed the port that the router IP (192.168.88.1) was associated with. This made it completely lose connectivity, and there was no way to access its web UI ever again. I had to reset it to restore access after that.
    • The wizard configuration system, called Quick Set, offers very few options for people who want to configure the router as quickly as possible. At the same time, it does too much magic under the hood, resulting in possible headaches in the future. I don’t recommend using it.
    • After resetting the device a couple of times and configuring everything by hand in WebFig, its web GUI, I scratched my head to understand how to access this interface after connecting through a Wi-Fi router, which I had to use given that the hEX only offers wired connections.
      • As the Wi-Fi router was giving me an IP in the 192.168.0.0/24 range, I wasn’t able to access the router at 192.168.88.1, even though the Wi-Fi router could reach it. I was only able to access it after manually adding the 192.168.0.0/16 range as allowed for the admin user (see the Terminal sketch right after this list). When using Quick Set this isn’t needed, as this configuration is done without ever informing the user.
    • Some options aren’t available in the WebFig interface, like directly changing the MAC address of the ethernet ports. At the same time, Quick Set does this (maybe via the CLI in the background), suggesting that it is indeed possible. A workaround is to create a single-port bridge and change the MAC address of this virtual interface.
    • It’s super easy to update the device, considering both RouterOS and its firmware, which are two separate processes. Given that internet access is properly configured, all that is required is a couple of clicks in the interface and a reboot for each one of them.
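
    For reference, the allowed-address change mentioned above boils down to a single Terminal command (a sketch, assuming the default admin user):

    /user set admin address=192.168.0.0/16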

    Conclusion

    It’s not a router that I would recommend for the faint of heart, nor for people who are not ready to face a few frustrations. Even I, someone who has been configuring routers since before turning 15, scratched my head to understand how a few things work and spent at least 3 hours to get it as close as possible to what I wanted. Even then, I wasn’t able to configure the Dual WAN option with a standby connection that is automatically activated when the main one goes down. Via the web UI this didn’t work right, and via the CLI it looked like too much of a hassle.

    In the end I was able to manage both connections manually, accessing WebFig and deactivating one while re-activating the other. This is still better than physically switching the cable from one modem to the other and then having to access the configuration page anyway to switch between DHCP and PPPoE on the Wi-Fi router’s sole WAN port.

    I’m considering trying out a MikroTik Wi-Fi router, given that having fewer devices involved might simplify the setup. Having one less device connected to the uninterruptible power supply will also probably improve its autonomy.

  • Importing CSV files with SQLite

    GitHub offers a very superficial view of how GitHub Actions runners are spending their minutes on private repositories. Currently, the only way to get detailed information about it is via the Get usage report button on the project/organization billing page. The only problem is that the generated report is a CSV file, shifting the responsibility of filtering and visualizing data to the user. While it’s true that most users of this report are used to dealing with CSV files, be they developers or accountants expert in handling spreadsheets, this is definitely not the most user-friendly way of offering insights into billing data.

    When facing this issue, at first I thought about using harelba/q to query the CSV files directly in the command line. The problem is that q isn’t that straightforward to install, as apparently it’s not available via apt nor pip, nor is one able to easily change the data once it’s imported, like in a regular database. The first time, I resorted to creating a database on PostgreSQL and importing the CSV file into it, but I can never remember the CSV import syntax, and it still requires a daemon running just for that. I kept thinking that there should be a simpler way: what if I used SQLite for that?
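
    For the record, the PostgreSQL incantation I could never remember is roughly the following (a sketch, assuming a github database with a compatible billing table already created):

    $ psql github -c "\copy billing FROM 'c2860a05_2021-12-23_01.csv' WITH (FORMAT csv, HEADER true)"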

    In order to not have to CAST() each TEXT column whenever working with dates or numbers, the following schema.sql can be used:

    CREATE TABLE billing (
      date DATE,
      product TEXT,
      repository TEXT,
      quantity NUMERIC,
      unity TEXT,
      price NUMERIC,
      workflow TEXT,
      notes TEXT
    );
    

    After that, it’s possible to import the CSV file with the sqlite3 CLI tool. The --skip 1 argument to the .import command is needed to avoid importing the CSV header as data, given that SQLite considers it to be a regular row when the table already exists:

    $ sqlite3 github.db
    SQLite version 3.36.0 2021-06-18 18:58:49
    Enter ".help" for usage hints.
    sqlite> .read schema.sql
    sqlite> .mode csv
    sqlite> .import --skip 1 c2860a05_2021-12-23_01.csv billing
    sqlite> SELECT COUNT(*) FROM billing;
    1834
    

    Now it’s easy to dig into the billing data. For a better presentation, .mode column can be enabled to both show the column names and align their output. We can, for instance, find out which workflows consumed the most minutes in the last week and their respective repositories:

    sqlite> .mode column
    sqlite> SELECT date, repository, workflow, quantity FROM billing WHERE date > date('now', '-7 days') AND product = 'actions' ORDER BY quantity DESC LIMIT 5;
    date        repository         workflow                              quantity
    ----------  -----------------  ------------------------------------  --------
    2021-12-21  contoso/api        .github/workflows/main.yml            392
    2021-12-18  contoso/terraform  .github/workflows/staging-images.yml  361
    2021-12-22  contoso/api        .github/workflows/main.yml            226
    2021-12-21  contoso/api        .github/workflows/qa.yml              185
    2021-12-20  contoso/api        .github/workflows/main.yml            140
    

    Another important example of the data that can be fetched is the cost per repository in the last week, summing the cost of all their workflows. An UPDATE statement is required to apply a small data fix, given that the rows of the price column contain a dollar sign $ that needs to be dropped:

    sqlite> UPDATE billing SET price = REPLACE(price, '$', '');
    sqlite> SELECT repository, SUM(quantity) * price AS amount FROM billing WHERE date > date('now', '-7 days') AND product = 'actions' GROUP BY repository;
    repository          amount
    ------------------  ------
    contoso/api         11.68
    contoso/public-web  0.128
    contoso/status      1.184
    contoso/terraform   2.92
    contoso/webapp      0.6
    

    Not as intuitive as a web page where one can just click around to filter and sort a report, but definitely doable. As a side note, one cool aspect of SQLite is that it doesn’t require a database file to be used. If started as sqlite3, with no arguments, all of its storage needs are handled entirely in memory. This makes it even more interesting for data exploration cases like these, offering all of its query capabilities without ever persisting data to disk.
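
    As a quick sketch of that, the exact same import steps from above work without ever touching the disk:

    $ sqlite3
    sqlite> .read schema.sql
    sqlite> .mode csv
    sqlite> .import --skip 1 c2860a05_2021-12-23_01.csv billing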

  • Configuring firewalld on Debian Bullseye

    After doing a clean Debian 11 (Bullseye) installation on a new machine, the next step after installing basic CLI tools and disabling SSH root/password logins was to configure its firewall. It’s easy to imagine how big my surprise was when I found out that the iptables command wasn’t available. While it has been known for at least 5 years that this was going to happen, it still took me some time to let the idea of its deprecation sink in and actually digest the situation. I scratched my head a bit, wondering if the day I would be obliged to learn how to use nftables had finally come.

    While looking for some guidance on the best practices for managing firewall rules these days, I found the article “What to expect in Debian 11 Bullseye for nftables/iptables”, which explains the situation in a straightforward way. The article ends up suggesting that firewalld is supposed to be the default firewall rules wrapper/manager - something that was news to me. I never met the author while actively working on Debian, but I do know he’s the maintainer of multiple firewall-related packages in the distribution and also works on the netfilter project itself. Based on these credentials, I took the advice knowing it came from someone who knows what they are doing.

    A fun fact is that the iptables package is actually a dependency of firewalld on Debian Bullseye. This should not be the case in future releases. After installing it, I went for the simplest goal ever: block all incoming connections while allowing SSH (and preferably Mosh, if possible). Before making any changes, I tried to familiarize myself with the basic commands. I won’t repeat what multiple other sources say, so I suggest this Digital Ocean article that explains firewalld concepts, like zones and rules persistency.
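
    For reference, installing it is a single command away:

    $ sudo apt install firewalld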

    In summary, what one needs to understand is that there are multiple “zones” within firewalld, and each one can have a different set of rules. In order to simplify the setup, I checked which zone was the default, added the network interface to it and defined the needed rules there. No need for further granularity in this use case. Here, the default zone is the one named public:

    $ sudo firewall-cmd --get-default-zone
    public
    $ sudo firewall-cmd --list-all
    public
      target: default
      icmp-block-inversion: no
      interfaces:
      sources:
      services: dhcpv6-client ssh
      ports:
      protocols:
      forward: no
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:
    

    Knowing that, it was quite simple to associate the internet-connected network interface with it and update the list of allowed services. dhcpv6-client is going to be removed because this machine isn’t on an IPv6-enabled network:

    $ sudo firewall-cmd --change-interface eth0
    success
    $ sudo firewall-cmd --add-service mosh
    success
    $ sudo firewall-cmd --remove-service dhcpv6-client
    success
    

    It’s important to execute sudo firewall-cmd --runtime-to-permanent after confirming the rules were defined as expected, otherwise they will be lost on service/machine restarts:

    $ sudo firewall-cmd --list-all
    public (active)
      target: default
      icmp-block-inversion: no
      interfaces: eth0
      sources:
      services: mosh ssh
      ports:
      protocols:
      forward: no
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:
    $ sudo firewall-cmd --runtime-to-permanent
    success
    

    A side effect of the target: default setting is that it REJECTs packets by default, instead of DROPing them. This basically informs the client that connections were actively rejected instead of silently dropping the packets - the latter of which might be preferable. It’s confusing why it’s called default instead of REJECT, and it’s also not clear whether it’s actually possible to change this default behavior. In any case, it’s possible to explicitly change it:

    $ sudo firewall-cmd --set-target DROP --permanent
    success
    $ sudo firewall-cmd --reload
    success
    

    The --set-target option requires the --permanent flag, but it doesn’t apply the changes instantly, requiring them to be reloaded.

    An implication of dropping everything is that ICMP packets are blocked as well, preventing the machine from answering ping requests. The way this can be configured is a bit confusing, given that the logic is flipped: there’s a need to enable icmp-block-inversion and then add an ICMP block for echo-request, which, with the inversion in place, in practice removes the block:

    $ sudo firewall-cmd --add-icmp-block-inversion
    success
    $ sudo firewall-cmd --add-icmp-block echo-request
    success
    

    The result will look like this, always remembering to persist the changes:

    $ sudo firewall-cmd --list-all
    public (active)
      target: DROP
      icmp-block-inversion: yes
      interfaces: eth0
      sources:
      services: mosh ssh
      ports:
      protocols:
      forward: no
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks: echo-request
      rich rules:
    $ sudo firewall-cmd --runtime-to-permanent
    success
    

    For someone who hadn’t used firewalld before, I can say it was OK for this simple use case. There was no need to learn the syntax for nft commands nor the one for nftables rules, and it worked quite well in the end. The process of unblocking ICMP ping requests is a bit cumbersome with its flipped logic, and could have been made simpler, but it’s still doable. All in all, I’m happy with the solution and will look into how to use it, for instance, in a non-interactive way with Ansible.
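
    Until then, here is a minimal sketch of this whole setup as a plain shell script, mirroring the interactive steps above (run as root; eth0 and the public default zone are the same assumptions as before):

    #!/bin/sh
    set -e
    # Runtime changes first, then persist them
    firewall-cmd --change-interface eth0
    firewall-cmd --add-service mosh
    firewall-cmd --remove-service dhcpv6-client
    firewall-cmd --runtime-to-permanent
    # --set-target only works with --permanent, hence the reload
    firewall-cmd --set-target DROP --permanent
    firewall-cmd --reload
    # Flipped logic: with the inversion enabled, "blocking" echo-request allows it
    firewall-cmd --add-icmp-block-inversion
    firewall-cmd --add-icmp-block echo-request
    firewall-cmd --runtime-to-permanent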

Subscribe via RSS