Posts

  • The cloud won, again

    Since 2018 (and again in 2020), I’ve been writing about how defaulting to cloud-based solutions instead of self-hosting everything has changed my life for the better. Even knowing that for years, I still made the wrong decision to self-host a service I needed and almost doubled down on doing it again. This was until I stopped and figured out an easier way to achieve the same goal.

    I’ve been a Dropbox user for nearly 15 years. It really simplified my approach to backups for personal stuff: a cloud-synced folder on my laptop where I put everything that’s not already on a cloud service. Accidentally deleted something? I can just go to their web app and restore it. The problem is that I don’t want it offering a read-write version of this folder on every device I have. Sometimes I just need a temporary folder to drop a screenshot from my Windows gaming machine so I can access it from my phone.

    Resilio Sync (previously known as BitTorrent Sync) is what I was using for that. It has a few problems, including being super slow even after configuring everything possible to bypass its relay servers (spoiler: it doesn’t bypass them). Plus, it doesn’t have cloud-backed storage, so I ran an instance of it on a server to have an always-on copy of the files there. Not exactly a drop-in replacement for Dropbox, but it was still useful until I realized I was completely unhappy with its performance.

    Things were reaching a point where I was considering self-hosting Nextcloud (fork of the original ownCloud) just for its file-syncing feature or even writing my own cloud-folder synchronization tool backed by S3-compatible storage. That’s when it clicked: I realized I don’t need real-time syncing for the simple use case of easily sharing single files from a computer to my phone. I just needed a way to access a Cloudflare R2 bucket from a mobile app.

    After looking around, asking ChatGPT and Perplexity, I settled on S3 Files. I can upload a file from Windows using WinSCP or from a macOS/Linux terminal using s3cmd. Each machine uses a fine-grained access key I can revoke if needed. The S3 API is ubiquitous. I just needed a mobile app to access it when I’m away from my computer. It offered me the ease, speed, availability, and robustness of the cloud, which are miles ahead compared to self-hosting anything.
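
    For reference, pointing s3cmd at a Cloudflare R2 bucket mostly comes down to a few lines in its config file. A sketch of ~/.s3cfg, where the account ID and keys are placeholders to be filled in:

```ini
# ~/.s3cfg - hypothetical values, replace with your own R2 credentials
[default]
access_key = <r2-access-key-id>
secret_key = <r2-secret-access-key>
host_base = <account-id>.r2.cloudflarestorage.com
host_bucket = <account-id>.r2.cloudflarestorage.com
```

    With that in place, an upload is a one-liner like "s3cmd put screenshot.png s3://my-bucket/" (bucket name also hypothetical).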

  • Introducing Myhro Notes

    Writing is hard. I’ve been writing on this blog since 2011, when posts were still written in Portuguese - those have since been deleted. It hasn’t gotten any easier after nearly 15 years. Posts still take hours to be written, proofread and double-checked before being published. On top of that, much of the energy I have to write longer chunks of text is spent on project documentation, issues and pull request descriptions, both in and out of work-related duties. The result is that posts on this blog are becoming rare - this is the first one for the entire year.

    But it doesn’t need to always be like that. Not every piece of advice or knowledge I’d like to share needs to be in a long blog article format. Sometimes I discover something useful, either through my own exploration or based on someone else’s experience, and I share it with close friends with a small comment of a few sentences. This happens on Slack workspaces, WhatsApp groups or even in direct messages. In the end, I thought: what if I write this for the wider internet and share the link with the same people, instead of writing directly to them?

    Based on the concept of “blogmarks”, where small blog posts are used to share links that, in a distant past, would be bookmarked to del.icio.us, I started Myhro Notes. There are usually one or two short paragraphs adding context, explaining why the link is interesting. They are listed by date, like a blog, but only the title is visible on the home page. The posts themselves will eventually be available on search engines. I expect my future self to be one of its users, looking for content posted there.

  • Managing Podman containers with systemd

    Since my early days in programming, I’ve always worried about isolated development environments. Of course, this wasn’t relevant when developing C applications with no external dependencies in Code::Blocks, but it soon became a necessity when I had to deal with Python packages through virtualenv. The same happened with Ruby versions using rbenv. Later I settled on asdf to do that with multiple Go/Node.js versions, which basically solved the problem for good for many programming languages and even some CLI tools that are sensitive to versioning, like kubectl.

    But dealing with multiple runtimes or packages is just one piece of the equation in the grand scheme of cleanly handling dependencies. Sometimes you have to worry about the versions of the external services a project makes use of, like a database or cache system. This is also a solved problem since Docker came into the picture over 10 years ago. I remember that the Docker Compose mantra when it launched (still called Fig) was: “no more installing Postgres on your laptop!”. This is just a bit more complicated when you don’t use the original Docker implementation, but another container management system, like Podman.

    Podman offers several advantages over Docker. It can run containers without requiring root access; it doesn’t depend on a service daemon running all the time; and it doesn’t require a paid subscription depending on the usage. It’s a simpler tool, with an almost 1:1 compatible UI overall. The main difference is that it doesn’t seamlessly handle containers with --restart flags. I mean, of course it does restart containers when their processes are interrupted, but they won’t be brought up after a host reboot - which tends to happen from time to time on a workstation.

    When looking into how to solve this problem, I realized that the podman generate command can create systemd service unit files. So instead of tinkering with how to integrate the two tools and figuring out the unit file syntax and functionality, it’s possible to just create a new systemd service for the desired container as if it were any other program/process. And the best part is that we can still do that without root, thanks to systemd user services.

    $ podman create --name redis -p 6379:6379 docker.io/redis:7
    Trying to pull docker.io/library/redis:7...
    (...)
    $ mkdir -p ~/.config/systemd/user/
    $ podman generate systemd --name redis > ~/.config/systemd/user/redis.service
    

    To increase the service’s reliability, it’s preferable to drop the PIDFile line from the configuration file. It typically looks like:

    PIDFile=/run/user/1000/containers/vfs-containers/c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb/userdata/conmon.pid
    

    This ensures that even if the container is recreated, like when updating its image, systemd won’t be referencing its older ID, as it will only care about its name. This can be done programmatically with sed:

    $ sed -i '/PIDFile/d' ~/.config/systemd/user/redis.service
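
    To see exactly what that expression does in isolation, here is the same sed invocation exercised against a scratch file with made-up contents:

```shell
# Build a throwaway file that mimics a generated unit with a PIDFile line.
tmp=$(mktemp)
printf '[Service]\nPIDFile=/run/user/1000/example.pid\nRestart=on-failure\n' > "$tmp"

# Same expression as above: delete every line matching PIDFile.
sed -i '/PIDFile/d' "$tmp"

# Only the [Service] header and the Restart line remain.
cat "$tmp"
rm -f "$tmp"
```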
    

    The generated file should be similar to:

    # container-redis.service
    # autogenerated by Podman 4.3.1
    # Sat Dec 23 17:18:01 -03 2023
    
    [Unit]
    Description=Podman container-redis.service
    Documentation=man:podman-generate-systemd(1)
    Wants=network-online.target
    After=network-online.target
    RequiresMountsFor=/run/user/1000/containers
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=on-failure
    TimeoutStopSec=70
    ExecStart=/usr/bin/podman start redis
    ExecStop=/usr/bin/podman stop  \
            -t 10 redis
    ExecStopPost=/usr/bin/podman stop  \
            -t 10 redis
    Type=forking
    
    [Install]
    WantedBy=default.target
    

    The final step consists of starting the service and enabling it to launch on boot:

    $ systemctl --user start redis
    $ systemctl --user enable redis
    Created symlink /home/myhro/.config/systemd/user/default.target.wants/redis.service → /home/myhro/.config/systemd/user/redis.service.
    $ systemctl --user status redis
    ● redis.service - Podman container-redis.service
         Loaded: loaded (/home/myhro/.config/systemd/user/redis.service; enabled; preset: enabled)
         Active: active (running) since Sat 2023-12-23 17:25:40 -03; 13s ago
           Docs: man:podman-generate-systemd(1)
          Tasks: 17 (limit: 37077)
         Memory: 11.1M
            CPU: 60ms
         CGroup: /user.slice/user-1000.slice/[email protected]/app.slice/redis.service
                 ├─46851 /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/netns-102ff957-157c-adcb-bd4a-45e7e0d2a50f tap0
                 ├─46853 rootlessport
                 ├─46859 rootlessport-child
                 └─46869 /usr/bin/conmon --api-version 1 -c c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb -u c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb -r /usr/bin/crun -b /home/myhro/.local/share/(...)
    
    Dec 23 17:25:40 leptok redis[46869]: 1:C 23 Dec 2023 20:25:40.071 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * monotonic clock: POSIX clock_gettime
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Running mode=standalone, port=6379.
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Server initialized
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Loading RDB produced by version 7.2.3
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * RDB age 18 seconds
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * RDB memory usage when created 0.83 Mb
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Done loading RDB, keys loaded: 0, keys expired: 0.
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * DB loaded from disk: 0.000 seconds
    Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Ready to accept connections tcp
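
    One caveat worth flagging: systemd user services normally only run while their user has an active session, so the unit enabled above would start at login rather than at boot. If the machine should bring the container up without anyone logging in, lingering has to be enabled for the user (assuming systemd-logind is in use):

```shell
# Allow this user's services to start at boot and survive logout.
loginctl enable-linger "$USER"

# Confirm the setting took effect.
loginctl show-user "$USER" -p Linger
```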
    

    In summary, I quite liked how easy it was to leverage the strengths of both Podman and systemd in their integration. Being able to do that in a rootless way is definitely a huge plus. Before doing that, I always believed that managing Linux services was a root-only thing. But now that I think about it, I realize that when Docker was the only game in town, managing containers also required elevated privileges. I’m glad that we are moving away from this idea, piece by piece.

  • MikroTik hAP ac3 Review

    As I mentioned in the previous review, my experience with the MikroTik router that only supported wired networking encouraged me to look for a Wi-Fi one. After browsing the available models, the slogan of a particular one, the hAP ac3, caught my attention:

    Forget about endless searching for the perfect router and scrolling through an eternity of reviews and specifications! We have created a single affordable home access point that has all the features you might need for years to come.

    A highly configurable router, with gigabit wired networking and dual-band 2.4 and 5 GHz Wi-Fi at an affordable price (abroad, where it costs US$ 99, not here where it costs R$ 800)? It seemed like exactly what I was looking for.

    Findings

    After spending 3 hours configuring the wired router, I thought to myself: “Ah, now that I know how MikroTik works, it’ll be easy. 15 or 20 minutes and everything will be working.” What a mistake. The interface is literally the same with an additional Wireless option in the menu, but even setting a password on the unprotected Wi-Fi network was challenging. I really scratched my head trying to understand how things worked and spent another 2 hours configuring it in the way I wanted.

    Configuring the 5 GHz Wi-Fi transmission, in particular, was quite difficult. It has a “radar detection” system to use higher frequencies (5.5-5.6 GHz) that takes literally 10 minutes (!) on each boot to decide which one to use, during which the wireless network remains unavailable. To avoid frustration, I manually chose a lower frequency option (5.1-5.2 GHz).

    After everything was configured, I noticed that the 5 GHz Wi-Fi signal was weaker in other rooms than it used to be with my TP-Link router. Weak enough for iOS to automatically fall back to 4G. Along with the weaker signal came a drop in connection speed. On my MacBook, it fell from 400 to 200 Mbps, and on my iPhone from 200 to 100 Mbps, both measured at Fast.com in other rooms, compared to the TP-Link router I intended to replace. Although that would be sufficient bandwidth to cover most of my use cases, it seemed unacceptable to downgrade the speed I was used to, given the price and quality I expected from the device.

    The solution was to go back to a setup identical to the wired MikroTik: connecting the TP-Link router to the new MikroTik and using only the Wi-Fi from the former. In router mode, the speed loss was the same. In access point mode, I achieved the same speed as before when connecting the TP-Link directly to the modem or the wired MikroTik.

    Conclusion

    It wasn’t a very wise decision to buy a more expensive model because of Wi-Fi and ultimately not use it, but the experience was valuable. It still solves my Dual WAN support issue, albeit in a less than ideal way, and I could return the borrowed MikroTik. I couldn’t exactly pinpoint why its 5 GHz network was so much slower than the TP-Link, but I’ve encountered similar situations caused by software (the same happened to me with DD-WRT) in a not-so-distant past. It’s not what I expected from MikroTik, a device whose software is precisely its selling point, but who knows. Today, if I were to set up the same system without having the TP-Link router available, I would get a simpler wired MikroTik and connect a UniFi AP to it. It would be the best of both worlds and the cost would be virtually the same.

  • MikroTik hEX (RB750Gr3) Review

    The MikroTik hEX (RB750Gr3) is a simple wired router - one of the cheapest from the Latvian company - that surely does its job. It’s really flexible, which is both a pro and a con. It’s the router with the largest number of configuration options I have ever seen, including the possibility of making any sort of LAN/WAN combination from the five ports available.

    Impressions

    • One of the biggest features of MikroTik’s RouterOS is the countless configuration options in its GUI, available both through the web and via the native WinBox application. The problem is that a considerable chunk of its documentation, both in the official channels and in random tips scattered around the Internet, is focused on its CLI (called just Terminal).
    • It has a non-standard factory reset process. One has to hold its reset button, which is super thin and can’t be reached with a pen, as soon as the device is connected to the power supply. It’s not enough to just hold the reset button anytime.
    • Its configuration options are flexible, incredibly flexible, to the point that it doesn’t prevent the user from making a catastrophic and irreversible change. The first time I was setting up the LAN bridge options, I somehow removed the port that the router IP (192.168.88.1) was associated with. This made it completely lose connectivity and there was no way to access its web UI again. I had to reset it to restore access after that.
    • The wizard configuration system, called Quick Set, offers very few options for people who want to configure the router as quickly as possible. At the same time, it does too much magic under the hood, resulting in possible headaches in the future. I don’t recommend using it.
    • After resetting the device a couple of times and configuring everything by hand in WebFig, its web GUI, I scratched my head trying to understand how to access this interface after connecting through a Wi-Fi router, which I had to use given that the hEX only offers wired connections.
      • As the Wi-Fi router was giving me an IP in the 192.168.0.0/24 range, I wasn’t able to access the router at 192.168.88.1, even though the Wi-Fi router could reach it. I was only able to access it after manually adding the 192.168.0.0/16 range as an allowed address for the admin user. When using Quick Set this isn’t needed, as this configuration is done without ever informing the user.
    • Some options aren’t available in the WebFig interface, like directly changing the MAC address of the Ethernet ports. At the same time, Quick Set does this (maybe via the CLI in the background), suggesting that it is indeed possible. A workaround is to create a single-port bridge and change the MAC address of this virtual interface.
    • It’s super easy to update the device, covering both RouterOS and the firmware itself, which are two separate processes. Given that internet access is properly configured, all that is required is a couple of clicks in the interface and a reboot for each of them.
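
    The allowed-address fix mentioned above can also be applied from the Terminal. A sketch in RouterOS syntax, assuming the default admin user and the same ranges from my setup:

```
# Let the admin user log in from the Wi-Fi router's network
/user set admin address=192.168.0.0/16
```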

    Conclusion

    It’s not a router that I would recommend for the faint of heart, nor for people who aren’t ready to face a few frustrations. Even I, someone who has been configuring routers since before I was 15 years old, scratched my head trying to understand how a few things work and spent at least 3 hours getting it as close as possible to what I wanted. Even then, I wasn’t able to configure the Dual WAN option with a standby connection that is automatically activated when the main one goes down. Via the web UI this didn’t work right, and via the CLI it looked like too much of a hassle.

    In the end, I was able to manage both connections manually, accessing WebFig and deactivating one while re-activating the other. This is still better than physically switching the cable from one modem to the other, and then still having to access the configuration page to switch between DHCP and PPPoE on the Wi-Fi router’s sole WAN port.

    I’m considering trying out a MikroTik Wi-Fi router, given that having fewer devices involved might simplify the setup. Having one less device connected to the uninterruptible power supply will also probably improve its autonomy.