Posts
-
Ad-blocking in a Manifest V3 world
After I set up uBlock Origin on a new computer nearly three months ago, Chrome started showing warnings about it, stating, “This extension may soon no longer be supported”. These warnings relate to the deprecation of Manifest V2, a process that has been underway for a few years and is expected to be finalized by June next year. I can’t even imagine using the internet without an ad blocker, so I decided to explore the current alternatives.
To be honest, the process was a bit daunting. Some articles suggest that there’s no hope unless you switch to a browser supported by a company that doesn’t rely on ads - something that doesn’t really exist. In situations like this, I tend to try the simplest, most straightforward solution first before diving into more complex, over-the-top options. In this case, I decided to switch to the Manifest V3-based extension from the same developers: uBlock Origin Lite.
The experience was actually better than I expected. Even in “Basic” mode, which doesn’t require permission to read or change data on all sites, it worked quite well with the default filters. I did notice some empty ad boxes, as the page layouts aren’t reworked, but that’s something I’m already used to from visiting sites on mobile with AdGuard (which blocks ads via DNS). There’s been no need to switch to the “Optimal” mode so far.
In fact, enabling the “Overlay Notices” and “Other Annoyances” filter lists made the experience even more pleasant than before. There are no more “please disable your ad blocker” or “please donate to this site using Google” overlays to disrupt my browsing. So while the transition from V2 to V3 may have been a hassle for extension developers, as an end user I can’t say I’m unhappy with it. I’m still experiencing the (mostly) ad-free internet I was used to.
-
The cloud won, again
Since 2018 (and again in 2020), I’ve been writing about how defaulting to cloud-based solutions instead of self-hosting everything has changed my life for the better. Even after knowing that for years, I still made the wrong decision to self-host a service I needed, and almost doubled down on doing it again - until I stopped and figured out an easier way to achieve the same goal.
I’ve been a Dropbox user for nearly 15 years. It really simplified my approach to backups for personal stuff: a cloud-synced folder on my laptop where I put everything that’s not already on a cloud service. Accidentally deleted something? I can just go to their web app and restore it. The problem is that I don’t want it offering a read-write version of this folder on every device I have. Sometimes I just need a temporary folder to drop a screenshot from my Windows gaming machine so I can access it from my phone.
Resilio Sync (previously known as BitTorrent Sync) is what I was using for that. It has a few problems, including being super slow even after configuring everything possible to bypass its relay servers (spoiler: that doesn’t help). It also doesn’t have cloud-backed storage, so I ran an instance of it on a server to keep an always-on copy of the files there. Not exactly a drop-in replacement for Dropbox, but it was still useful until I realized I was completely unhappy with its performance.
Things were reaching a point where I was considering self-hosting Nextcloud (a fork of the original ownCloud) just for its file-syncing feature, or even writing my own cloud-folder synchronization tool backed by S3-compatible storage. That’s when it clicked: I don’t need real-time syncing for the simple use case of sharing single files from a computer to my phone. I just needed a way to access a Cloudflare R2 bucket from a mobile app.
After looking around and asking ChatGPT and Perplexity, I settled on S3 Files. I can upload a file from Windows using WinSCP or from a macOS/Linux terminal using s3cmd. Each machine uses a fine-grained access key I can revoke if needed, and the S3 API is ubiquitous. I just needed a mobile app to access the bucket when I’m away from my computer. This setup offers the ease, speed, availability, and robustness of the cloud, which are miles ahead of self-hosting anything.
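For reference, the terminal side of this workflow boils down to a single put command. Here’s a sketch of what it looks like against R2 (the account ID and bucket name are placeholders; the host settings can also live in ~/.s3cfg as host_base/host_bucket instead of being passed as flags):

$ s3cmd --host=<account-id>.r2.cloudflarestorage.com \
    --host-bucket=<account-id>.r2.cloudflarestorage.com \
    put screenshot.png s3://my-bucket/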
-
Introducing Myhro Notes
Writing is hard. I’ve been writing on this blog since 2011, when posts were still written in Portuguese - those have since been deleted. It hasn’t gotten any easier after nearly 15 years. Posts still take hours to be written, proofread and double-checked before being published. On top of that, much of the energy I have for writing longer chunks of text is spent on project documentation, issues and pull request descriptions, both in and outside of work-related duties. The result is that posts on this blog are becoming rare - this is the first one for the entire year.
But it doesn’t always need to be like that. Not every piece of advice or knowledge I’d like to share needs to come in a long blog article format. Sometimes I discover something useful, either through my own exploration or based on someone else’s experience, and I share it with close friends along with a small comment of a few sentences. This happens on Slack workspaces, WhatsApp groups or even in direct messages. In the end, I thought: what if I wrote this for the wider internet and shared the link with the same people, instead of writing directly to them?
Based on the concept of “blogmarks”, where small blog posts are used to share links that, in a distant past, would have been bookmarked on del.icio.us, I started Myhro Notes. There are usually one or two short paragraphs adding context, explaining why the link is interesting. The notes are listed by date, like a blog, but only the title is visible on the home page. The posts themselves will eventually be available on search engines, and I expect my future self to be one of the users looking for content posted there.
-
Managing Podman containers with systemd
Since my early days in programming, I’ve always worried about isolated development environments. Of course, this wasn’t relevant when developing C applications with no external dependencies in Code::Blocks, but it soon became a necessity when I had to deal with Python packages through virtualenv. The same happened with Ruby versions using rbenv. Later I settled on asdf to do that with multiple Go/Node.js versions, which basically solved the problem for good for many programming languages and even some CLI tools that are sensitive to versioning, like kubectl.

But dealing with multiple runtimes or packages is just one piece of the equation in the grand scheme of cleanly handling dependencies. Sometimes you have to worry about the versions of the external services a project makes use of, like a database or cache system. This has also been a solved problem since Docker came into the picture over 10 years ago. I remember that the Docker Compose mantra when it launched (still called Fig) was: “no more installing Postgres on your laptop!”. Things are just a bit more complicated when you don’t use the original Docker implementation, but another container management system, like Podman.
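As an illustration of that mantra, the Compose file that replaces a local Postgres install can be as small as the sketch below - the image tag and password are placeholders of mine, not taken from any real project, and tools like podman-compose accept the same format:

services:
  postgres:
    image: docker.io/postgres:16
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5432:5432"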
Podman offers several advantages over Docker. It can run containers without requiring root access; it doesn’t depend on a service daemon running all the time; and it doesn’t require a paid subscription depending on the usage. It’s a simpler tool, with an almost 1:1 compatible UI overall. The main difference is that it doesn’t seamlessly handle containers with --restart flags. I mean, of course it does restart containers when their processes are interrupted, but they won’t be brought up after a host reboot - which tends to happen from time to time on a workstation.

When looking into how to solve this problem, I realized that the podman generate command can create systemd service unit files. So instead of tinkering with how to integrate the two tools, figuring out the file syntax and functionality, it’s possible to just create a new systemd service for the desired container as if it were any other program/process. And the best part is that we can still do that without root, thanks to systemd user services.

$ podman create --name redis -p 6379:6379 docker.io/redis:7
Trying to pull docker.io/library/redis:7...
(...)
$ mkdir -p ~/.config/systemd/user/
$ podman generate systemd --name redis > ~/.config/systemd/user/redis.service
To increase the service’s reliability, it’s preferable to drop the PIDFile line from the configuration file. It typically looks like:

PIDFile=/run/user/1000/containers/vfs-containers/c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb/userdata/conmon.pid
This ensures that even if the container is recreated, like when updating its image, systemd won’t be referencing its older ID, as it will only care about its name. This can be done programmatically with sed:

$ sed -i '/PIDFile/d' ~/.config/systemd/user/redis.service
The generated file should be similar to:
# container-redis.service
# autogenerated by Podman 4.3.1
# Sat Dec 23 17:18:01 -03 2023

[Unit]
Description=Podman container-redis.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/run/user/1000/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStart=/usr/bin/podman start redis
ExecStop=/usr/bin/podman stop \
        -t 10 redis
ExecStopPost=/usr/bin/podman stop \
        -t 10 redis
Type=forking

[Install]
WantedBy=default.target
The final step consists of starting the service and enabling it to launch on boot:
$ systemctl --user start redis
$ systemctl --user enable redis
Created symlink /home/myhro/.config/systemd/user/default.target.wants/redis.service → /home/myhro/.config/systemd/user/redis.service.
$ systemctl --user status redis
● redis.service - Podman container-redis.service
     Loaded: loaded (/home/myhro/.config/systemd/user/redis.service; enabled; preset: enabled)
     Active: active (running) since Sat 2023-12-23 17:25:40 -03; 13s ago
       Docs: man:podman-generate-systemd(1)
      Tasks: 17 (limit: 37077)
     Memory: 11.1M
        CPU: 60ms
     CGroup: /user.slice/user-1000.slice/[email protected]/app.slice/redis.service
             ├─46851 /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/netns-102ff957-157c-adcb-bd4a-45e7e0d2a50f tap0
             ├─46853 rootlessport
             ├─46859 rootlessport-child
             └─46869 /usr/bin/conmon --api-version 1 -c c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb -u c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb -r /usr/bin/crun -b /home/myhro/.local/share/(...)

Dec 23 17:25:40 leptok redis[46869]: 1:C 23 Dec 2023 20:25:40.071 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * monotonic clock: POSIX clock_gettime
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Running mode=standalone, port=6379.
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Server initialized
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Loading RDB produced by version 7.2.3
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * RDB age 18 seconds
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * RDB memory usage when created 0.83 Mb
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Done loading RDB, keys loaded: 0, keys expired: 0.
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * DB loaded from disk: 0.000 seconds
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Ready to accept connections tcp
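One caveat worth noting, which I believe applies to systemd user services in general (an addition here, not something covered by the steps above): by default, user services only start once the user logs in. To have the container come up right after boot, without an interactive session, lingering can be enabled for the user:

$ loginctl enable-linger $USER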
In summary, I quite liked how easy it was to leverage the strengths of both Podman and systemd through this integration. Being able to do it in a rootless way is definitely a huge plus. Before this, I always believed that managing Linux services was a root-only thing. But now that I think about it, I realize that when Docker was the only game in town, managing containers also required elevated privileges. I’m glad that we are moving away from this idea, piece by piece.
-
MikroTik hAP ac3 Review
As I mentioned in the previous review, my experience with the MikroTik router that only supported wired networking encouraged me to look for a Wi-Fi one. After browsing the available models, the slogan of a particular one, the hAP ac3, caught my attention:
Forget about endless searching for the perfect router and scrolling through an eternity of reviews and specifications! We have created a single affordable home access point that has all the features you might need for years to come.
A highly configurable router, with gigabit wired networking and dual-band 2.4 and 5 GHz Wi-Fi, at an affordable price (abroad, where it costs US$ 99 - not here, where it costs R$ 800)? It seemed like exactly what I was looking for.
Findings
After spending 3 hours configuring the wired router, I thought to myself: “Ah, now that I know how MikroTik works, it’ll be easy. 15 or 20 minutes and everything will be working.” What a mistake. The interface is literally the same with an additional Wireless option in the menu, but even setting a password on the unprotected Wi-Fi network was challenging. I really scratched my head trying to understand how things worked and spent another 2 hours configuring it the way I wanted.

Configuring the 5 GHz Wi-Fi transmission, in particular, was quite difficult. It has a “radar detection” system for using the higher frequencies (5.5-5.6 GHz) that takes literally 10 minutes (!) on each boot to decide which one to use, and the wireless network remains unavailable during this process. To avoid the frustration, I manually chose a lower frequency option (5.1-5.2 GHz).
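For reference, this kind of manual frequency selection can also be done from the RouterOS terminal. The line below is a sketch from memory rather than my exact configuration: the 5 GHz radio usually shows up as wlan2 on this model, and 5180 MHz sits in the lower range I ended up using:

/interface wireless set wlan2 band=5ghz-a/n/ac frequency=5180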
After everything was configured, I noticed that the 5 GHz Wi-Fi signal was weaker in other rooms than it used to be with my TP-Link router - weak enough for iOS to automatically fall back to 4G. Along with the weaker signal came a drop in connection speed: on my MacBook, it fell from 400 to 200 Mbps, and on my iPhone, from 200 to 100 Mbps, both measured on Fast.com in other rooms and compared to the TP-Link router I intended to replace. Although that would be enough bandwidth to cover most of my use cases, accepting a downgrade from the speed I was used to seemed unreasonable, given the price of the device and the quality I expected from it.
The solution was to go back to a setup identical to the one with the wired MikroTik: connecting the TP-Link router to the new MikroTik and using only the former’s Wi-Fi. With the TP-Link in router mode, the speed loss was the same. In access point mode, I achieved the same speed as before, when it was connected directly to the modem or to the wired MikroTik.
Conclusion
It wasn’t a very wise decision to buy a more expensive model because of its Wi-Fi and ultimately not use it, but the experience was valuable. It still solves my Dual WAN support issue, albeit in a less than ideal way, and I could return the borrowed MikroTik. I couldn’t exactly pinpoint why its 5 GHz network was so much slower than the TP-Link’s, but I’ve encountered similar situations caused by software (the same happened to me with DD-WRT) in a not-so-distant past. It’s not what I expected from MikroTik, a device whose software is precisely its selling point, but who knows. Today, if I were to set up the same system without having the TP-Link router available, I would get a simpler wired MikroTik and connect a UniFi AP to it. It would be the best of both worlds, and the cost would be virtually the same.