Posts
-
Full Disk Encryption on OVH UEFI Servers
For a while, it has been possible to boot OVH servers in rescue mode and, using QEMU, install an OS in any way you want with its regular installation ISO. This includes unsupported setups such as a non-standard filesystem on `/` or full disk encryption. The only requirement was to map the disks to regular QEMU devices, with arguments such as `-hda /dev/sda`. I don’t remember exactly when I first did that, but there are references to this process in their community forums dating back to at least 2020. I might have seen someone mention it in a blog post a few years earlier.

Since I usually opt for their cheaper line of servers, from Kimsufi or SoYouStart, I’m used to old hardware. The server I’m currently replacing, for instance, is an Intel Xeon E3-1245 V2 from 2012 (!) with regular SATA SSDs, where the BIOS boots from the MBR. The problem is that this same custom OS installation procedure doesn’t work on newer servers, which use UEFI and are powered by NVMe disks. I tried multiple combinations of the following (a sketch of one attempt appears after the list):
- Regular installation as if it were a BIOS-based server with SSD drives.
- Passing them as NVMe drives using `-drive file=/dev/nvme0n1` and `-device nvme`.
- UEFI installation with `-bios /usr/share/ovmf/OVMF.fd`, using the OVMF firmware.
- Creating an EFI System Partition (ESP) both on and off a RAID-1 array.
- Using the default “entire disk” automatic partitioning from the Debian installer.
- Performing a regular installation with no RAID at all.
- Cloning this regular installation onto both disks to ensure either of them would boot.
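To give an idea of what these attempts looked like, here is a sketch of one UEFI + NVMe combination. The exact flags, memory size, and ISO file name are illustrative reconstructions from the list above, not a verified working command (none of these combinations actually booted, as described below):

```
# Rescue mode: run the Debian installer against the real NVMe disks,
# with OVMF providing UEFI firmware. The installer is reached via VNC.
qemu-system-x86_64 \
  -enable-kvm -m 4G \
  -bios /usr/share/ovmf/OVMF.fd \
  -drive file=/dev/nvme0n1,format=raw,if=none,id=disk0 \
  -device nvme,drive=disk0,serial=nvme0 \
  -drive file=/dev/nvme1n1,format=raw,if=none,id=disk1 \
  -device nvme,drive=disk1,serial=nvme1 \
  -cdrom debian-13-netinst.iso \
  -boot d -vnc :0
```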
And probably a few other combinations I don’t recall. What’s important is that none of them worked. Every time I checked the boot logs via KVM/IPMI, I got an error like:
`rEFInd - Chain on hard drive failed. Next`.

I kept wondering what could be missing, but more important than doing the installation exactly the way I wanted was achieving the final goal. I didn’t want to install an unsupported OS, only a Debian installation with full disk encryption, which isn’t supported by the OVH web-based OS installer. That’s when I started looking into how to encrypt an existing Linux installation.

There are guides like Encrypt an existing Debian 12 system with LUKS, which aren’t exactly wrong, but are overly complicated and contain unnecessary steps. Every time I see a complicated guide, I wonder how the process could be simplified. Starting from that guide, and with the always-on-point instructions from the Arch Linux Wiki, I was able to encrypt an existing Debian installation.
Encrypting an Existing Debian System
The process goes like this:
Do a regular Debian 13 (Trixie) installation using the OVH web installer. The most important part is to keep `/boot` separate from the other partitions, and on RAID-1 if you’re using RAID. The installer will also create the ESP separately. The goal is to never need to touch these two partitions.

After the installation is finished, log in to the new system and install the packages required to boot it. The `dropbear-initramfs` package is what allows us to unlock the encrypted root partition via SSH. For it to work, you need to set its own `authorized_keys` file, since it won’t have access to anything on disk before unlocking.

```
$ sudo apt install cryptsetup-initramfs dropbear-initramfs
(...)
$ sudo vim /etc/dropbear/initramfs/authorized_keys
```
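By default, the initramfs SSH daemon listens on port 22, which can trigger host key warnings on clients once the real sshd takes over after unlock. If you’d rather have it on a separate port, dropbear-initramfs reads options from its own config file. A minimal sketch, assuming the Debian 13 file layout (verify the path on your system):

```
# /etc/dropbear/initramfs/dropbear.conf (path assumed; check your system)
# Make the unlock prompt listen on port 2222 instead of 22.
DROPBEAR_OPTIONS="-p 2222"
```

Any change here only takes effect after the initramfs is regenerated, which happens later in this process anyway.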
Configure the server to boot in rescue mode and restart it. After logging in again, check the current partitions to identify the data partition to be encrypted.
```
# lsblk | grep -v nbd
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
nvme1n1     259:0    0 419.2G  0 disk
├─nvme1n1p1 259:1    0   511M  0 part
├─nvme1n1p2 259:2    0     1G  0 part
│ └─md2       9:2    0  1022M  0 raid1
└─nvme1n1p3 259:3    0 417.7G  0 part
  └─md3       9:3    0 835.1G  0 raid0
nvme0n1     259:4    0 419.2G  0 disk
├─nvme0n1p1 259:5    0   511M  0 part
├─nvme0n1p2 259:6    0     1G  0 part
│ └─md2       9:2    0  1022M  0 raid1
├─nvme0n1p3 259:7    0 417.7G  0 part
│ └─md3       9:3    0 835.1G  0 raid0
└─nvme0n1p4 259:8    0     2M  0 part
```
In this case, we are interested in `/dev/md3`, the root partition on top of RAID-0. Check the filesystem for errors so that other tools don’t complain about it later.

```
# e2fsck -f /dev/md3
e2fsck 1.47.0 (5-Feb-2023)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
root: 31979/54730752 files (1.1% non-contiguous), 3972359/218920960 blocks
```
Now comes a very important part: shrink the filesystem, but not the whole partition. The goal is to leave free space inside the partition for the LUKS header, which is written at the start of the device while the data is shifted to make room for it.
```
# resize2fs -p -M /dev/md3
resize2fs 1.47.0 (5-Feb-2023)
Resizing the filesystem on /dev/md3 to 957980 (4k) blocks.
Begin pass 2 (max = 512733)
Relocating blocks             XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 6681)
Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 4 (max = 3388)
Updating inode references     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/md3 is now 957980 (4k) blocks long.
```
Next, perform the actual encryption. This is also where you set the passphrase to unlock it. This will take time, depending on the disk speed and partition size, as the process rewrites everything, including unused/free space. In this case, it took a little over 25 minutes to encrypt the 835GB partition.
```
# cryptsetup reencrypt --encrypt --reduce-device-size 32M /dev/md3

WARNING!
========
This will overwrite data on LUKS2-temp-03ea47d5-415b-4622-9510-5b7340d6c557.new irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for LUKS2-temp-03ea47d5-415b-4622-9510-5b7340d6c557.new:
Verify passphrase:
Finished, time 25m20s,  835 GiB written, speed 562.6 MiB/s
```
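Although not strictly necessary, it’s reassuring to confirm the LUKS2 header is in place before moving on. This is an extra sanity check added here, not part of the original steps:

```
# Inspect the new LUKS header; the UUID it lists is the same one
# that goes into /etc/crypttab in the next steps.
cryptsetup luksDump /dev/md3
```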
When encryption finishes, open it as a regular device and expand the filesystem to fill the partition again. This will use all available space minus what was reserved for the LUKS header.
```
# cryptsetup open /dev/md3 md3_crypt
Enter passphrase for /dev/md3:
# resize2fs /dev/mapper/md3_crypt
resize2fs 1.47.0 (5-Feb-2023)
Resizing the filesystem on /dev/mapper/md3_crypt to 218916864 (4k) blocks.
The filesystem on /dev/mapper/md3_crypt is now 218916864 (4k) blocks long.
```
Mount the device and update both `/etc/crypttab` and `/etc/fstab`. The former should refer to the filesystem UUID, but the latter can just point to the `/dev/mapper/md3_crypt` device, since it will use the name defined in crypttab.

```
# mount /dev/mapper/md3_crypt /mnt/
# blkid | grep /dev/md3
/dev/md3: UUID="a9c05676-6aa5-4a7b-acc9-a5eca5de4fed" TYPE="crypto_LUKS"
# cat /mnt/etc/crypttab
# <target name> <source device> <key file> <options>
md3_crypt UUID=a9c05676-6aa5-4a7b-acc9-a5eca5de4fed none luks,discard
# cat /mnt/etc/fstab
/dev/mapper/md3_crypt / ext4 defaults 0 1
UUID=5bc9cb1c-6607-429d-9219-3675ee773b12 /boot ext4 defaults 0 0
LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1
```
The last step is to bind-mount the system directories, plus the actual `/boot`, and update the initramfs again in the chroot. This is required for two reasons:

- It needs to grab the SSH `authorized_keys` file defined earlier in this process.
- It needs to be aware of the updated `crypttab` file, which tells it what to unlock during boot.

```
# mount --bind /dev/ /mnt/dev/
# mount --bind /proc/ /mnt/proc/
# mount --bind /sys/ /mnt/sys/
# mount /dev/md2 /mnt/boot/
# chroot /mnt/
# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.12.43+deb13-amd64
```
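Before rebooting a remote machine into an initramfs that has never been tested, it can be worth checking that the pieces actually made it in. This extra sanity check isn’t part of the original steps; `lsinitramfs` ships with Debian’s initramfs-tools:

```
# Still inside the chroot: confirm dropbear and the crypto bits are present.
lsinitramfs /boot/initrd.img-6.12.43+deb13-amd64 | grep -E 'dropbear|cryptsetup'
```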
After that, exit the chroot, reboot, and log in via SSH as `root` once your machine is online and responding to pings. Then, after running `cryptroot-unlock` and entering the proper passphrase, the machine will mount the encrypted device and proceed with the boot.

```
To unlock root partition, and maybe others like swap, run `cryptroot-unlock`.

BusyBox v1.37.0 (Debian 1:1.37.0-6+b3) built-in shell (ash)
Enter 'help' for a list of built-in commands.

~ # cryptroot-unlock
Please unlock disk md3_crypt:
cryptsetup: md3_crypt set up successfully
```
Finally, the machine should now be online and running with full disk encryption, except for `/boot` and the ESP partition.

```
$ sudo lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
nvme1n1       259:0    0 419.2G  0 disk
├─nvme1n1p1   259:1    0   511M  0 part  /boot/efi
├─nvme1n1p2   259:2    0     1G  0 part
│ └─md2         9:2    0  1022M  0 raid1 /boot
└─nvme1n1p3   259:3    0 417.7G  0 part
  └─md3         9:3    0 835.1G  0 raid0
    └─md3_crypt 253:0  0 835.1G  0 crypt /
nvme0n1       259:4    0 419.2G  0 disk
├─nvme0n1p1   259:5    0   511M  0 part
├─nvme0n1p2   259:6    0     1G  0 part
│ └─md2         9:2    0  1022M  0 raid1 /boot
├─nvme0n1p3   259:7    0 417.7G  0 part
│ └─md3         9:3    0 835.1G  0 raid0
│   └─md3_crypt 253:0  0 835.1G  0 crypt /
└─nvme0n1p4   259:8    0     2M  0 part
```
The process may be a bit more involved than installing the OS from scratch using built-in encryption options. Still, it’s not too complex, provided you don’t skip any steps. It’s definitely better than running a machine outside of your physical reach that stores everything in plain text.
-
Ad-blocking in a Manifest V3 world
I started receiving warnings on Chrome about uBlock Origin, stating “This extension may soon no longer be supported”, after I set it up on a new computer nearly three months ago. These warnings relate to the deprecation of Manifest V2, which has been underway for a few years and is expected to be finalized by June next year. I can’t even imagine using the internet without an ad blocker, so I decided to explore the current alternatives.
To be honest, the process was a bit daunting. Some articles suggest that there’s no hope unless you switch to a browser supported by a company that doesn’t rely on ads - something that doesn’t really exist. In situations like this, I tend to try the simplest, most straightforward solution first before diving into more complex, over-the-top options. In this case, I decided to switch to the Manifest V3-based extension from the same developers: uBlock Origin Lite.
The experience was actually better than I expected. Even in “Basic” mode, which doesn’t require permission to read or change data on all sites, it worked quite well with the default filters. I did notice some empty ad boxes, as the page layouts aren’t reworked, but that’s something I’m already used to from visiting sites on mobile with AdGuard (which blocks ads via DNS). There’s been no need to switch to “Optimal” mode so far.
In fact, enabling the “Overlay Notices” and “Other Annoyances” filter lists made the experience even more pleasant than before. There are no more “please disable your ad-block” or “please donate to this site using Google” overlays to disrupt my browsing. So while the transition from V2 to V3 may have been a hassle for extension developers, as an end-user, I can’t say I’m unhappy with it. I’m still experiencing the (mostly) ad-free internet I was used to.
-
The cloud won, again
Since 2018 (and again in 2020), I’ve been writing about how defaulting to cloud-based solutions instead of self-hosting everything has changed my life for the better. Even knowing that for years, I still made the wrong decision to self-host a service I needed, and almost doubled down on doing it again - until I stopped and figured out an easier way to achieve the same goal.
I’ve been a Dropbox user for nearly 15 years. It really simplified my approach to backups for personal stuff: a cloud-synced folder on my laptop where I put everything that’s not already on a cloud service. Accidentally deleted something? I can just go to their web app and restore it. The problem is that I don’t want it offering a read-write version of this folder on every device I have. Sometimes I just need a temporary folder to drop a screenshot from my Windows gaming machine so I can access it from my phone.
Resilio Sync (previously known as BitTorrent Sync) is what I was using for that. It has a few problems, including being super slow, even after configuring everything possible to bypass its relay servers (spoiler: they can’t really be bypassed). Plus, it doesn’t have cloud-backed storage, so I ran an instance of it on a server to have an always-on copy of the files there. Not exactly a drop-in replacement for Dropbox, but it was still useful until I realized I was completely unhappy with its performance.
Things were reaching a point where I was considering self-hosting Nextcloud (fork of the original ownCloud) just for its file-syncing feature or even writing my own cloud-folder synchronization tool backed by S3-compatible storage. That’s when it clicked: I realized I don’t need real-time syncing for the simple use case of easily sharing single files from a computer to my phone. I just needed a way to access a Cloudflare R2 bucket from a mobile app.
After looking around, asking ChatGPT and Perplexity, I settled on S3 Files. I can upload a file from Windows using WinSCP or from a macOS/Linux terminal using s3cmd. Each machine uses a fine-grained access key I can revoke if needed. The S3 API is ubiquitous. I just needed a mobile app to access it when I’m away from my computer. It offered me the ease, speed, availability, and robustness of the cloud, which are miles ahead compared to self-hosting anything.
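For illustration, the desktop-side workflow looks roughly like this with s3cmd. This is a sketch: the bucket name is a placeholder, and the endpoint format assumes Cloudflare R2’s S3-compatible API, with `<account-id>` standing in for the real account identifier:

```
# One-time setup: point s3cmd at the R2 S3-compatible endpoint.
#   S3 Endpoint: <account-id>.r2.cloudflarestorage.com
#   Access/Secret Key: a fine-grained R2 API token for this machine
$ s3cmd --configure

# Afterwards, sharing a file with the phone is a single upload.
$ s3cmd put screenshot.png s3://my-sync-bucket/
```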
-
Introducing Myhro Notes
Writing is hard. I’ve been writing in this blog since 2011, when posts were still written in Portuguese - those have since been deleted. It hasn’t gotten any easier after nearly 15 years. Posts still take hours to be written, proofread and double-checked before being published. On top of that, much of the energy I have to write longer chunks of text is spent on project documentation, issues and pull request descriptions, both in and out of work-related duties. The result is that posts on this blog are becoming rare - this is the first one for the entire year.
But it doesn’t need to always be like that. Not every piece of advice or knowledge I’d like to share needs to be in a long blog article format. Sometimes I discover something useful, either through my own exploration or based on someone else’s experience, and I share it with close friends with a small comment of a few sentences. This happens on Slack workspaces, WhatsApp groups or even in direct messages. In the end, I thought: what if I write this for the wider internet and share the link with the same people, instead of writing directly to them?
Based on the concept of “blogmarks”, where small blog posts are used to share links that, in a distant past, would have been bookmarked on del.icio.us, I started Myhro Notes. There are usually one or two short paragraphs adding context, explaining why the link is interesting. Posts are listed by date, like a blog, but only the title is visible on the home page. The posts themselves will eventually be available on search engines. I expect my future self to be one of its users, looking for content posted there.
-
Managing Podman containers with systemd
Since my early days in programming, I’ve always worried about isolated development environments. Of course, this wasn’t relevant when developing C applications with no external dependencies in Code::Blocks, but it soon became a necessity when I had to deal with Python packages through virtualenv. The same happened with Ruby versions using rbenv. Later I settled on asdf to do that with multiple Go/Node.js versions, which basically solved the problem for good for many programming languages and even some CLI tools that are sensitive to versioning, like `kubectl`.

But dealing with multiple runtimes or packages is just one piece of the equation in the grand scheme of cleanly handling dependencies. Sometimes you have to worry about the versions of the external services a project makes use of, like a database or cache system. This has also been a solved problem since Docker came into the picture over 10 years ago. I remember that the Docker Compose mantra when it launched (while still called Fig) was: “no more installing Postgres on your laptop!”. Things are just a bit more complicated when you don’t use the original Docker implementation, but another container management system, like Podman.
Podman offers several advantages over Docker. It can run containers without requiring `root` access; it doesn’t depend on a service daemon running all the time; and it doesn’t require a paid subscription depending on the usage. It’s a simpler tool, with an almost 1:1 compatible UI overall. The main difference is that it doesn’t seamlessly handle containers with `--restart` flags. I mean, of course it does restart containers when their processes are interrupted, but they won’t be brought up after a host reboot - which tends to happen from time to time on a workstation.

When looking into how to solve this problem, I realised that the `podman generate` command can create systemd service unit files. So instead of tinkering with how to integrate the two tools, figuring out the file syntax and functionality, it’s possible to just create a new systemd service for the desired container as if it were any other program/process. And the best part is that we can still do that without `root`, thanks to systemd user services.

```
$ podman create --name redis -p 6379:6379 docker.io/redis:7
Trying to pull docker.io/library/redis:7...
(...)
$ mkdir -p ~/.config/systemd/user/
$ podman generate systemd --name redis > ~/.config/systemd/user/redis.service
```
To increase the service’s reliability, it’s preferable to drop the `PIDFile` line from the configuration file. It typically looks like:

```
PIDFile=/run/user/1000/containers/vfs-containers/c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb/userdata/conmon.pid
```
This ensures that even if the container is recreated, like when updating its image, systemd won’t be referencing its older ID, as it will only care about its name. This can be done programmatically with `sed`:

```
$ sed -i '/PIDFile/d' ~/.config/systemd/user/redis.service
```
The generated file should be similar to:
```
# container-redis.service
# autogenerated by Podman 4.3.1
# Sat Dec 23 17:18:01 -03 2023

[Unit]
Description=Podman container-redis.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/run/user/1000/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStart=/usr/bin/podman start redis
ExecStop=/usr/bin/podman stop \
        -t 10 redis
ExecStopPost=/usr/bin/podman stop \
        -t 10 redis
Type=forking

[Install]
WantedBy=default.target
```
The final step consists of starting the service and enabling it to launch on boot:
```
$ systemctl --user start redis
$ systemctl --user enable redis
Created symlink /home/myhro/.config/systemd/user/default.target.wants/redis.service → /home/myhro/.config/systemd/user/redis.service.
$ systemctl --user status redis
● redis.service - Podman container-redis.service
     Loaded: loaded (/home/myhro/.config/systemd/user/redis.service; enabled; preset: enabled)
     Active: active (running) since Sat 2023-12-23 17:25:40 -03; 13s ago
       Docs: man:podman-generate-systemd(1)
      Tasks: 17 (limit: 37077)
     Memory: 11.1M
        CPU: 60ms
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/redis.service
             ├─46851 /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/netns-102ff957-157c-adcb-bd4a-45e7e0d2a50f tap0
             ├─46853 rootlessport
             ├─46859 rootlessport-child
             └─46869 /usr/bin/conmon --api-version 1 -c c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb -u c1b1c3e5dba5368c29ada52a638378e5fec74e1a62e913919528b9c3846c14bb -r /usr/bin/crun -b /home/myhro/.local/share/(...)

Dec 23 17:25:40 leptok redis[46869]: 1:C 23 Dec 2023 20:25:40.071 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * monotonic clock: POSIX clock_gettime
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Running mode=standalone, port=6379.
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Server initialized
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Loading RDB produced by version 7.2.3
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * RDB age 18 seconds
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * RDB memory usage when created 0.83 Mb
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Done loading RDB, keys loaded: 0, keys expired: 0.
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * DB loaded from disk: 0.000 seconds
Dec 23 17:25:40 leptok redis[46869]: 1:M 23 Dec 2023 20:25:40.072 * Ready to accept connections tcp
```
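One related caveat, not shown above: by default, systemd user services only run while the user has an active session, so depending on the distribution’s defaults the container may still not come up after a reboot until lingering is enabled for that user. If that’s the case, it’s a one-time command:

```
# Allow this user’s services to start at boot and keep running
# without an active login session.
$ loginctl enable-linger $USER
```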
In summary, I quite liked how easy it was to leverage the strengths of both Podman and systemd through their integration. Being able to do that in a rootless way is definitely a huge plus. Before this, I always believed that managing Linux services was a root-only thing. But now that I think about it, I realize that when Docker was the only game in town, managing containers also required elevated privileges. I’m glad that we are moving away from this idea, piece by piece.