Posts
-
Migrating "Tem Água Hoje?" to Cloudflare Workers
Last year I mentioned how some small dynamic parts of my main website - Myhro.info - were migrated to Cloudflare Workers. This time I made a similar move, but instead of migrating just a couple of parts, I moved a whole webapp there, including both backend and frontend. Before mentioning how much better Cloudflare Workers are one and a half years later - and they already rocked the first time! - let’s go through all the iterations that “Tem Água Hoje?”, a website created to better inform people about water distribution restrictions, went through during its lifetime.
Debut
The first iteration of the website came in the form of an all-JavaScript application running in the browser. Literally everything was JavaScript in one form or another: the frontend, the parser and even the database itself, which was structured in a JSON format. This happened for two reasons: first, I wanted to practice a bit of JavaScript. The second reason - and maybe the most important one - is that this way there was no backend whatsoever, no server-side part which I would have to worry about. The whole webapp was served as a static site straight from S3, which absurdly simplified its deployment and maintenance.
This version was good enough for a start. It was a bit annoying to generate the database dump, as I had to manually go to the browser and save it as a file, but as the water restrictions were lifted less than two months after the project started, I wasn’t worried about it anymore. Then, in the beginning of November this year, it was announced that water restrictions would be in place again for the following months. I had to decide whether to just feed the new data to the webapp or to rewrite a good chunk of it.
Component decoupling
Being a full-time Go programmer for a while made me biased towards rewriting the parser in Go and separating the actual logic about water availability from the frontend, moving it to a real backend. With a bit of Gin - my favourite web framework - and SQLite, I was able to offer a REST API, powered by a small-but-full-featured relational database. The result was nice and delightful to build. All of that while teaching my nephew, who is studying Informatics, about all these components and how they play together - a win-win for both of us.
I really liked the result: a vanilla JavaScript frontend, fetching information from a Go API, backed by SQLite. There was just one problem: I would still have to deploy, host, maintain and care about the backend availability - otherwise the whole webapp would become useless. It was hosted on a server that I use as a developer workstation from time to time, but that was too fragile. I contemplated deploying it to a Kubernetes cluster, but that would be too much. I had to find a better and easier - preferably cheaper - alternative.
Workers to the rescue
As I mentioned, Cloudflare Workers have evolved considerably since their launch. I’ve been following the new features being offered, but wasn’t playing with them - something which bothered me a bit, as I’ve been in love with the platform since day one. This seemed like the perfect opportunity to try new things out, like their key-value store, Workers KV. Being a somewhat limited storage solution, where literally only string keys and values are available, it required me to come up with a new modeling for the database, as no relational features like columns or queries would be available.
To be honest, I actually liked the limitations. There’s beauty in how simple the environment is, where one can use basically a single language - JavaScript, if you don’t consider the WebAssembly option - and what is almost literally a hash table as the storage layer. In the end, that actually simplified the logic that checks for water interruption intervals, without forcing us to go back to the first JSON-based database format - which wouldn’t be supported anyway, as it wasn’t based only on strings.
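To illustrate what a string-only model can look like in practice, here’s a minimal sketch of a Worker reading from KV. The INTERVALS namespace binding, the "neighborhood:weekday" key scheme and the "13-21" value format are hypothetical, made up for this example rather than taken from the actual webapp:

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Hypothetical key scheme: one key per neighborhood and weekday.
  const neighborhood = new URL(request.url).searchParams.get('n')
  const weekday = new Date().getDay()
  // KV stores only strings, so the interval is serialized as e.g. "13-21",
  // meaning the water supply is interrupted from 13h to 21h.
  const value = await INTERVALS.get(`${neighborhood}:${weekday}`)
  if (value === null) {
    return new Response('no interruption scheduled\n')
  }
  const [start, end] = value.split('-').map(Number)
  const hour = new Date().getHours()
  const interrupted = hour >= start && hour < end
  return new Response(interrupted ? 'water is off\n' : 'water is on\n')
}

A lookup becomes a single key fetch - no queries, no indexes - which is pretty much all a schedule-checking endpoint needs.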
Bonus points and conclusion
One interesting thing is that Cloudflare Workers can now also host static files, in the form of Cloudflare Workers Sites - although not literally. What they do is offer some logic to store file contents as strings in the KV store and serve them in response to HTTP requests. A really clever usage of two simple features, combined to form a new one. By also deploying the frontend to Cloudflare Workers, S3 wasn’t needed anymore at all, simplifying the infrastructure and deployment procedures even further.
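Conceptually, the trick boils down to something like the sketch below. The real Workers Sites implementation does a lot more - content types, caching, path normalization - so this is only the gist of it, with a hypothetical STATIC_FILES namespace binding:

addEventListener('fetch', (event) => {
  event.respondWith(serveAsset(event.request))
})

async function serveAsset(request) {
  // Map the URL path to a KV key, e.g. "/" -> "index.html".
  const path = new URL(request.url).pathname.replace(/^\//, '') || 'index.html'
  const body = await STATIC_FILES.get(path)
  if (body === null) {
    return new Response('not found\n', { status: 404 })
  }
  return new Response(body)
}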
So, now we have temagua.myhro.info on a much simpler, faster, more resilient - and also cheaper - platform. I don’t have to care about it being up and running all the time - there are way more competent engineers at Cloudflare doing that for me. And, being free from the maintenance hassle, I was able to invest time in much more interesting things, like porting the frontend to React, something I did for the first time in my life.
Keep rocking, Cloudflare!
-
Local DNS-over-TLS (DoT) forwarder with CoreDNS
The first time I heard about DNS-over-TLS (DoT) was about a year ago, when Cloudflare launched their 1.1.1.1 public resolver. It immediately appeared to be a more natural successor to regular plain-text DNS than DNS-over-HTTPS (DoH). The problem is that, back then, it was not so easy to use. Pages like the client list in the DNS Privacy Project mention lengthy configuration options for forwarders like BIND/Knot DNS/Unbound, which aren’t exactly known for being user-friendly. What I wanted was something like Simple DNSCrypt on Windows, but for Linux/macOS and supporting DoT instead of DNSCrypt, which is a different protocol.

After a while I realized that CoreDNS supported DoT in its forward plugin. By then, the only use-case I knew for CoreDNS was that it is commonly used for service discovery inside Kubernetes clusters. Starting from this point, I began investigating how hard it would be to use it on my laptop, running macOS, and on a couple of remote servers that I use as workstations, running Linux. A DoT proxy/forwarder that supported caching seemed exactly what I was looking for.
The architecture goes like this:
Applications
     |
127.0.0.1:53 (forwarder, plain text)
     |
1.1.1.1:853 (upstream, TLS-encrypted)
And the needed configuration is, literally, four lines:
. {
  cache
  forward . tls://1.1.1.1 tls://1.0.0.1
}
In this case, CoreDNS will forward all (.) DNS queries to 1.1.1.1 and 1.0.0.1 over TLS, load-balancing between them. It will also cache the responses, respecting their time-to-live (TTL), answering repeated queries with sub-millisecond latency.

Understanding how simple the needed CoreDNS configuration is, the next step is to install it and configure its service to start on boot. The problem is that, being a relatively new project, it isn’t available on most package managers:
$ brew search coredns
No formula or cask found for "coredns".
$ brew info coredns
Error: No available formula with the name "coredns"
$ apt search coredns
Sorting... Done
Full Text Search... Done
$ apt show coredns
N: Unable to locate package coredns
E: No packages found
But there’s a bright side: CoreDNS is a Go project. As all its dependencies are statically linked, installing it is a matter of downloading the corresponding release for the target Operating System/Architecture from the project’s GitHub releases page - in exchange for a slightly large binary (> 40MB). Putting it under /usr/local/bin/coredns, it can be easily configured to be treated as a service on both macOS and Linux (systemd):

macOS: /Library/LaunchDaemons/coredns.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>coredns</string>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/bin/coredns</string>
      <string>-conf</string>
      <string>/usr/local/etc/coredns/coredns.conf</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
  </dict>
</plist>
Linux (systemd): /lib/systemd/system/coredns.service
[Unit]
Description=CoreDNS
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/coredns -conf /etc/coredns.conf
Type=simple

[Install]
WantedBy=multi-user.target
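On Linux, assuming the unit file above is in place, making the service start on boot is then a single sudo systemctl enable coredns away; on macOS, the RunAtLoad key in the plist already takes care of that.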
After starting it with sudo launchctl load -w /Library/LaunchDaemons/coredns.plist on macOS, or sudo service coredns start on Linux, we can confirm it’s working by analysing the DNS packets that go over the network. It’s a matter of creating a tcpdump session on one terminal:

$ sudo tcpdump -n 'host 1.1.1.1 or host 1.0.0.1'
tcpdump: data link type PKTAP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pktap, link-type PKTAP (Apple DLT_PKTAP), capture size 262144 bytes
And running dig on another:

$ dig blog.myhro.info @1.1.1.1
(...)
$ dig blog.myhro.info @127.0.0.1
(...)
The first request, done straight to the upstream server, can be literally seen on the wire together with its response:
15:29:43.200468 IP 192.168.0.2.56184 > 1.1.1.1.53: 23554+ [1au] A? blog.myhro.info. (56)
15:29:43.215999 IP 1.1.1.1.53 > 192.168.0.2.56184: 23554$ 2/0/1 A 104.27.179.51, A 104.27.178.51 (76)
While the second one, which goes through CoreDNS, can’t be sniffed:
15:33:41.238099 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [S], seq 2286465987, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 464696127 ecr 0,sackOK,eol], length 0
15:33:41.251513 IP 1.1.1.1.853 > 192.168.0.2.55911: Flags [S.], seq 1040019771, ack 2286465988, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 10], length 0
15:33:41.251607 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [.], ack 1, win 4096, length 0
15:33:41.251863 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [P.], seq 1:192, ack 1, win 4096, length 191
15:33:41.267167 IP 1.1.1.1.853 > 192.168.0.2.55911: Flags [.], ack 192, win 30, length 0
15:33:41.267608 IP 1.1.1.1.853 > 192.168.0.2.55911: Flags [.], seq 1:1461, ack 192, win 30, length 1460
15:33:41.268328 IP 1.1.1.1.853 > 192.168.0.2.55911: Flags [P.], seq 1461:2667, ack 192, win 30, length 1206
15:33:41.268392 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [.], ack 2667, win 4077, length 0
15:33:41.284748 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [P.], seq 192:285, ack 2667, win 4096, length 93
15:33:41.297023 IP 1.1.1.1.853 > 192.168.0.2.55911: Flags [P.], seq 2667:2718, ack 285, win 30, length 51
15:33:41.297104 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [.], ack 2718, win 4095, length 0
15:33:41.297403 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [P.], seq 285:316, ack 2718, win 4096, length 31
15:33:41.297495 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [P.], seq 316:401, ack 2718, win 4096, length 85
15:33:41.308614 IP 1.1.1.1.853 > 192.168.0.2.55911: Flags [.], ack 401, win 30, length 0
15:33:41.311106 IP 1.1.1.1.853 > 192.168.0.2.55911: Flags [P.], seq 2718:2877, ack 401, win 30, length 159
15:33:41.311178 IP 192.168.0.2.55911 > 1.1.1.1.853: Flags [.], ack 2877, win 4093, length 0
Now it’s a matter of configuring the system to use 127.0.0.1 as the DNS server.

P.S.: it’s important to notice that using DNS-over-TLS together with regular HTTPS connections is still not enough to guarantee total browsing privacy. The target hostname is still sent in plain text during the TLS handshake. That will change when the encrypted SNI extension to the TLS 1.3 protocol becomes widely available. Given this particularity, Cloudflare offers a page to check how secure/private your browsing experience is.
Update: As pointed out by Miek Gieben, CoreDNS author, the manual installation steps mentioned here can be avoided. The coredns/deployment repository contains utilities for deploying it on different platforms, like macOS and Debian-like systems.
-
Quick and easy VPNs with WireGuard
WireGuard is the new kid on the block in the world of VPNs. It has been receiving a lot of attention lately, especially after Linus Torvalds himself praised the project last month, resulting in in-depth guides about its characteristics being published. The problem is that practical guides about its setup, including the official one, don’t show how quick and easy it is to do that. They are full of lengthy, complex and unneeded commands, when everything that is needed is a couple of simple configuration files.
This guide won’t describe how to actually install WireGuard, as this is thoroughly covered by the official documentation for every supported platform. It consists of a loadable kernel module that allows virtual WireGuard network interfaces to be created. Here, an EC2 instance located in Ireland and a virtual machine (based on Vagrant/VirtualBox) in Germany, both running Ubuntu, will be connected.
The first step is to generate a pair of keys for every machine. WireGuard’s authentication system doesn’t rely on passwords or certificates that involve hard-to-maintain Certificate Authorities (CAs). Everything is done using private/public keys, like SSH authentication:
$ wg genkey | tee privatekey | wg pubkey > publickey
$ ls -lh
total 8.0K
-rw-rw-r-- 1 ubuntu ubuntu 45 Sep 15 14:31 privatekey
-rw-rw-r-- 1 ubuntu ubuntu 45 Sep 15 14:31 publickey
In the server, the /etc/wireguard/wg0.conf configuration file will look like:

[Interface]
PrivateKey = 4MtNd3vq/Zb5tc8VgoigLyuONWoCQmnzLKFNuSYLiFY=
Address = 192.168.255.1/24
ListenPort = 51820
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; sysctl net.ipv4.ip_forward=1
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; sysctl net.ipv4.ip_forward=0

[Peer]
PublicKey = 0+/w1i901TEFRmEcUECqWab/nwmq0dZLehMzSOKUo04=
AllowedIPs = 192.168.255.2/32
Here’s an explanation of its fields:
- PrivateKey is the server’s private key. It proves that the server is who it says it is, and the same will be valid for the clients on the other end. One will be able to validate the identity of the other.
- Address is the IP address and network mask for the VPN network.
- ListenPort tells on which UDP port the server will listen for connections.
- PostUp holds firewall rules and system commands that are needed for the server to act as a gateway, forwarding all network traffic. PostDown disables them when the VPN is deactivated. eth0 is the name of the main network interface, which can be something different, like ens5, if systemd’s Predictable Network Interface Names are being used.
- PublicKey and AllowedIPs define which peers can connect to this server, through a combination of IP/key pairs. It’s important to notice that the IPs defined here are within the VPN network range. They are not the actual IPs which the client will use to connect to the server over the internet.
The client will also have a /etc/wireguard/wg0.conf configuration file, but it will be a little bit different:

[Interface]
PrivateKey = yDZjYQwYdsgDmySbUcR0X7b+rdwfZ91rFYxz6m/NT08=
Address = 192.168.255.2/24

[Peer]
PublicKey = e1HJ0ed/lUmCDRUGjCwFZ9Qm2Lt14jNE77TKXyIS1yk=
AllowedIPs = 0.0.0.0/0
Endpoint = ec2-34-253-52-138.eu-west-1.compute.amazonaws.com:51820
The PrivateKey and Address fields here have the same meaning as in the server. The difference is that the Interface section won’t contain the server parts, like the listening port and firewall commands. The Peer section contains the following fields:

- PublicKey is the public key of the server which the client will connect to.
- AllowedIPs is interesting here. It defines the networks whose traffic will be forwarded to the server. 0.0.0.0/0 means that all traffic, including connections that go to the internet, will use the server as a gateway. Using just the VPN network instead, like 192.168.255.0/24, would create a P2P VPN, where the client and server are able to reach each other, but any other traffic (e.g. to the internet) wouldn’t be forwarded through this connection.
- Endpoint is the hostname or IP and port which the client will use to reach the server in order to establish the VPN connection.
With both machines configured, the VPN interface can be enabled on the server:
[ubuntu@ip-172-31-14-254:~]$ sudo wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip address add 192.168.255.1/24 dev wg0
[#] ip link set mtu 8921 dev wg0
[#] ip link set wg0 up
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; sysctl net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
If a message like the following is shown in this step:
Warning: `/etc/wireguard/wg0.conf' is world accessible
This means that the configuration file permissions are too broad - and they shouldn’t be, as there’s a private key in there. This can be fixed with sudo chmod 600 /etc/wireguard/wg0.conf.

The command sudo wg shows the VPN status:

[ubuntu@ip-172-31-14-254:~]$ sudo wg
interface: wg0
  public key: e1HJ0ed/lUmCDRUGjCwFZ9Qm2Lt14jNE77TKXyIS1yk=
  private key: (hidden)
  listening port: 51820

peer: 0+/w1i901TEFRmEcUECqWab/nwmq0dZLehMzSOKUo04=
  allowed ips: 192.168.255.2/32
From this point, the VPN can be enabled on the client using the same commands:
[vagrant@ubuntu-xenial:~]$ sudo wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip address add 192.168.255.2/24 dev wg0
[#] ip link set mtu 1420 dev wg0
[#] ip link set wg0 up
[#] wg set wg0 fwmark 51820
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[vagrant@ubuntu-xenial:~]$ sudo wg
interface: wg0
  public key: 0+/w1i901TEFRmEcUECqWab/nwmq0dZLehMzSOKUo04=
  private key: (hidden)
  listening port: 47603
  fwmark: 0xca6c

peer: e1HJ0ed/lUmCDRUGjCwFZ9Qm2Lt14jNE77TKXyIS1yk=
  endpoint: 34.253.52.138:51820
  allowed ips: 0.0.0.0/0
  latest handshake: 1 second ago
  transfer: 92 B received, 292 B sent
[vagrant@ubuntu-xenial:~]$ curl https://myhro.info/ip
curl/7.47.0
34.253.52.138
IE
The most impressive part is how quickly the connection is established. There are no multi-second handshakes like in other VPN solutions and it can be used instantly. After playing with it, it becomes easier to understand why WireGuard is attracting so much attention. Especially because it’s so unobtrusive that one can use it without even realizing it’s turned on.
-
How to mock Go methods
Warning: this post wouldn’t exist if it wasn’t for the help of my long-time friend, former university and work colleague, Fernando Matos. We discussed the possibilities for a few hours in order to figure out the following implementations. I hope we can work together on a daily basis again in the future.
Imagine the following Go code:
main.go
package main

import "time"

type Client struct {
    timeout time.Duration
}

func New() Client {
    return Client{timeout: 1 * time.Second}
}

func (c *Client) Fetch() string {
    time.Sleep(c.timeout)
    return "actual Fetch"
}

func main() {
    c := New()
    c.Fetch()
}
In this example, the Fetch() method is just sleeping for a pre-defined duration, but imagine that it is a real external API call, involving a slow and expensive network request. How can we test that?

main_test.go
package main

import "testing"

func TestFetch(t *testing.T) {
    c := New()
    r := c.Fetch()
    t.Fatal(r)
}
If the actual Fetch() implementation is called, the test execution will take too long:

$ go test
--- FAIL: TestFetch (1.00s)
    main_test.go:8: actual Fetch
FAIL
exit status 1
FAIL    _/Users/myhro/tmp   1.009s
No one is going to wait a few seconds on each test run where this method is called a couple of times. A naive approach to circumvent that would be trying to replace the method with another one with the same name, which avoids the slow operation:
func (c *Client) Fetch() string {
    return "mocked Fetch"
}
But in Go, this isn’t possible:
./main_test.go:5:6: (*Client).Fetch redeclared in this block
    previous declaration at ./main.go:13:6
So we have to look for another solution, like the delegation design pattern. Instead of having the Fetch() method do what it is supposed to do, it delegates its responsibility to an encapsulated object.

main.go
package main

import "time"

type Client struct {
    delegate clientDelegate
    timeout  time.Duration
}

type clientDelegate interface {
    delegatedFetch(time.Duration) string
}

func (c *Client) delegatedFetch(t time.Duration) string {
    time.Sleep(t)
    return "actual Fetch"
}

func New() Client {
    n := Client{
        delegate: &Client{},
        timeout:  1 * time.Second,
    }
    return n
}

func (c *Client) Fetch() string {
    return c.delegate.delegatedFetch(c.timeout)
}

func main() {
    c := New()
    c.Fetch()
}
This way, we can replace the implementation of this inner object without having to override the entire object that is being tested:
main_test.go
package main

import (
    "testing"
    "time"
)

type fakeClient struct{}

func (c *fakeClient) delegatedFetch(t time.Duration) string {
    return "mocked Fetch"
}

func TestFetch(t *testing.T) {
    c := New()
    c.delegate = &fakeClient{}
    r := c.Fetch()
    t.Fatal(r)
}
Now the mocked Fetch() is called and the test execution finishes in no time:

$ go test
--- FAIL: TestFetch (0.00s)
    main_test.go:18: mocked Fetch
FAIL
exit status 1
FAIL    _/Users/myhro/tmp   0.006s
So the delegation pattern approach works, but there are a few drawbacks:
- It needs an interface that is going to be used only by the methods that are supposed to be mocked;
- The inner object can’t see its parent attributes, so they have to be passed as arguments;
- This looks too verbose and there should probably be a shorter/simpler way to do that.
One cool thing about Go functions is that they can be treated as types, so they can be used as struct members or passed as arguments to other functions. This allows us to do things like:
main.go
package main

import "time"

type fetchType func(time.Duration) string

type Client struct {
    fetchImp fetchType
    timeout  time.Duration
}

func sleepFetch(t time.Duration) string {
    time.Sleep(t)
    return "actual Fetch"
}

func New() Client {
    n := Client{
        fetchImp: sleepFetch,
        timeout:  1 * time.Second,
    }
    return n
}

func (c *Client) Fetch() string {
    return c.fetchImp(c.timeout)
}

func main() {
    c := New()
    c.Fetch()
}
And to replace the Fetch() implementation when testing:

main_test.go
package main

import (
    "testing"
    "time"
)

func FakeFetch(t time.Duration) string {
    return "mocked Fetch"
}

func TestFetch(t *testing.T) {
    c := New()
    c.fetchImp = FakeFetch
    r := c.Fetch()
    t.Fatal(r)
}
Achieving the same results:
$ go test
--- FAIL: TestFetch (0.00s)
    main_test.go:16: mocked Fetch
FAIL
exit status 1
FAIL    _/Users/myhro/tmp   0.007s
It’s interesting to notice that the fetchType declaration itself can be omitted, resulting in:

type Client struct {
    fetchImp func(time.Duration) string
    timeout  time.Duration
}
Thus avoiding the creation of a dummy interface, type or struct only for mocking it later.

Updates:
- Sandor Szücs pointed out that we have to care about not unintentionally exporting fake/internal methods or structs. Thanks!
-
How I finally migrated my whole website to the cloud
This is going to be a tale of blood, sweat and tears, describing what I experienced over multiple years while trying to get rid of maintaining my own servers to host my website. This battle lasted for, literally, over a decade, until I was finally able to migrate every piece of code that comprises it (not just this blog). Since this week, it runs on a cloud infrastructure spread over multiple providers, where I have little to no worries about its maintenance, which offers more reliability than I actually need and is also pretty cheap.
The myhro.info domain was registered in April 2007. Initially I had no real intentions of hosting anything on top of it, as it was more like “oh, now I have my first domain!”. In the first years, the web hosting was provided by 000webhost. It was free, but I also had no guarantee about its availability and faced some downtime every once in a while. This continued until, after finding some interesting offers on Low End Box, I migrated it to its own VPS server by the end of 2010. I remember the year because it was my first one in the Information Systems course, around the same time I got my first part-time System Administrator job. The experience I got maintaining my own Linux + Apache + PHP + MySQL (LAMP) server was crucial in the beginning of my professional career and some learnings from that time are still useful to me these days.

In April 2011 this blog was started on a self-hosted WordPress installation, on the same previously mentioned server. At first I had almost no service whose availability I really had to care about - probably the only exception was the Myhro.info URL Shortener (hosted under the myhro.net domain). The problem is that, after starting a blog, I had to worry about it being online at all times - otherwise people would not be able to read what I spent hours writing.

Maintaining your own WordPress instance is not an easy job, even for small blogs. I spent endless hours fighting comment spam and keeping the installation secure and up-to-date. It was such a hassle that in less than two years, in the beginning of 2013, it was migrated to OctoPress, a static site generator in blog format. Publishing posts was now a matter of copying HTML files over rsync, but I still had to maintain an HTTP server for it. That’s why this blog was moved to GitHub Pages in 2014 and to Jekyll in 2015, where it is still hosted currently. Now I was free from maintaining a web server for it and this became a GitHub problem. At the same time the blog was migrated to Jekyll, its HTTPS support was re-enabled using Cloudflare (something that was lost in the GitHub Pages migration).

Migrating blog.myhro.info to GitHub Pages + Cloudflare was marvelous and I haven’t worried about its maintenance ever since - not to mention that it also didn’t cost me a cent. Now I had to take care of other parts of my website that required server-side scripts, like myhro.info/ip: a page that shows the visitor’s IP address and user agent in a simple plain text format. It’s really handy to use in the command line with curl and, in my experience, faster than using ifconfig.me. The main issue with this service is that it was written in PHP.

I don’t remember exactly when my first try at migrating the IP page to a cloud service was, but it was probably between 2015 and 2016, when I tried AWS Lambda, rewriting it in a supported language. This didn’t work, as to make a Lambda function available via HTTP one has to use the Amazon API Gateway, and it didn’t offer the possibility of using a simple endpoint like myhro.info/ip. I think this can be achieved with Amazon CloudFront, routing a specific path to a different origin, but it seemed too much work (and involved the usage of a bunch of different services) to achieve something that is really simple in nature. Trying to do the same using Google Cloud Functions yielded a similar experience.

After these frustrating experiences, I stopped looking for alternatives. Maybe the technology to host a few dynamic pages (in this case, only one) for a website with mostly static content wasn’t there yet. Then, after two hopeless years, I read the announcement of Cloudflare Workers, which seemed exactly what I wanted: run code on a cloud service to answer specific requests. Finally, after it reached open beta and general availability, in 2018 I could truly and easily deploy small “serverless” applications tightly integrated with an already existing website. For that I just had to learn a little bit of JavaScript.
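To give an idea of how simple such a Worker can be, here’s a minimal sketch of an IP-echoing endpoint. This is not the actual code behind myhro.info/ip, just an illustration of the shape it can take - CF-Connecting-IP is the header in which Cloudflare passes the visitor’s original address, and request.cf.country carries its GeoIP-resolved country code:

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Cloudflare-provided request metadata.
  const ip = request.headers.get('CF-Connecting-IP')
  const userAgent = request.headers.get('User-Agent')
  const country = request.cf ? request.cf.country : ''
  const body = `${userAgent}\n${ip}\n${country}\n`
  return new Response(body, {
    headers: { 'Content-Type': 'text/plain' },
  })
}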
It took me years of waiting and a few hours in a weekend to write JavaScript replacements for the PHP and Python implementations (in the end I also migrated heroku.myhro.info, a service that returns random Heroku-style names), but I had finally reached the Holy Grail. Now it was a matter of moving the static parts of the website to Amazon S3, which is quite straightforward. S3 doesn’t offer HTTPS connections for static websites hosted there, but as I already used Cloudflare, this was a no-brainer.
So, Cloudflare Workers aren’t free (the minimum fee is $5/month), nor are they perfect. There are some serious limitations, like the “one worker per domain” restriction on non-Enterprise accounts, which can be a blocker for larger projects. But in this case, where I wanted a couple of dynamic pages for a mostly static website, they fit perfectly. I’m also happy to pay an amount I consider reasonable for a service I’ve been using for free for years. Looking at the company’s recent innovations, they may become even better in the future.