Hosting 10 apps on a 5€ VPS with Kubernetes, Cloudflare, Flux, GitHub, Sops and Codex
Recently I moved all my random small projects to a private server: shared resources, 4 vCPUs and only 8GB of memory. This goes surprisingly far for small apps running in containers and using Postgres. I’m using a virtual server from Hetzner, but similar offerings are available everywhere.
This is a short description of the tools I’ve used. I’m not going to give specific setup guidelines, as the Internet is already full of those and AI agents are pretty good at helping.
Ubuntu 24.04 as operating system
Something else would work equally well, but Ubuntu is what I’ve been running for quite a while (the path was likely Slackware => Gentoo => CentOS => Ubuntu). For servers I always use the long-term support versions.
microk8s as application server
A small and straightforward-to-install Kubernetes distribution. One might think Kubernetes is huge and total overkill for this kind of setup. It’s not. And it covers, in a “standard” way, many topics you would need to solve anyway:
- Deploying and keeping apps running
- Managing secrets
- Handling logs
- Getting traffic to our services
- Job scheduling
Initially I just had a Docker Compose configuration with the various apps, but that quite quickly became a bit annoying to manage.
A few of my apps are static websites. For hosting the files there are a few volumes set up in Kubernetes, stored on the local disk.
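As a sketch of that local-disk setup (the names and path here are hypothetical), a static site volume on a single-node microk8s could be a hostPath PersistentVolume plus a claim:

```yaml
# Hypothetical example: local-disk storage for one static site.
# hostPath ties the data to this single node, which is fine on a one-server setup.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hn-static
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /srv/static/hn
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hn-static
  namespace: hn
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
```

The nginx pod then mounts the claim and serves the files, and cronjobs can write into the same volume.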
Flux for GitOps style management
One challenge with personal projects is keeping configurations straight. I might work on something, then leave it for a few years and come back. I wanted a GitOps-style approach where all the configs are kept in git and I must go via git to get them deployed.
Flux watches the GitHub repo where the configs are stored. When something is pushed, it notices and synchronizes the changes into the Kubernetes configuration. If you have multiple Kubernetes setups, you can just make subfolders in the same GitHub repo.
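In Flux terms, that is a GitRepository source plus a Kustomization that applies a per-cluster subfolder. A sketch, with a made-up repo URL and folder layout:

```yaml
# Hedged sketch: Flux watching a config repo (URL and paths are placeholders).
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: k8s-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/k8s-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: vps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: k8s-config
  path: ./clusters/vps   # each cluster gets its own subfolder in the same repo
  prune: true            # delete resources that are removed from git
```

With `prune: true`, deleting a manifest from the repo also removes it from the cluster, so git really is the single source of truth.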
Sops for storing secrets in version control
Plenty of secrets are needed: database passwords, encryption keys and so on. With sops you can check the configs into version control with the secret parts encrypted. Thanks to the magic of public-key cryptography, this is easy to manage: the public key goes into the GitHub repo, so you have it on the development side, and the server keeps the private key.
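Assuming age keys are used (sops also supports GPG and cloud KMS), the repo-side configuration is a small .sops.yaml at the root; the key below is a placeholder:

```yaml
# Sketch of a .sops.yaml (the age public key is a placeholder).
# Files matching the regex get their data/stringData values encrypted;
# only the holder of the matching age private key (the server) can decrypt.
creation_rules:
  - path_regex: .*secret.*\.yaml$
    encrypted_regex: ^(data|stringData)$
    age: age1examplepublickeyplaceholderxxxxxxxxxxxxxxxxxxxxxxxxxx
```

With this in place, `sops -e -i my-secret.yaml` encrypts the file in place on the development machine before it is committed.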
Cloudflare for domains and routing
Cloudflare has quite generous free tiers. Everybody knows them for their reverse proxy and DDoS protection, but they have tons of other stuff. The domain registrar side is good and works well, and they also have a slick application platform with databases, queues and more.
In this case I’m using their Zero Trust stuff: cloudflared tunnels and the features for publishing apps via the tunnels. This way I don’t need to open any ports on the firewall for external access. The cloudflared tunnel daemon runs in Kubernetes. On the Cloudflare control panel I can simply connect a public hostname to the local address, like https://httpdump.com maps to http://httpdump.httpdump.svc.cluster.local (the first httpdump is the service name in Kubernetes, the second one the namespace).
graph LR
User["🌐 User"] -->|https://httpdump.com<br>https://dnsdig.net<br>https://hndistilled.com| CF["Cloudflare"]
CF -->|Tunnel| CFD
subgraph VPS["Linux VPS"]
subgraph MK8S["microk8s / Kubernetes"]
CFD["cloudflared<br>daemon (pod)"]
subgraph NS1["httpdump namespace"]
HTTP["httpdump app<br>pod"]
VK1["valkey<br>pod"]
end
subgraph NS2["dnsdig namespace"]
DNS["dnsdig app<br>pod"]
VK2["valkey<br>pod"]
end
subgraph NS3["HN namespace"]
HN["HN distilled<br>nginx pod"]
HNJOB["HN cronjobs"]
VOL["volume"]
end
end
PG["Postgres"]
end
CFD --> HTTP
CFD --> DNS
CFD --> HN
HN --> VOL
HTTP --> VK1
DNS --> VK2
HNJOB --> VOL
HNJOB --> PG
Cloudflare gives a couple of things. You of course get their caching, which can be handy. It also provides one extra layer of security, as your web servers are not exposed directly to the internet (but you obviously still need to keep them up to date!).
I also have some side projects running on Cloudflare Workers, using the same Postgres database. This can be done with Cloudflare Hyperdrive, a Postgres connection manager of sorts. It connects your remote Postgres server to the apps running on Cloudflare Workers, and it can do so via the Cloudflare tunnel.
By the way, with the Cloudflare tunnel you can even host apps from a computer running at home. Dynamic IP addresses or NAT are not a problem, since the tunnel daemon opens the connection from the host to the Cloudflare network.
Postgres as database
Not strictly part of this setup, but self-hosted Postgres is my database of choice. There are plenty of commercial options, but for small side projects I’m trying to minimize even small monthly running costs. Currently Postgres runs outside my Kubernetes setup.
The stuff I’m running is not critical, but some backups are nice to have. There are many ways to set those up. I’m just creating dumps of the databases to local files with a script running in crontab. Then, on my desktop machine at home, another script syncs the backup folder daily with rsync over ssh.
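A minimal version of that scheme fits in a few crontab entries; the database names, paths and ssh host below are made up:

```shell
# On the VPS: nightly dumps of each database to local files (crontab fragment).
15 3 * * * pg_dump --format=custom --file=/var/backups/pg/httpdump.dump httpdump
20 3 * * * pg_dump --format=custom --file=/var/backups/pg/dnsdig.dump dnsdig

# On the home desktop: pull the backup folder once a day over ssh.
0 6 * * * rsync -az --delete vps:/var/backups/pg/ ~/backups/vps-pg/
```

The custom format (`--format=custom`) keeps the dumps compressed and lets `pg_restore` restore individual tables if needed.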
Another good option would be backup space from Hetzner: for around 3–4 euros you get 1TB of storage, accessible via FTP, SFTP, Samba, etc. If you have more complex backup needs, Borg (https://www.borgbackup.org) is a pretty neat tool for handling those on Linux; it does compression, deduplication, encryption and so on.
Valkey for in-memory storage
Valkey is a Redis fork. It is used as a simple queue for httpdump and as a cache for dnsdig. Since nothing here is very critical, it’s not clustered (and anyway I have just a single server).
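A single-instance Valkey per namespace is just a small Deployment plus Service. A sketch, with an arbitrary image tag and memory limit:

```yaml
# Sketch of a non-clustered Valkey instance for one app's namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: valkey
  namespace: httpdump
spec:
  replicas: 1
  selector:
    matchLabels:
      app: valkey
  template:
    metadata:
      labels:
        app: valkey
    spec:
      containers:
        - name: valkey
          image: valkey/valkey:8
          ports:
            - containerPort: 6379
          resources:
            limits:
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: valkey
  namespace: httpdump
spec:
  selector:
    app: valkey
  ports:
    - port: 6379
```

Since Valkey speaks the Redis protocol, the apps can use any ordinary Redis client library against `valkey.httpdump.svc.cluster.local:6379`.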
GitHub - version control, CI, package repo, DependaBot
Source code for all projects is hosted on GitHub. I’m using Actions for continuous integration and for building the container images, which are then stored in GitHub’s package registry (once they start billing for the storage, I’ll need to start self-hosting that).
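A typical build-and-push workflow for that is short; this is a generic sketch (the tag scheme is a placeholder), using the stock Docker actions:

```yaml
# Sketch of a workflow that builds a container image and pushes it to
# GitHub's registry, authenticating with the workflow's own token.
name: build
on:
  push:
    branches: [main]
jobs:
  image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # needed to push to ghcr.io
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
```

No separate registry credentials are needed: the built-in `GITHUB_TOKEN` can push to the repo’s own package namespace.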
GitHub Actions has a pretty nice free tier, but it has its limits. With this kind of setup it’s fairly easy to host your own Actions runner in Kubernetes, and Codex can set that up. It’s not quite the same as GitHub’s own runners, which run the actions in lightweight virtual machines; doing that in Kubernetes would require more tweaking, which means some limitations when it comes to Docker.
Dependabot is a tool from GitHub that monitors your projects’ dependencies and automatically makes pull requests when security-related issues are identified. If you have good automated tests in the CI pipeline, you can just approve them, or even automate the approval and deployment.
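Enabling it is a small config file per repo; a sketch assuming an npm project (swap the ecosystem to match the project):

```yaml
# Sketch of .github/dependabot.yml: weekly checks for the app's npm
# dependencies and for the GitHub Actions used in the workflows.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

The second entry keeps the CI workflow’s own action versions (checkout, build-push and friends) patched as well.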
Loki, Fluentbit, Grafana
Assorted tools that take care of collecting logs and making them visible via a GUI. Not really needed in most cases; I use this mainly to check the logs from cronjobs (Kubernetes scheduled pods).
Codex for AI ops
I initially built the setup manually, but have since moved to AI ops. Tweaking config files and troubleshooting is not that interesting, so I leave it to Codex. It can also access the server via ssh. This means it can create the configs, commit and push them, and then log on to the server to verify everything is deployed correctly, fixing it if not.
If I want to install something new, I ask it along these lines:
Let’s create the configs for cloudflared tunnels. I want to run them in separate namespace. Prepare the files, including the secret templates. I’ll fill in the secrets and run sops on them.
Codex prepares the files. Then I get the secret, fill it in, run the sops encryption and let Codex continue:
Ok, everything is ready. Commit, push and check everything gets deployed
And Codex can actually build this kind of setup quite independently. I recently deployed another microk8s installation on a home server. This was mostly done by Codex, working from my k8s config repo and with ssh access to the home server.
Next steps
Currently this is just a single server. For playing around, it would be interesting to add a second node. To get some real reliability, this would require moving the volumes somewhere else. The Hetzner storage box supports Samba and such, so that could be an option.