Building a Homelab, Part 0

May 26, 2023 | 30 min. read

For the past few years, I've been running a pretty modest homelab. It's nothing super fancy; it's basically a couple of Raspberry Pis running some software that I find useful or interesting. However, in the past year I've fallen down a rabbithole (those rabbitholes being r/homelab and r/selfhosted) and have caught the itch to really beef things up. As part of that project, I figured I'd document it here in a series of blog posts. Before I do, though, I should probably illustrate where the homelab is at right now as a starting point.

Square One

Yeah, things are pretty messy at the moment. I'd like to make the excuse that I just moved and this is temporary, but it's been "temporary" since I moved into my apartment ~11 months ago. What you see before you is the lab in its entirety. In terms of hardware, it consists of:

  • 2 Raspberry Pi 3Bs
  • 1 Raspberry Pi 4B
  • 1 Synology NAS
  • 1 8-port TP-Link unmanaged Ethernet switch
  • Modem & Router, free from ISP

Now, for the software:

NAS

The NAS is pretty much stock. I was tempted to set up something like FreeNAS or TrueNAS and really build my network storage from scratch, but I decided to be lazy, and I'm pretty happy with that decision. I currently have about 8TB of storage in each of the two drive bays, and I'm using RAID1 for redundancy. I'm not super knowledgeable about storage systems, so I doubt that's optimal, but it works for now.

This serves as storage for basically everything else running in the homelab - TV, movies, eBooks, etc.

Jellyfin

I run Jellyfin, an open source media system, on the Raspberry Pi 4B (hostname ixion). I'm super happy with it so far; before finding Jellyfin, I was just mounting the NAS via NFS and playing video files on my machine locally via mpv. mpv is great, but there are a couple of features I was really missing compared to Netflix/Amazon Prime/etc.:

  • Syncing progress across devices. I love being able to watch an episode or two of a show on my desktop, and then be able to pick up right where I left off if I want to watch from my laptop on the couch.
  • Library and metadata management. It's just nice to be able to sort through all my shows/movies visually or by directory/genre/etc., rather than manually via command line.
  • Mobile and FireStick clients. mpv does have an Android app, but sifting through the filesystem on smartphones is always super clunky and confusing in my experience. Also, if I want to watch something with others, being able to use my FireStick to watch things on a TV is super handy.

I think the only hiccups I've experienced with Jellyfin have been around fetching metadata. Sometimes, when a series doesn't have numerical season names (e.g., Demon Slayer doesn't really have a "Season 2"; instead it's the "Mugen Train Arc"), it has a hard time recognizing the series, and the thumbnails/episode titles end up missing or mismatched. That's super minor, though, and the metadata auto-fetching feature is reliable and impressive enough that I'm okay with it performing poorly on an edge case or two.

GlueTUN & Deluge

I run both GlueTUN and Deluge on ixion, the same Pi that runs Jellyfin. I'm also pretty happy with these pieces of software. I've run Deluge locally for years now, but I wanted to be able to seed torrents 24/7 without having to run a VPN on my local machine.

That's where GlueTUN comes in handy (and why I mention it and Deluge in tandem). GlueTUN is a super cool Docker service that basically acts as a network interface for other containers and routes all their traffic through whatever VPN service you desire. The way it works is that GlueTUN and Deluge live in the same docker-compose.yml file, and Deluge's network_mode is set to service:gluetun. Simple as that! The network_mode feature of Docker is unfortunately not well documented, but I think the way it works is that the client container simply shares all the network interfaces of the service container. For extra safety, I configured Deluge to route all traffic through the tun0 interface. If GlueTUN goes down for whatever reason, the tun0 interface within the Deluge container will cease to function. Forcing all traffic through tun0 prevents IP leaks just in case the container decides to route through en0 or another interface. I don't know if that's even possible (again, I couldn't find much comprehensive info on the network_mode feature), but this method seems far more reliable than a VPN client's "kill switch".
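To make the setup concrete, here's a minimal sketch of the kind of docker-compose.yml described above. The image names are the commonly used community ones and the environment variables are from memory, so treat them as assumptions and check the GlueTUN wiki for your provider's exact settings:

```yaml
version: "3"
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN            # needed to create the tun device
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=openvpn
      # provider credentials/keys go here; variable names depend on the provider
    ports:
      - "8112:8112"          # Deluge's web UI must be published here, on gluetun
  deluge:
    image: linuxserver/deluge
    network_mode: "service:gluetun"   # share gluetun's network namespace
    depends_on:
      - gluetun
```

One non-obvious consequence of network_mode: service:gluetun is that port mappings belong on the gluetun container; publishing ports on the deluge service itself won't work, because it has no network stack of its own.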

There are off-the-shelf configurations for pretty much any VPN service provider, but I've used Mullvad for years and been pretty happy with it. (I'm under no illusion that any commercial VPN is a bulletproof way to remain anonymous or protect my privacy or whatever; I'm really only using it to mask my IP address while torrenting. Maybe there's a more efficient way to do that, but after a brief survey this seems like the most popular method.) I had some weird issues setting up Mullvad via WireGuard (although I could have just been bungling the port forwarding stuff), so I had to settle for OpenVPN. Regardless, everything has been humming along nicely for a while now.

The only issue I've had with Deluge is performance. The Raspberry Pi is a tough little machine, but when I manage to find a really hot torrent with tons of seeders (like, a few thousand) it can start to choke once download rates get high enough. I've only had it happen a few times, but usually I start the download and notice that the web interface keeps crashing/hanging until the download is completed. I was originally running these services on one of the 3B Pis, but when I moved it over to the 4B (which has a couple more gigs of RAM) the issue seemed to abate a little. I'm not sure if the issue is related to the volume of network traffic or filesystem writes or both, but I have yet to investigate too deeply.

Calibre

The final service that lives on ixion is Calibre, an eBook management system. I do pretty much all my reading on my Kindle Paperwhite these days, so being able to store and manage all my eBooks in one place is really handy. (I always thought I was a "paper books only" type of person for all the usual reasons: I love the feel of a physical book in my hands, having a physical library makes me feel like a fancy scholar, it's nice to see progress through a book physically, and reading on a screen always hurt my eyes. But after moving and having to lug around boxes and boxes of books, I'm more than happy to read from a little plastic square weighing less than a pound.) I have the desktop client installed on a few devices, but I also run calibre-web so I can have a web interface without having to install the desktop client and mount the NAS on every device.

I also heavily utilize Calibre and calibre-web's Send to Kindle feature. I'm not sure why Amazon offers this functionality, since they probably lose more money on it than they make, but every Kindle is assigned an email address that lets you wirelessly upload books by sending it an email with any supported filetype as an attachment. I had to set up a weird forwarding system through my personal email (I wish I could've just self-hosted an email server dedicated to send-to-kindle stuff, but I predict Amazon would just give me the finger and blackhole all my mail; email seems basically impossible to self-host these days), but now I can use Calibre to send any eBook in my library to my Kindle with the click of a button from anywhere I can interact with calibre-web.
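For the curious, the mechanics of the send-to-kindle flow are just ordinary email with an attachment. Here's a minimal Python sketch of what that looks like with the standard library; the addresses and SMTP host are hypothetical placeholders, and Calibre's real implementation surely differs:

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path


def build_kindle_message(sender: str, kindle_addr: str, book_path: str) -> EmailMessage:
    """Build an email with an eBook attached, the way Send to Kindle expects."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = kindle_addr
    msg["Subject"] = "homelab delivery"  # the subject doesn't matter for plain delivery
    book = Path(book_path)
    msg.add_attachment(
        book.read_bytes(),
        maintype="application",
        subtype="octet-stream",  # generic type; the filename extension is what matters
        filename=book.name,
    )
    return msg


def send_kindle_message(msg: EmailMessage, smtp_host: str, user: str, password: str) -> None:
    """Ship the message through an authenticated SMTP relay (e.g., your mail provider)."""
    with smtplib.SMTP_SSL(smtp_host) as smtp:
        smtp.login(user, password)
        smtp.send_message(msg)
```

The forwarding-system headache comes from that last function: Amazon only accepts mail from addresses on the Kindle's approved-senders list, and self-hosted SMTP rarely survives modern spam filtering.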

The only beef I have with Calibre so far is that I wish it acted more like Jellyfin, in that I'd be able to designate different folders as different libraries or collapse book series while browsing (e.g., while browsing TV in Jellyfin I don't see every single episode at once - I see all my shows, then I click a show to see all its seasons, then I click a season to see all its episodes). An illustrative anecdote: when I set up Calibre for the first time, my girlfriend also imported all of her eBooks onto the server. Among her collection was a folder with every single Agatha Christie book ever written. Agatha Christie was, if anything, a prolific writer. I love a good murder mystery as much as the next person, but at least 50% of the books I scroll through are Hercule Poirot stories. This might just be me shooting myself in the foot and demanding to see a podiatrist, but it would be really nice if all of those books could be collapsed into just a few items representing the handful of series they belong to.

Traefik

On the other Raspberry Pi 3B, athena, I have Traefik running as a reverse proxy to all the aforementioned services. I'm probably underusing Traefik, since it's not acting as a Kubernetes Ingress, nor is it able to hook into the Docker Engine of all the different hosts in the lab. In fact, I probably could have just used Nginx as a reverse proxy and stuck with something boring. (I'll admit, I saw some buzz on r/homelab about Traefik the same week a coworker mentioned it, so I was definitely following the hype for little practical reason.) However, if I ever do decide to consolidate all my Dockerized services on a single host or go the Kubernetes route or whatever, Traefik is already in place to make things easier.
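As a rough illustration of what "Traefik as a plain reverse proxy" looks like, here's a sketch of a file-provider dynamic config routing one hostname to Jellyfin. The hostname is a placeholder, and 8096 is just Jellyfin's default HTTP port:

```yaml
# dynamic.yml -- loaded by Traefik's file provider; names here are illustrative
http:
  routers:
    jellyfin:
      rule: "Host(`jellyfin.example.com`)"
      service: jellyfin
  services:
    jellyfin:
      loadBalancer:
        servers:
          - url: "http://ixion:8096"   # Jellyfin's default port on the Pi 4B
```

The nice part of this shape is that adding a new self-hosted service is just another router/service pair pointing at a different host and port.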

Tailscale

All of the hosts in my homelab (as well as my desktop, laptop, and phone) are in a Tailscale VPN (or a Tailnet, as the cool kids say 😎). The Internet is full of people raving about Tailscale, and I definitely agree with them. It feels like magic to set it up for the first time. It's a one-click install (it even has a NixOS module, so on a few of my machines it's zero clicks!) and you're off to the races. I used it to set up a Minecraft server a few months ago, and it was absolutely painless compared to the usual port forwarding setup.
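For reference, the "zero clicks" NixOS setup amounts to a couple of lines in configuration.nix. This is a sketch from memory, so double-check the option names against the NixOS manual:

```nix
# configuration.nix (fragment)
services.tailscale.enable = true;
# trust the tailscale interface so Tailnet peers can reach local services
networking.firewall.trustedInterfaces = [ "tailscale0" ];
networking.firewall.allowedUDPPorts = [ 41641 ];  # Tailscale's default WireGuard port
```

After a rebuild, a one-time `tailscale up` still has to be run to authenticate the machine to the Tailnet.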

As much as I love Tailscale for its core functionality, there are a few really annoying problems I have with it. For starters, it's very annoying that there's no first-party authentication offered. I decided to sign up via my GitHub account, which is as close as I'll probably get to an account that I'll always have access to. (In fact, I kind of went out of my way to not sign up with my Google account. I've heard way too many horror stories of people getting locked out of their entire lives because some Google ML model misfired, detected a "security breach" or "malicious activity" on their account, and banned it.) The problem with signing up via GitHub, though, is that you can only have one user in your organization. This makes it kind of a pain when you want to share every device in the Tailnet with someone else. Thankfully, you can still share devices with users outside your organization, but it's on a device-by-device basis. So, when I want to share my Calibre library, Jellyfin server, Minecraft server, etc. with another person, I have to invite them to every single device one-by-one. A nitpick, but definitely annoying.

DNS

For my LAN DNS, I have my router point to PiHole running on gorgon, one of the Pi 3Bs. This blackholes DNS requests to domains blacklisted for hosting adware, and it works like an absolute charm. I recommend PiHole to anyone with a spare Raspberry Pi lying around who's annoyed by ads and wants a fun afternoon project. Tailscale also allows setting a VPN-specific DNS server for devices actively connected to the VPN. Since gorgon is running Tailscale, I've set its Tailnet IP as the Tailnet's DNS server.

I also have this domain registered via NameCheap, pointing to DigitalOcean's nameservers (this blog lives on DigitalOcean, and the A record is registered there). I originally wanted to set up BIND so that some records within the domain would be served by the internal homelab DNS server, while all the public sites (like this blog!) would keep their authoritative records public. (Apparently there's an exact name for what I'm doing, and it's called "split-horizon" DNS.) I don't quite remember why that gave me trouble, but I ended up splitting the difference and just registering all the internal DNS records on the public DigitalOcean DNS server. I don't think this is a huge mistake, since all the records point to Tailnet IP addresses and the hosts are still publicly inaccessible, but it's suboptimal for sure.
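For what it's worth, the BIND setup I abandoned would have looked something like the sketch below: two views serving different zone files for the same domain. The domain, network ranges, and file paths are all illustrative assumptions (100.64.0.0/10 is the CGNAT range Tailscale allocates addresses from):

```
// named.conf (fragment) -- split-horizon views for one domain
view "internal" {
    match-clients { 192.168.0.0/16; 100.64.0.0/10; };    // LAN + Tailnet clients
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com.internal";  // records with Tailnet IPs
    };
};
view "external" {
    match-clients { any; };                              // everyone else
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com.external";  // only the public records
    };
};
```

The key idea is that BIND answers the same query differently depending on which view the client's source address matches, so the Tailnet-only records never leave the internal view.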

Router

The router that I'm currently using is just the freebie one that my ISP included when I first signed up for service. I have to say, it really, really stinks. Probably the worst thing about it is that there's no local, browser-accessible web page that lets you twiddle the router's settings. Every single router I've used from the mid-2000s up through last year had a pretty utilitarian site available at whatever the gateway IP was. I'm not the biggest fan of GUIs for network configs, but it was always easy to update the SSID or network password or set a static IP address. Instead of having something like this available, The Powers That Be at my ISP decided to move all of this functionality into an ungodly mobile app.

I'm not a huge neo-Luddite when it comes to software (e.g., I don't think JavaScript is the worst, nor do I think we should RETVRN TO TRADITION and make every website look/feel like Craigslist), but this really does make me see the Luddites' point of view. We had a perfectly working solution with routers hosting a simple web page, and it was accessible by any device with a web browser! The mobile app almost has to be more expensive to develop!! It's strictly worse than the old thing, and it's so much buggier!!! Now, whenever I want to do the simplest thing, I have to grab my smartphone and suffer through a really terrible user experience. It takes about three dialog confirmations (all of which are basically, "Are you really sure you know what you're doing?") to change the DNS server the router points to or reserve a DHCP lease, which is a total pain when you're self-hosting things. It also implies that my ISP is able to configure my router programmatically from outside the network, which is kind of chilling. Two thumbs down; I'm gonna get rid of this thing ASAP.

Well, I think that's just about everything currently living in my homelab. I have my desktop and laptop running NixOS and MacOS with nix-darwin respectively, but they really only interact with everything else via the Tailnet with ssh or web clients.

Future Plans

There's a ton of stuff I want to do to improve the lab. In no particular order,

  • Physically move everything into a rack or other container. I'm sick of looking at Spaghetti Junction on the ground next to my desk. This also includes getting a patch panel, punching down cables, etc.
  • Install a better router. I even have an old spare lying in my closet from when I moved; I've just been too ~~lazy~~ hesitant to disrupt the big tangle of wires.
  • Perhaps investigate Kavita as an alternative to Calibre. It seems to be a little more focused on manga/comics than typical eBooks, but it also seems like the web interface and Docker image are much more fleshed out than Calibre's.
  • Set up Sonarr/Radarr/Readarr. I've heard great stuff about the *arr family of software, and it could automate some of the more annoying tasks demanded by self-hosting media (getting decent subtitles for movies/shows comes to mind).
  • Figure out the DNS situation. Like I mentioned before, it's probably not a huge deal that my Tailnet IPs are exposed on the Internet, but it still gives me the heebie-jeebies knowing that stuff is out there. I also want people to think I'm a very cool dude, so the possibility of some of those A records being publicly visible kind of compromises that.
  • Nixify all the things. Some of these RasPis (particularly gorgon, the one running PiHole) have been running unattended for years, and I'm terrified of what sorts of critical configs have been set and totally forgotten about. With NixOS, I can have everything nice and declarative and managed in the same space as my dotfiles. It'd also be nice to manage and deploy all the machines via nixops or morph or whatever is the standard now, but that seems like a pretty swiftly moving landscape.
  • Kubernetes is something I've been considering for a while. All the software (except for PiHole) runs in Docker containers spread among the tiny fleet of hosts, and it will only get more difficult to know what's running where (or to keep rigorous documentation of it). I would like to be able to treat all the hosts as a pool of free compute and just let k8s do the scheduling and distributed-systems stuff for me. I've also been meaning to dive deeper into k8s anyway; I've used EKS and managed Kubernetes instances in the past, but I feel like I've been missing out on a huge amount of skillz by not managing my own bare-metal cluster.

Well, that's it for this time.