IMPUC #1 · Homelab overview
Posted on 2020-01-05 by Yarmo Mackenbach
"In My Particular Use Case" (or IMPUC) is a series of short posts describing how I set up my personal homelab, what worked, what failed, and which techniques I was eventually able to transfer to an academic setting for my PhD work.
Why a homelab?
I started my homelab about a year into my PhD. My academic work was technically challenging: new data was generated every day, and I had to manage raw data, processed data and metadata. I built a number of tools to aid me in my daily work, but I needed a place to just try out every technology I could possibly need for my job. The homelab eventually turned out to be destined for far greater things than simply serving as a testbed, but that's how it started, and it gave me the knowledge and experience to solve important issues in my academic work.
The central server
So one day, I ordered myself an Intel NUC with a 5th generation i3 processor, 8 GB of RAM and an M.2 drive, and got started. Container solutions had caught my attention before I even had the machine, so I first installed Docker and, later, docker-compose. This setup hasn't changed to this day, as it still allows me to launch new services very easily by changing a single yaml file, with minimal impact on the host machine. The first things I installed were several databases and Gitea, a self-hosted git service. The services sit behind a reverse proxy (Traefik) so that they can be accessed via (sub)domains. Configuration of the machine is managed through a folder of dotfiles backed up in a git repo and stowed as necessary, though I am currently looking into Ansible for this purpose. A 4-bay JBOD USB3 enclosure provides the storage, which the NUC then (partly) makes available over the local network via SMB.
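To sketch what "changing a single yaml file" looks like in practice, here is a minimal docker-compose service definition for Gitea behind Traefik. This is an illustrative example, not my actual config: the network name, the domain and the Traefik v2-style labels are assumptions.

```yaml
# docker-compose.yml (fragment) — hypothetical example
version: "3.7"

services:
  gitea:
    image: gitea/gitea:latest
    restart: unless-stopped
    volumes:
      - ./gitea/data:/data
    networks:
      - web
    labels:
      # Traefik v2-style labels: route a subdomain to this container
      - "traefik.enable=true"
      - "traefik.http.routers.gitea.rule=Host(`git.example.com`)"
      - "traefik.http.services.gitea.loadbalancer.server.port=3000"

networks:
  web:
    external: true
```

Adding a new service is then a matter of appending another block like this one and running `docker-compose up -d`; Traefik picks up the labels and starts routing the subdomain without touching the rest of the host.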
The peripheral Pis
Floating around the central server are several Raspberry Pis. Back when I first started, the central server would sometimes crash or soft-lock, and since my entire monitoring stack (Telegraf + InfluxDB + Grafana) was also installed on it, there was not a lot of investigating and fixing I could immediately do. Now, the central server and the Pis all run Telegraf, and a single Pi hosts the InfluxDB + Grafana stack and nothing else. Another Pi acts as a media center (Kodi), and finally, two redundant Pis function as DNS forwarders (Pi-hole), one of which also hosts my VPN solution (WireGuard).
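To sketch the split described above: every machine runs Telegraf and ships its metrics to the dedicated monitoring Pi instead of storing them locally, so a crash of the central server no longer takes the dashboards down with it. A minimal telegraf.conf fragment might look like this; the hostname and database name are assumptions for illustration.

```toml
# /etc/telegraf/telegraf.conf (fragment) — illustrative only
# Deployed on every host: central server and Pis alike all push
# their metrics to the one Pi running InfluxDB + Grafana.
[[outputs.influxdb]]
  urls = ["http://monitoring-pi.lan:8086"]  # hypothetical hostname
  database = "telegraf"

# A few common input plugins for basic system monitoring
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.mem]]

[[inputs.disk]]
```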
The out-of-house computing
I have two permanent VPSes running: a website server (Cloudways) and a mail server (mailcow). Both could be hosted on the central server, but as long as I can guarantee neither a perfectly stable Internet connection (which my house does not have) nor stable computing (a personal budget issue), I choose to host these outside the house.
Thanks for reading; more posts will come soon, explaining some of the elements described above in more depth. If you have questions, you can find several ways to contact me on yarmo.eu.