Homelab Bitácora
Running log of my homelab setup — from bare metal to self-hosted everything.
The hardware is selected and purchased. Now it’s time to make it do things. This is the running log of everything that happens from here — installing the OS, configuring storage, setting up apps, breaking things, fixing things.
Building the machine
Not much to say here. Motherboard, RAM, drives, case, cables — plug everything in and hope for no dead-on-arrival surprises. I didn’t get any. If you need the parts list, it’s all in the hardware bench note.
The interesting part starts now.
TrueNAS installation
I went with TrueNAS SCALE, which is the Linux-based version (as opposed to TrueNAS CORE, which runs on FreeBSD). SCALE is where iXsystems is putting their energy these days, and it supports Docker containers natively, which I’ll need later for Jellyfin and the *arr stack.
Creating the boot drive
Downloaded TrueNAS SCALE 24.10.2.1 from the official site and flashed it onto a USB stick using balenaEtcher. Nothing fancy — select the ISO, select the drive, flash. The USB stick becomes the installation media, not the final boot drive.
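If you'd rather skip balenaEtcher, the same flash works from a terminal. A sketch — the ISO filename and the `/dev/sdX` device name are assumptions; check with `lsblk` first, because `dd` to the wrong device destroys it:

```shell
# Identify the USB stick first -- writing to the wrong device wipes it
lsblk

# Write the installer image raw to the stick (device name is a placeholder)
sudo dd if=TrueNAS-SCALE-24.10.2.1.iso of=/dev/sdX bs=4M status=progress conv=fsync
```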
Installing the OS
Plugged the USB into the NAS, connected a monitor and keyboard, booted from USB. The TrueNAS installer walks you through it. I installed it onto the 128GB SATA SSD (the boot drive from the hardware list). Straightforward — pick the drive, confirm, wait.
First login
Once installed, I removed the USB stick and rebooted. TrueNAS shows you the machine’s IP address on the console. From there, it’s all web UI — open a browser on any computer in the same network and go to that IP.
I set up the admin password during installation and logged in. The web dashboard loaded up fine.
The default admin username is truenas_admin. Not admin, not root — truenas_admin. It’s an odd choice and I spent a minute confused before reading the docs. Just know it going in.
First round of housekeeping
Before doing anything real, some baseline configuration. All of this lives under System > General in the web UI:
- Timezone — changed to UTC. Keeps logs consistent and avoids daylight saving headaches.
- Hostname — renamed from the default to something recognizable on the network.
- 2FA — activated. It’s a machine sitting on my home network with all my data. No excuse to skip this.
- SSH — enabled the service and set it to start automatically on boot. I want to be able to `ssh` into the box without having to toggle it on from the web UI every time.
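TrueNAS SCALE is Debian-based, so once SSH is on, the housekeeping above can be sanity-checked from a shell with standard Linux tools (the service name `ssh` is the Debian default — an assumption, since TrueNAS manages it through its own middleware):

```shell
timedatectl | grep "Time zone"   # should report UTC
hostname                         # the new hostname
systemctl is-enabled ssh         # "enabled" = starts on boot
```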
Small stuff, but the kind of small stuff you forget and then wonder why your logs have the wrong timestamps three months later.
Storage
ZFS time. I have two drives to work with: the 8TB IronWolf for actual data and the 1TB NVMe for apps. Each gets its own pool.
Creating the pools
Two pools, two purposes:
- `tank` — the 8TB IronWolf. This is where everything lives: media, photos, books, configs. Single-disk stripe for now (no redundancy, I know — it’s a first iteration). Compression set to LZ4 by default, which is basically free.
- `apps` — the 1TB NVMe. Fast storage for app data, containers, databases. Anything that benefits from speed over capacity.
Both created via Storage > Create Pool. Pick the disk, name the pool, accept the defaults, done.
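For the record, what Create Pool does is roughly this at the CLI — disk names are placeholders, and TrueNAS sets additional bookkeeping properties of its own, so the UI remains the right way to do it:

```shell
# Single-disk pool on the IronWolf (no redundancy), LZ4 compression
zpool create -O compression=lz4 tank /dev/sda

# Fast pool on the NVMe for app data
zpool create -O compression=lz4 apps /dev/nvme0n1
```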
After creating the apps pool, I went to Apps > Settings > Choose Pool and pointed TrueNAS at it. This is where all app containers and their configs will live.
Datasets
Inside tank, I created the following datasets via Datasets > tank > Add Dataset:
| Dataset | Purpose |
|---|---|
| media | Movies, TV shows, music — the whole reason this project exists |
| photos | Family photo archive |
| books | eBooks and audiobooks |
| plex-config | Plex/Jellyfin configuration (keeping it separate from media) |
| nextcloud | Cloud storage, if I get there |
| apps | Overflow app data that doesn’t fit the NVMe |
| home | Home directories for local users |
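The same datasets could be created from the shell in one pass — the CLI equivalent of clicking through Add Dataset, assuming the tank pool already exists:

```shell
for ds in media photos books plex-config nextcloud apps home; do
  zfs create "tank/${ds}"
done
zfs list -r tank   # confirm they all exist
```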
No encryption. The box is sitting in my house — if someone has physical access to the drives, I have bigger problems. The tradeoff of unlocking on every reboot wasn’t worth it for my setup.
Verification
Quick sanity check from the shell:
```shell
zpool status   # two healthy pools
zfs list       # all datasets visible
```
Both returned exactly what I expected. Moving on.
User setup
First order of business: stop using truenas_admin. I created a proper user for myself under Credentials > Local Users:
- Username: `ignacio`
- Home directory: `/mnt/tank/home/ignacio`
- Full admin privileges
That’s it. One user, full control. Other people who’ll use the homelab (family, friends) will get their accounts inside the individual apps — Jellyfin, Nextcloud, etc. They don’t need TrueNAS-level access.
Verified SSH works with the new user and I can now forget truenas_admin ever existed. Good riddance.
Remote access
The goal here was to reach the NAS from outside my home network. I looked at two approaches: Tailscale (VPN mesh) and a proper domain setup with Nginx Proxy Manager + Cloudflare. Spoiler: I went with just Tailscale.
Tailscale
Tailscale creates a private network between your devices. Each device gets a 100.x.y.z address and can reach the others from anywhere — no port forwarding, no DNS records, no certificates. The free plan covers what I need.
The setup was less straightforward than expected. You can’t just SSH in and curl | sh — TrueNAS has Tailscale as an app in the catalog. But to install it, you need an Auth Key, and to get one of those you need at least two devices already on your Tailscale account. So the actual order was:
1. Install Tailscale on my phone and laptop first, creating the account along the way
2. Generate an Auth Key from the Tailscale admin console
3. Install the Tailscale app on TrueNAS via Apps > Discover Apps, pasting the Auth Key in the settings
Once all three devices were on the tailnet, I could open the TrueNAS web UI from my phone on mobile data. Done.
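Once the tailnet is up, the Tailscale CLI on any node shows what's reachable — handy for grabbing the NAS's 100.x address without opening the admin console:

```shell
tailscale status   # lists each device with its 100.x.y.z address
tailscale ip -4    # this machine's own tailnet IPv4
```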
I should have set up a password manager before this step. The Auth Key is the kind of thing you want stored properly, not scribbled in a notes app. Oh well — next time.
The domain access rabbit hole
I also tried setting up public domain access: Cloudflare DNS records, Nginx Proxy Manager for reverse proxying and SSL, port forwarding on the router. Nginx Proxy Manager installed fine from the TrueNAS app catalog and worked out of the box — no config needed.
But then came the rest. DDNS to keep Cloudflare updated when my home IP changes (a custom script running on cron — no thanks, that’s an unmaintained liability waiting to happen). Cloudflare Tunnel as a cleaner alternative (more complexity for a problem I wasn’t sure I had yet). Router port forwarding (every ISP makes this a different flavor of painful).
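For context, the DDNS cron script I decided not to maintain would look something like this — a sketch against Cloudflare's v4 API, where the zone ID, record ID, token, and hostname are all placeholders:

```shell
#!/bin/sh
# Update a Cloudflare A record with the current public IP (sketch, never deployed).
# ZONE_ID, RECORD_ID, and CF_API_TOKEN are placeholders for real credentials.
CURRENT_IP=$(curl -s https://ifconfig.me)
curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"nas.example.com\",\"content\":\"${CURRENT_IP}\",\"ttl\":300}"
```

Exactly the kind of thing that works silently for months and then breaks silently for months.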
I stepped back and asked myself: what do I actually need right now? Remote access to the NAS and its apps. Tailscale already gives me that. The domain setup is nicer — pretty URLs, shareable links, proper SSL — but it’s not necessary for a homelab I’m the primary user of.
Deleted Nginx Proxy Manager. Tabled the whole thing. If I ever need to share a service publicly or get tired of 100.x.y.z addresses, I’ll revisit. For now, Tailscale carries.
Vaultwarden
First real app. Vaultwarden is a lightweight, self-hosted implementation of the Bitwarden password manager. It runs in Docker and barely uses any resources. Perfect first service — small, useful, and I was already annoyed at myself for not having a proper password manager when I set up Tailscale.
Installation
Vaultwarden is available in the TrueNAS app catalog, so no Docker Compose needed. Apps > Discover Apps, search for Vaultwarden, install. The important part is storage configuration.
The install screen gives you two storage sections — one for Vaultwarden data, one for Postgres (the database). For both:
- Type: Host Path
- Vaultwarden Data: `/mnt/tank/vaultwarden/data`
- Postgres Data: `/mnt/tank/vaultwarden/postgres`
I put both on tank rather than the NVMe apps pool. Passwords are irreplaceable — they belong on the storage drive, not the fast-but-expendable one.
The permissions dance
This is where it got slightly annoying. You need to create the directories and set permissions via SSH before the app can start:
```shell
mkdir -p /mnt/tank/vaultwarden/data /mnt/tank/vaultwarden/postgres
```
Then in the app’s storage config, tick Automatic Permissions on both storage entries. This lets TrueNAS set the correct ownership for the container’s user.
Without it, the container can’t write to the data directory and crashes immediately with a PermissionDenied error. I also had to nuke the Postgres directory (rm -rf /mnt/tank/vaultwarden/postgres/*) and restart after a failed first attempt left corrupted state behind. Fresh install, no data lost — but mildly irritating.
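The manual equivalent of Automatic Permissions, for the record — TrueNAS SCALE apps typically run as the built-in apps user, UID 568 (an assumption worth verifying in the app's settings before chowning):

```shell
# Hand the directories to the apps user so the containers can write to them
chown -R 568:568 /mnt/tank/vaultwarden/data /mnt/tank/vaultwarden/postgres
chmod -R 770 /mnt/tank/vaultwarden/data /mnt/tank/vaultwarden/postgres
```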
In hindsight, the whole thing is a two-minute fix. But the first time around, debugging container permission errors when you’re new to TrueNAS apps feels like more friction than it should be.
Account creation and lockdown
Creating a user through the web UI didn’t work — I had to use the admin panel (accessible at /admin with the admin token set during install). Once in there, account creation went fine.
After that, I disabled signups so nobody else can register. The vault is mine and mine alone.
The Bitwarden browser extension needs a proper HTTPS domain to connect to the server — 100.x.y.z doesn’t cut it. So I’ll need to revisit the Cloudflare Tunnel or Nginx Proxy Manager setup at some point. Not today, but now I have a real reason to do it.
Vaultwarden is running and usable. Moving on.
What’s next
Pending items I’ll get to as the homelab grows:
- Secure the Vaultwarden admin token — the admin panel currently uses a plain text token. Need to generate an Argon2 PHC hash instead.
- Set up domain access — Cloudflare Tunnel or Nginx Proxy Manager. Required for the Bitwarden browser extension to work (needs proper HTTPS).
- Connect the Bitwarden browser extension — depends on domain access being sorted first.
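On the admin token item: Vaultwarden accepts an Argon2 PHC string in place of the plain text token. One way to generate it, assuming the `argon2` CLI is installed — the parameters follow what the Vaultwarden docs suggest, so check them before relying on these exact values:

```shell
# Hash the admin token with a random salt; paste the PHC output into ADMIN_TOKEN
echo -n "your-admin-token-here" | argon2 "$(openssl rand -base64 32)" -e -id -k 65540 -t 3 -p 4
```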