Homelab Bitacora

Running log of my homelab setup — from bare metal to self-hosted everything.

The hardware is selected and purchased. Now it’s time to make it do things. This is the running log of everything that happens from here — installing the OS, configuring storage, setting up apps, breaking things, fixing things.

Building the machine

Not much to say here. Motherboard, RAM, drives, case, cables — plug everything in and hope for no dead-on-arrival surprises. I didn’t get any. If you need the parts list, it’s all in the hardware bench note.

The interesting part starts now.

TrueNAS installation

I went with TrueNAS SCALE, which is the Linux-based version (as opposed to TrueNAS CORE, which runs on FreeBSD). SCALE is where iXsystems is putting their energy these days, and it supports Docker containers natively, which I’ll need later for Jellyfin and the *arr stack.

Creating the boot drive

Downloaded TrueNAS SCALE 24.10.2.1 from the official site and flashed it onto a USB stick using balenaEtcher. Nothing fancy — select the ISO, select the drive, flash. The USB stick becomes the installation media, not the final boot drive.

Installing the OS

Plugged the USB into the NAS, connected a monitor and keyboard, booted from USB. The TrueNAS installer walks you through it. I installed it onto the 128GB SATA SSD (the boot drive from the hardware list). Straightforward — pick the drive, confirm, wait.

First login

Once installed, I removed the USB stick and rebooted. TrueNAS shows you the machine’s IP address on the console. From there, it’s all web UI — open a browser on any computer on the same network and go to that IP.

I set up the admin password during installation and logged in. The web dashboard loaded up fine.

The admin username

The default admin username is truenas_admin. Not admin, not root: truenas_admin. It’s an odd choice and I spent a minute confused before reading the docs. Just know it going in.

First round of housekeeping

Before doing anything real, some baseline configuration. All of this lives under System > General in the web UI:

  • Timezone — changed to UTC. Keeps logs consistent and avoids daylight saving headaches.
  • Hostname — renamed from the default to something recognizable on the network.
  • 2FA — activated. It’s a machine sitting on my home network with all my data. No excuse to skip this.
  • SSH — enabled the service and set it to start automatically on boot. I want to be able to ssh into the box without having to toggle it on from the web UI every time.

Small stuff, but the kind of small stuff you forget and then wonder why your logs have the wrong timestamps three months later.

Storage

ZFS time. I have two drives to work with: the 8TB IronWolf for actual data and the 1TB NVMe for apps. Each gets its own pool.

Creating the pools

Two pools, two purposes:

  • tank — the 8TB IronWolf. This is where everything lives: media, photos, books, configs. Single-disk stripe for now (no redundancy, I know — it’s a first iteration). Compression set to LZ4 by default, which is basically free.
  • apps — the 1TB NVMe. Fast storage for app data, containers, databases. Anything that benefits from speed over capacity.

Both created via Storage > Create Pool. Pick the disk, name the pool, accept the defaults, done.

After creating the apps pool, I went to Apps > Settings > Choose Pool and pointed TrueNAS at it. This is where all app containers and their configs will live.

Datasets

Inside tank, I created the following datasets via Datasets > tank > Add Dataset:

  • media — Movies, TV shows, music — the whole reason this project exists
  • photos — Family photo archive
  • books — eBooks and audiobooks
  • plex-config — Plex/Jellyfin configuration (keeping it separate from media)
  • nextcloud — Cloud storage, if I get there
  • apps — Overflow app data that doesn’t fit the NVMe
  • home — Home directories for local users

No encryption. The box is sitting in my house — if someone has physical access to the drives, I have bigger problems. The tradeoff of unlocking on every reboot wasn’t worth it for my setup.

Verification

Quick sanity check from the shell:

zpool status  # two healthy pools
zfs list      # all datasets visible

Both returned exactly what I expected. Moving on.

User setup

First order of business: stop using truenas_admin. I created a proper user for myself under Credentials > Local Users:

  • Username: ignacio
  • Home directory: /mnt/tank/home/ignacio
  • Full admin privileges

That’s it. One user, full control. Other people who’ll use the homelab (family, friends) will get their accounts inside the individual apps — Jellyfin, Nextcloud, etc. They don’t need TrueNAS-level access.

Verified SSH works with the new user and I can now forget truenas_admin ever existed. Good riddance.

Remote access

The goal here was to reach the NAS from outside my home network. I looked at two approaches: Tailscale (VPN mesh) and a proper domain setup with Nginx Proxy Manager + Cloudflare. Spoiler: I went with just Tailscale.

Tailscale

Tailscale creates a private network between your devices. Each device gets a 100.x.y.z address and can reach the others from anywhere — no port forwarding, no DNS records, no certificates. The free plan covers what I need.
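Those 100.x.y.z addresses come from the CGNAT block 100.64.0.0/10, which Tailscale reserves for tailnet devices. A quick stdlib check makes the pattern concrete — `is_tailnet_ip` is a hypothetical helper for illustration, not part of Tailscale:

```python
from ipaddress import ip_address, ip_network

# Tailscale assigns device addresses out of the CGNAT range
# 100.64.0.0/10, which is where "100.x.y.z" comes from.
TAILSCALE_RANGE = ip_network("100.64.0.0/10")

def is_tailnet_ip(addr: str) -> bool:
    """Return True if addr falls inside Tailscale's CGNAT range."""
    return ip_address(addr) in TAILSCALE_RANGE

print(is_tailnet_ip("100.101.102.103"))  # True: a tailnet address
print(is_tailnet_ip("192.168.1.50"))     # False: ordinary LAN address
```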

The setup was less straightforward than expected. You can’t just SSH in and curl | sh — TrueNAS has Tailscale as an app in the catalog. But to install it, you need an Auth Key, and to get one of those you need at least two devices already on your Tailscale account. So the actual order was:

  1. Install Tailscale on my phone and laptop first, creating the account along the way
  2. Generate an Auth Key from the Tailscale admin console
  3. Install the Tailscale app on TrueNAS via Apps > Discover Apps, pasting the Auth Key in the settings

Once all three devices were on the tailnet, I could open the TrueNAS web UI from my phone on mobile data. Done.

Hindsight

I should have set up a password manager before this step. The Auth Key is the kind of thing you want stored properly, not scribbled in a notes app. Oh well — next time.

The domain access rabbit hole

I also tried setting up public domain access: Cloudflare DNS records, Nginx Proxy Manager for reverse proxying and SSL, port forwarding on the router. Nginx Proxy Manager installed fine from the TrueNAS app catalog and worked out of the box — no config needed.

But then came the rest. DDNS to keep Cloudflare updated when my home IP changes (a custom script running on cron — no thanks, that’s an unmaintained liability waiting to happen). Cloudflare Tunnel as a cleaner alternative (more complexity for a problem I wasn’t sure I had yet). Router port forwarding (every ISP makes this a different flavor of painful).

I stepped back and asked myself: what do I actually need right now? Remote access to the NAS and its apps. Tailscale already gives me that. The domain setup is nicer — pretty URLs, shareable links, proper SSL — but it’s not necessary for a homelab I’m the primary user of.

Deleted Nginx Proxy Manager. Tabled the whole thing. If I ever need to share a service publicly or get tired of 100.x.y.z addresses, I’ll revisit. For now, Tailscale carries.

Vaultwarden

First real app. Vaultwarden is a lightweight, self-hosted implementation of the Bitwarden password manager. It runs in Docker and barely uses any resources. Perfect first service — small, useful, and I was already annoyed at myself for not having a proper password manager when I set up Tailscale.

Installation

Vaultwarden is available in the TrueNAS app catalog, so no Docker Compose needed. Apps > Discover Apps, search for Vaultwarden, install. The important part is storage configuration.

The install screen gives you two storage sections — one for Vaultwarden data, one for Postgres (the database). For both:

  • Type: Host Path
  • Vaultwarden Data: /mnt/tank/vaultwarden/data
  • Postgres Data: /mnt/tank/vaultwarden/postgres

I put both on tank rather than the NVMe apps pool. Passwords are irreplaceable — they belong on the storage drive, not the fast-but-expendable one.

The permissions dance

This is where it got slightly annoying. You need to create the directories and set permissions via SSH before the app can start:

mkdir -p /mnt/tank/vaultwarden/data /mnt/tank/vaultwarden/postgres

Then in the app’s storage config, tick Automatic Permissions on both storage entries. This lets TrueNAS set the correct ownership for the container’s user.

Don't skip Automatic Permissions

Without it, the container can’t write to the data directory and crashes immediately with a PermissionDenied error. I also had to nuke the Postgres directory (rm -rf /mnt/tank/vaultwarden/postgres/*) and restart after a failed first attempt left corrupted state behind. Fresh install, no data lost — but mildly irritating.

In hindsight, the whole thing is a two-minute fix. But the first time around, debugging container permission errors when you’re new to TrueNAS apps feels like more friction than it should be.

Account creation and lockdown

Creating a user through the web UI didn’t work — I had to use the admin panel (accessible at /admin with the admin token set during install). Once in there, account creation went fine.

After that, I disabled signups so nobody else can register. The vault is mine and mine alone.

Vaultwarden is running and usable. Moving on.

Cloudflare Tunnel

Remember the domain access rabbit hole from earlier? Time to revisit. The reason is practical: the Bitwarden browser extension needs a proper HTTPS domain to connect to the server. Tailscale’s 100.x.y.z addresses don’t cut it.

This time I went straight to Cloudflare Tunnel — the option I’d dismissed as “more complexity” the first time around. Turns out it’s actually the cleanest solution. No port forwarding, no DDNS scripts, no Nginx Proxy Manager. The tunnel runs from inside your network out to Cloudflare, so nothing needs to be opened on the router.

Setting up the connector

The Cloudflare dashboard has been reorganized since I last looked. Tunnels now live under Networks → Connectors in the Zero Trust dashboard. Created a new connector, chose Cloudflared as the type, named it, and grabbed the tunnel token.

First thing I did with that token? Stored it in Vaultwarden. Already paying for itself.

Deploying cloudflared on TrueNAS

There’s a cloudflared app in the TrueNAS catalog — no custom Docker setup needed. Installed it, pasted the tunnel token in the config, and deployed. Back on the Cloudflare dashboard, the connector status flipped to Healthy almost immediately.

Exposing Vaultwarden

In the connector settings, I added a public hostname:

  • Subdomain: vault
  • Domain: my domain
  • Service type: HTTPS
  • URL: <NAS-local-IP>:<port>
  • TLS: No TLS Verify enabled

The service type tripped me up initially. I started with HTTP and got a 502 Bad Gateway — the tunnel reached Cloudflare fine but couldn’t connect to Vaultwarden. The issue was that Vaultwarden was running with HTTPS (self-signed cert from the initial setup), so the tunnel needed to speak HTTPS to it. Switching the service type and enabling No TLS Verify (since the cert is self-signed) fixed it.

On internal HTTPS

Is running a self-signed cert internally with No TLS Verify ideal? Not really. But it doesn’t matter much. The traffic flow is:

User → Cloudflare (valid public cert) → encrypted tunnel → NAS

The tunnel connection itself is already encrypted. Running plain HTTP internally is the standard approach for Cloudflare Tunnel setups. I could switch Vaultwarden back to HTTP and simplify things, or use a Cloudflare Origin Certificate if I wanted HTTPS on the internal leg. For a homelab, it’s unnecessary complexity. What I have works.

vault.tinkerer.tools loads the Vaultwarden login page from anywhere. No VPN needed for this one.

AdGuard Home — considered, deferred

Next on the list was AdGuard Home for network-wide ad blocking. I thought about it but decided to skip it for now.

The concern: I work from home, and my work laptop likely runs a VPN and corporate security tools. Setting AdGuard as the router-level DNS would route everything through it, and corporate MDM/SSO/endpoint agents can break if their telemetry or management domains get blocked. The safe approach is per-device DNS — leave the router untouched and manually point each personal device at AdGuard. That works, but it’s extra setup, and uBlock Origin in the browser already covers most ad blocking without any network-level complexity.

AdGuard Home becomes more compelling when you want blocking on devices that don’t support browser extensions — smart TVs, phone apps, IoT. Setting DNS on a smart TV is apparently just one field in network settings, so not as painful as I assumed. But there’s no urgent need right now.

Media stack — planning

The big one. The whole reason this NAS exists: automated media management. Here’s what the full stack looks like:

  • Jellyfin — Media server
  • Sonarr — TV shows + anime
  • Radarr — Movies
  • Prowlarr — Indexer manager for Sonarr/Radarr
  • qBittorrent — Torrent client
  • Bazarr — Automated subtitles

Jellyfin over Plex

I originally planned to use Plex, but switched to Jellyfin. The reason is hardware transcoding — the N355’s Intel QuickSync iGPU can handle it, but Plex locks that behind a Plex Pass subscription. Jellyfin does it for free. It’s also fully self-hosted with no account required, which fits the homelab philosophy better. Direct play on the LAN works fine either way, but remote streaming without HW transcoding would struggle.

The VPN question

Torrenting without a VPN on a Spanish ISP is not an option. This one took some research.

ProtonVPN free was out immediately — the free tier doesn’t allow P2P traffic.

Mullvad looked perfect at first: €5/month, no account needed, no email, WireGuard config that’s trivially easy to wire up with Gluetun in Docker. The homelab community loves it. But then I found out Mullvad removed port forwarding in July 2023. Without port forwarding, you can download fine but your seeding ability tanks — you’re not connectable to other peers. For anyone on private trackers where ratio matters, it’s basically a non-starter. On public trackers it’s less critical but still hurts.

Spanish legal context added another wrinkle. A court in Córdoba ordered ProtonVPN and NordVPN to block access to 16 pirate LaLiga streaming sites. Mullvad wasn’t named. But digging into it, the order is specifically about blocking streaming IPs — it has nothing to do with logging torrent users or degrading VPN service. Both providers still work fine from Spain for P2P.

ProtonVPN paid ended up being the right answer. It supports P2P with port forwarding via NAT-PMP, works with Gluetun natively, and the homelab community has well-documented Docker setups for exactly the ProtonVPN + Gluetun + qBittorrent combination. Several people migrated from Mullvad to ProtonVPN specifically after the port forwarding removal.

I went with the 24-month plan at €2.99/month (€71.76 total — less than one IronWolf drive). Credentials stored in Vaultwarden, calendar reminder set for renewal. The OpenVPN/IKEv2 username and password (different from the login credentials) are saved too — Gluetun will need those.

Folder structure

Before deploying anything, I set up the media directory tree on tank:

sudo mkdir -p /mnt/tank/media/{movies,tv,anime}
sudo mkdir -p /mnt/tank/media/downloads/{complete,incomplete}

The sudo is needed because TrueNAS datasets are root-owned by default. Ownership gets sorted out in the UID 568 section below.

The key principle: everything lives under one /mnt/tank/media tree. When Sonarr or Radarr “move” a finished download to the library folder, it’s an instant hardlink on the same ZFS dataset rather than a slow copy between pools.

  • media/movies — Radarr drops finished films here
  • media/tv — Sonarr drops TV shows here
  • media/anime — Sonarr handles anime here (via tags or a separate instance)
  • media/downloads/complete — qBittorrent moves finished downloads here
  • media/downloads/incomplete — Active downloads in progress
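The instant “move” can be sanity-checked with a short sketch: a hardlink is just a second directory entry for the same inode, so no file data is copied when both paths sit on one filesystem. `import_via_hardlink` below is illustrative, mimicking the import behavior rather than calling any *arr code:

```python
import os
import tempfile

def import_via_hardlink(src: str, dst: str) -> bool:
    """Mimic what Radarr/Sonarr do on import when the download folder
    and the library live on the same filesystem: hardlink, not copy.
    Returns True when both paths share one inode (zero bytes moved)."""
    os.link(src, dst)  # fails with EXDEV if src/dst are on different filesystems
    return os.stat(src).st_ino == os.stat(dst).st_ino

# Quick demo in a throwaway directory.
with tempfile.TemporaryDirectory() as root:
    src = os.path.join(root, "film.mkv")
    with open(src, "wb") as f:
        f.write(b"x" * 1024)
    dst = os.path.join(root, "film-in-library.mkv")
    print(import_via_hardlink(src, dst))  # True: the "move" is instant
```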

Gluetun + qBittorrent

These two had to be deployed together — qBittorrent routes all its traffic through Gluetun’s VPN tunnel, so they share a network stack. TrueNAS 24.10 has experimental Docker Compose support in the Custom App screen, which was perfect for this.

Here’s the final compose file (after several rounds of debugging — see below):

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - "8080:8080"
    volumes:
      - /mnt/tank/apps/gluetun:/gluetun
    environment:
      TZ: Atlantic/Canary
      VPN_SERVICE_PROVIDER: protonvpn
      VPN_TYPE: wireguard
      WIREGUARD_PRIVATE_KEY: <private-key>
      SERVER_COUNTRIES: United States
      VPN_PORT_FORWARDING: "on"
      PORT_FORWARD_ONLY: "on"
      VPN_PORT_FORWARDING_UP_COMMAND: >-
        /bin/sh -c 'wget -O- -nv --retry-connrefused --waitretry=1 --tries=15
        --post-data "json={\"listen_port\":{{PORT}},\"random_port\":false,\"upnp\":false}"
        http://127.0.0.1:8080/api/v2/app/setPreferences'
      VPN_PORT_FORWARDING_DOWN_COMMAND: >-
        /bin/sh -c 'wget -O- -nv --retry-connrefused --waitretry=1 --tries=15
        --post-data "json={\"listen_port\":0}"
        http://127.0.0.1:8080/api/v2/app/setPreferences'
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    depends_on:
      - gluetun
    network_mode: "service:gluetun"
    environment:
      PUID: "568"
      PGID: "568"
      TZ: Atlantic/Canary
      WEBUI_PORT: "8080"
    volumes:
      - /mnt/tank/apps/qbittorrent:/config
      - /mnt/tank/media:/media
    restart: unless-stopped

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    environment:
      PUID: "568"
      PGID: "568"
      TZ: Atlantic/Canary
    ports:
      - "7878:7878"
    volumes:
      - /mnt/tank/apps/radarr:/config
      - /mnt/tank/media:/media
    restart: unless-stopped

The key design: network_mode: service:gluetun means qBittorrent’s traffic never touches the NAS’s real IP — everything goes through the VPN tunnel. Radarr is also in the compose to guarantee consistent volume mounts (more on why in the debugging section). A few things worth noting:

  • WireGuard over OpenVPN — the initial setup used OpenVPN, but it was unstable (see debugging below). Switched to WireGuard, which only needs the private key from the generated config.
  • Auto port update — Gluetun’s VPN_PORT_FORWARDING_UP_COMMAND calls qBittorrent’s API to set the listen port whenever the forwarded port changes. This keeps seeding healthy without manual intervention.
  • Map-form environment — the environment section uses KEY: value syntax instead of - KEY=value. The list form caused YAML parsing failures because the long shell commands contain colons that YAML interpreted as key-value separators.
  • Bypass auth for localhost — qBittorrent requires authentication by default, even from localhost. Enable Bypass authentication for clients on localhost in qBittorrent’s Web UI settings so Gluetun’s port-forwarding command can reach the API.

The UID 568 problem

TrueNAS catalog apps don’t run as UID 1000 — they run as UID 568. This means any app installed from the catalog can’t write to directories owned by your regular user. The compose file needed updating too (PUID/PGID changed from 1000 to 568, reflected in the YAML above).

The fix is to make everything consistent:

sudo chown -R 568:568 /mnt/tank/media /mnt/tank/apps

One user across the entire stack — catalog apps and compose containers alike. No permission conflicts.
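To double-check the chown actually took effect everywhere, a small script can walk the tree and collect every distinct owner. `mixed_owners` is a hypothetical helper for illustration; on the NAS the healthy result would be a single-element set:

```python
import os

def mixed_owners(root: str) -> set[int]:
    """Walk a directory tree and collect every distinct owning UID.
    After the recursive chown above, a consistent stack should
    return exactly one UID (568 on the NAS)."""
    uids = {os.stat(root).st_uid}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            uids.add(os.lstat(os.path.join(dirpath, name)).st_uid)
    return uids

# e.g. on the NAS: mixed_owners("/mnt/tank/media") should be {568}
```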

Installing the rest

All five remaining apps came from the TrueNAS app catalog. The setup pattern is the same for each: config storage as a Host Path under /mnt/tank/apps/<appname>, plus /mnt/tank/media mounted as additional storage at /media so they can all see the media tree.

Jellyfin needed a bit more attention:

  • Separate directories for config, cache, and transcode storage
  • GPU passthrough enabled — this gives Jellyfin access to the N355’s Intel QuickSync iGPU for hardware transcoding
  • Host networking — makes DLNA discovery work on the LAN

Here’s what everything ended up on:

  • Prowlarr — 30050
  • Sonarr — 30113
  • Radarr — 30025
  • Bazarr — 30046
  • Jellyfin — 30013
  • qBittorrent — 8080

Movistar DNS blocking

First roadblock: Prowlarr couldn’t resolve any indexer domains. Adding 1337x gave a “Name does not resolve” error. Running nslookup 1337x.to from inside the container confirmed it — Docker’s internal DNS was forwarding to the host’s DNS, which is Movistar’s, and Movistar blocks torrent site domains at the DNS level.

The fix: tell Docker to use Cloudflare DNS instead. Merge "dns": ["1.1.1.1", "8.8.8.8"] into /etc/docker/daemon.json (don’t overwrite the file — TrueNAS manages it and it has other important config), then restart Docker.
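Hand-editing JSON as root is an easy place to clobber a key, so the merge can be scripted. `merge_dns` below is an illustrative sketch, assuming a writable daemon.json path; on the NAS you’d run it as root against /etc/docker/daemon.json and restart Docker afterwards:

```python
import json

def merge_dns(daemon_json_path: str, dns=("1.1.1.1", "8.8.8.8")) -> dict:
    """Add (or replace) the "dns" key in Docker's daemon.json without
    clobbering whatever other configuration is already in the file."""
    try:
        with open(daemon_json_path) as f:
            config = json.load(f)
    except FileNotFoundError:
        config = {}  # no daemon.json yet; start fresh
    config["dns"] = list(dns)
    with open(daemon_json_path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# On the NAS (as root): merge_dns("/etc/docker/daemon.json"),
# then restart the Docker service for the change to take effect.
```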

TrueNAS may overwrite daemon.json

TrueNAS manages /etc/docker/daemon.json itself. The DNS fix works immediately but may get wiped on reboot or system update. If indexers stop resolving after a reboot, check the file and re-add the dns key.

FlareSolverr is dead

Even with DNS fixed, some indexers like 1337x sit behind Cloudflare’s browser protection. The traditional fix was FlareSolverr — a proxy that solves Cloudflare challenges. But it’s deprecated and non-functional now. Cloudflare caught on.

The workaround is simpler: use indexers that don’t need it. These all worked without issues:

  • International: YTS (movies), EZTV (TV), Nyaa (anime), The Pirate Bay, LimeTorrents, TorrentGalaxy
  • Spanish: EliteTorrent and others available in Prowlarr’s indexer list

Six indexers is plenty to start.

Wiring the stack together

With everything installed, the integration chain:

  1. Prowlarr → Sonarr + Radarr — Settings → Apps → add each with their API key (found in each app under Settings → General). This syncs all your indexers to both apps automatically.
  2. Sonarr + Radarr → qBittorrent — Settings → Download Clients → add qBittorrent with the NAS IP, port 8080, and credentials.
  3. Root folders — Sonarr: /media/tv and /media/anime. Radarr: /media/movies.
  4. Bazarr → Sonarr + Radarr — Connect with API keys, set Spanish and English as subtitle languages.
  5. Jellyfin — Run through the setup wizard: create admin user, add media libraries pointing to /media/movies, /media/tv, /media/anime.

Sonarr: manual search for existing episodes

Sonarr’s RSS feed only catches brand-new releases as they appear. If you add an existing show and wonder why nothing downloads — you need to trigger a manual search (the magnifying glass icon on the series page). The “Unknown Series” rejections in the logs are normal — that’s Sonarr ignoring shows you haven’t added to your library.

The full pipeline is working: search in Sonarr/Radarr → download via qBittorrent (through VPN) → file lands in the right folder → Jellyfin picks it up.

Media stack — debugging

Or: why the compose YAML above looks nothing like the first version. What started as “slow downloads” turned out to be several separate issues stacked on top of each other, where fixing one exposed the next.

Slow downloads and OpenVPN instability

The first symptom was download speeds around 1 MiB/s. The initial suspicion was the usual torrent checklist — speed limits, bad swarm, missing port forwarding. But the Gluetun logs told a different story: the OpenVPN tunnel was constantly flapping. MTU discovery failures, healthcheck timeouts, VPN restarts in a loop.

Trying to tune MTU settings made things worse. The real fix was simpler: switch from ProtonVPN OpenVPN to WireGuard. Generated a WireGuard config from ProtonVPN’s dashboard (with NAT-PMP enabled), extracted the private key, and used it in Gluetun:

VPN_TYPE: wireguard
WIREGUARD_PRIVATE_KEY: <private-key>

The whole WireGuard config file isn’t needed — just the private key. After this, the tunnel connected cleanly and stayed up. This was the turning point.

The forwarded-port command

With a stable tunnel, the next issue surfaced: Gluetun was getting a forwarded port from ProtonVPN but failing to update qBittorrent. Three separate problems in the same command:

  1. {{PORTS}} vs {{PORT}} — the original placeholder was {{PORTS}} (plural), but qBittorrent’s listen_port expects a single integer. Changed to {{PORT}}.
  2. Bad JSON payload — the payload included a current_network_interface field that qBittorrent rejected silently. Removing it and keeping just listen_port, random_port, and upnp fixed the POST.
  3. YAML parsing — the long shell command with colons in the URL caused Docker Compose to interpret the line as a map instead of a string. Switching from - KEY=value list form to KEY: value map form for the entire environment section fixed it.
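To see exactly what the fixed up-command sends, here is a sketch that builds the same form body with only the fields qBittorrent accepted. `set_preferences_body` is a hypothetical helper; the /api/v2/app/setPreferences endpoint is qBittorrent’s WebUI API, and everything here is stdlib:

```python
import json
from urllib.parse import urlencode

def set_preferences_body(port: int) -> str:
    """Build the URL-encoded form body that the Gluetun up-command
    POSTs to qBittorrent's /api/v2/app/setPreferences endpoint.
    Only listen_port, random_port, and upnp are included — extra
    fields like current_network_interface were silently rejected."""
    prefs = {"listen_port": port, "random_port": False, "upnp": False}
    return urlencode({"json": json.dumps(prefs)})

# Decodes back to: json={"listen_port": 39662, "random_port": false, "upnp": false}
print(set_preferences_body(39662))
```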

The decisive debugging step was manually testing the API from inside the Gluetun container:

# Verify API is reachable
wget -S -O- http://127.0.0.1:8080/api/v2/app/version
# Test the actual POST
wget -S -O- --post-data 'json={"listen_port":39662,"random_port":false,"upnp":false}' \
  http://127.0.0.1:8080/api/v2/app/setPreferences

Both returned 200 OK — proving the issue was the payload content, not auth or connectivity.

The path mismatch

Movies were downloading fine but not showing up in Jellyfin. Radarr logs told the story:

Import failed, path does not exist or is not accessible by Radarr: /downloads/...

qBittorrent saw completed files at /downloads/..., but that path didn’t exist inside the Radarr container. The two apps had different volume mounts — qBittorrent had /mnt/tank/media/downloads mounted at /downloads, while Radarr had /mnt/tank/media mounted at /media.

The fix: mount the entire /mnt/tank/media tree as /media in both containers, then set qBittorrent’s save path to /media/downloads and Radarr’s root folder to /media/movies. This is why Radarr ended up in the compose file — having both apps in the same compose guarantees consistent volume definitions.

With unified paths, Radarr could finally see the completed downloads, import them to the movie library via hardlink, and Jellyfin picked them up on the next scan.

Lessons from the debugging chain

Several separate issues stacked and looked like one problem. Each fix exposed the next blocker. The key insights:

  • Manual API tests were the decisive troubleshooting step for the qBittorrent integration.
  • WireGuard over OpenVPN eliminated tunnel instability in one move.
  • Docker path consistency matters more than any app-specific tweaking — if two containers need to share files, they must see them at the same path.
  • Don’t stack speculative config changes — isolate one change at a time.

Plex

Since the media is already organized under /media/{movies,tv,anime}, adding Plex alongside Jellyfin was trivial — just point it at the same folders. No duplication, no extra storage.

I’m not buying Plex Pass. Jellyfin already handles hardware transcoding for free, which was the main reason to consider it. Plex without Plex Pass still works fine for direct play on the LAN, and its main advantage is better client apps — especially on smart TVs and streaming devices where Jellyfin’s clients are noticeably weaker.

The killer feature: Plex Watchlist integration. Both Sonarr and Radarr have built-in support (Settings → Import Lists → Plex Watchlist). Sign in with a Plex account, set a quality profile and root folder, enable Monitor and Search on Add. Now whenever anyone adds a movie or show to their Plex watchlist, it automatically flows through the entire pipeline — watchlist → Sonarr/Radarr → qBittorrent → library. Zero manual intervention.

Mobile monitoring

nzb360 on Android. One app that connects to qBittorrent, Sonarr, Radarr, and Prowlarr — all using their API keys and the NAS IP. For remote access, swap the local IP for the Tailscale 100.x.y.z address.

Also worth looking at: LunaSea — free, open source, iOS + Android, connects to the same arr stack with webhook push notifications when downloads complete. Generally more polished than nzb360 for the arr stack specifically. With Seerr in the picture (see below), day-to-day usage shifts to browsing and requesting there, with LunaSea/nzb360 reserved for checking download progress or troubleshooting.

Bitwarden browser extension

With vault.tinkerer.tools live via Cloudflare Tunnel, connecting the Bitwarden extension/app was the next step. Should have been trivial — it wasn’t, but only because of one small mistake.

The fix

In the Bitwarden app (or browser extension), the custom server URL field is on the pre-login screen, not in settings:

  1. Open the Bitwarden app/extension
  2. Log out if logged into any account
  3. On the login screen, tap the gear icon or look for “Self-hosted” / “Log in to”
  4. Set the Server URL to: https://vault.tinkerer.tools
  5. Save, then log in with Vaultwarden credentials

Include the https://

I entered vault.tinkerer.tools without the scheme and got a Java stacktrace: CLEARTEXT communication to vault.tinkerer.tools not permitted by network security policy. The Bitwarden Android app enforces HTTPS and won’t fall back to HTTP. The full https:// prefix is mandatory. Cloudflare Tunnel provides the SSL cert, so it works — you just have to tell the app to use it.
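The rule the app enforces can be mirrored in a few lines of stdlib Python. `valid_server_url` is a hypothetical helper showing the check, not anything Bitwarden ships:

```python
from urllib.parse import urlparse

def valid_server_url(url: str) -> bool:
    """The Bitwarden Android app refuses cleartext traffic, so a usable
    self-hosted server URL needs an explicit https scheme and a host."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and bool(parsed.netloc)

print(valid_server_url("https://vault.tinkerer.tools"))  # True
print(valid_server_url("vault.tinkerer.tools"))          # False: no scheme, no netloc
print(valid_server_url("http://vault.tinkerer.tools"))   # False: cleartext
```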

Once the URL was correct, login worked immediately. Passwords now sync across all devices via the self-hosted vault. Don’t forget to set the same server URL in the browser extension on the laptop too.

Jellyfin — QuickSync and remote access

Two things that had been sitting on the todo list: enabling hardware transcoding and exposing Jellyfin through the Cloudflare Tunnel for remote viewing.

Enabling Intel QuickSync

Dashboard → Playback → Transcoding:

  • Hardware acceleration: Intel QuickSync Video (QSV)
  • Enable hardware decoding for: H264, HEVC, MPEG2, VC1 (all of them)
  • Allow encoding in HEVC format: enabled — better quality at lower bitrate for remote streams

This only works if the container has GPU access. Since Jellyfin was installed from the TrueNAS catalog with GPU passthrough already enabled, it picked up /dev/dri automatically. To verify it’s actually working: play something on a device that forces a transcode (like a browser at a lower quality setting) and check Dashboard → Active Streams — it should show (HW) next to the transcode.

The N355’s iGPU is surprisingly capable for this. Multiple simultaneous transcodes are realistic now, which matters for remote users.

Exposing via Cloudflare Tunnel

Cloudflare Zero Trust dashboard → Networks → Tunnels → existing tunnel → Public Hostnames → Add:

  • Subdomain: watch
  • Domain: tinkerer.tools
  • Service type: HTTP
  • URL: localhost:8096

https://watch.tinkerer.tools loads the Jellyfin login page from anywhere. The traffic flows through the tunnel directly — this isn’t the same as proxying video through Cloudflare’s CDN (which their ToS restricts), so it’s fine for a handful of users.

User accounts

Jellyfin supports separate user accounts per person: Dashboard → Users → Add User. Each gets their own watch history, continue watching, favorites, and library visibility. I set up accounts for myself, Laura, and my parents. You can restrict which libraries each user sees — my parents don’t need access to the anime library.

These same Jellyfin accounts carry over to Seerr (see below), so everyone can request content with their own login.

Seerr — content discovery and the autonomous loop

This is the piece that makes the whole media stack feel like a product instead of a collection of tools. Seerr (formerly Jellyseerr — the TrueNAS catalog now lists it under the new name) gives everyone a Netflix-like UI to browse trending content, search, and request movies or shows with one click.

Installation

Available in the TrueNAS app catalog. Search for “Seerr” in Apps → Discover Apps, install. Storage config: Host Path at /mnt/tank/apps/jellyseerr for the config data.

Setup wizard

Open http://<NAS-IP>:5055. The wizard has three steps:

  1. Sign in with Jellyfin — use Jellyfin admin credentials, point at the internal URL (http://localhost:8096, not the tunnel URL)
  2. Connect Radarr — select Default Server (not 4K). NAS local IP, port 7878, API key (from Radarr → Settings → General), quality profile, root folder /media/movies. Minimum Availability set to “Released”. Enable Scan and Enable Automatic Search (the key one).
  3. Connect Sonarr — select Default Server. NAS local IP, port 8989, API key, Series Type Standard, quality profile, root folder /media/tv. The anime section: Anime Series Type Anime, Anime Quality Profile (same or different), Anime Root Folder /media/anime. Season Folders on. Enable Scan and Enable Automatic Search.

Hit “Test” before saving each connection to verify profiles and folders load correctly.

Key settings

After the wizard, in Settings:

  • Auto-Approve for admin — my requests skip the queue and go straight to Radarr/Sonarr
  • Import Jellyfin users — Settings → Users → Import Jellyfin Users. Laura and my parents can log into Seerr with their Jellyfin credentials. Per-user auto-approve is configurable
  • Jellyfin External URL — in Seerr’s Jellyfin settings, set the External URL to https://watch.tinkerer.tools. This is what Seerr uses when generating “click to watch” links — without it, links point to the internal IP which doesn’t work remotely

Expose via Cloudflare Tunnel

Cloudflare Tunnel → Public Hostnames → Add:

  • Subdomain: request
  • Domain: tinkerer.tools
  • Service type: HTTP
  • URL: localhost:5055

The full autonomous loop

This is what the complete pipeline looks like now:

Laura opens request.tinkerer.tools
    → browses trending, searches for a movie
    → clicks "Request"
    → Seerr sends it to Radarr with automatic search enabled
    → Radarr searches indexers via Prowlarr
    → qBittorrent downloads through VPN
    → file lands in /media/movies
    → Bazarr grabs subtitles
    → Jellyfin picks it up
    → Laura watches it at watch.tinkerer.tools

Zero intervention from me. This is the setup I wanted from the start.

Seerr vs Plex Watchlist

With Seerr handling content requests, the Plex Watchlist integration in Sonarr/Radarr becomes redundant. Seerr is better in every way: it has a browsable UI with trending content, handles both movies and TV, supports multiple users with approval workflows, and the “Search on Add” flag actually works reliably. The Plex Watchlist integration would add items as monitored but not reliably trigger a search for them — Seerr fixes that.

Moving series to the right root folder

If a series ends up in the wrong root folder (e.g. a show filed under anime instead of TV), fix it in Sonarr: open the series → Edit → change the Root Folder Path → Save. Sonarr moves the files automatically. No shell commands needed.

Bazarr — finally configured

Bazarr had been installed since the media stack setup but never properly configured. The symptom: no subtitles downloading for anything, including Severance (which I’d specifically noticed was missing subs while Better Call Saul had them).

The root cause was simple: no language profile existed. Bazarr won’t do anything without one.

Language profile

Settings → Languages:

  • Created a profile called Spanish + English
  • Spanish as the first language (higher priority)
  • English as the second (fallback)
  • Cutoff set to Spanish — Bazarr stops searching once it finds Spanish subs, grabs English if Spanish isn’t available

Assigned this as the default profile for both Series and Movies in the general settings. This covers everything, including anime — most anime downloads from Nyaa come with Japanese audio, and Bazarr layers the subtitle track on top. If fansub releases already have embedded subs in the right language, enabling Use Embedded Subtitles in Settings → Subtitles prevents Bazarr from downloading duplicates.

Providers

After some research into what actually works hassle-free in 2026:

No account needed (just toggle on):

  • Podnapisi — solid general coverage for English
  • YIFY Subtitles — good for movies
  • Animetosho — anime-specific, pulls from fansub releases
  • Jimaku — another anime provider, recently added to Bazarr
  • SubX — the replacement for the now-removed Subdivx provider, handles Latin American Spanish

Free account required:

  • OpenSubtitles.com — biggest subtitle database overall. Create an account at opensubtitles.com (not the old .org), enter username/password in Bazarr

Providers to skip

Addic7ed requires a paid Anti-Captcha service to function in Bazarr — not worth the hassle. Subdivx was removed from Bazarr entirely as of v1.5.6 (the site stopped working and the provider was replaced by SubX). Subscene is frequently broken or rate-limited. Anything requiring Anti-Captcha or Death by Captcha adds cost and complexity for marginal subtitle improvement.

Six providers, five of which need zero configuration beyond toggling them on. After enabling all of them and triggering a search, Severance picked up subtitles within minutes.

Plex — role shrinking

Plex keeps pushing for Plex Pass payment (nag screens, no HW transcoding without it). With Jellyfin now handling hardware transcoding for free, exposed via the tunnel with user accounts, and Seerr replacing the Watchlist integration, Plex’s role has shrunk to:

  1. Better client apps — Plex’s smart TV and streaming device clients are still more polished than Jellyfin’s

That’s about it. I’m keeping Plex installed for now but it’s increasingly optional. If Jellyfin’s clients improve (or I stop caring about the UI difference), it’s gone.

Tunnel and access strategy — simplification

The original plan had both Tailscale (mesh VPN) and Cloudflare Tunnel (public hostnames) running on the NAS. In practice, maintaining two access layers for a homelab used by four people is unnecessary complexity.

New approach: Cloudflare Tunnel handles everything public-facing. Tailscale stays installed only on my personal devices (phone, laptop) as a lightweight admin backdoor — useful for SSH access and reaching things like qBittorrent that shouldn’t be on the public internet.

qBittorrent was briefly exposed through the tunnel with username/password auth. Pulled it back — qBittorrent’s web UI has had authentication bypass CVEs in the past, and there’s no reason to expose a torrent client publicly when I only check it occasionally. It’s now accessible only via Tailscale or on the LAN.

Current tunnel hostnames:

Subdomain                 Service        Port
vault.tinkerer.tools      Vaultwarden    HTTPS to internal port
watch.tinkerer.tools      Jellyfin       8096
request.tinkerer.tools    Seerr          5055

Storage cleanup

Two things needed attention: seeded torrents filling up disk, and no system for removing watched content over time.

qBittorrent — seed ratios

In the qBittorrent web UI: Tools → Options → BitTorrent → Seeding Limits. Set the ratio limit to 2.0 and the action to Remove torrent and its files. By the time qBittorrent’s own ratio limit kicks in, Radarr and Sonarr have already imported the files to the media library via hardlink. Whatever’s left in the download directory is a duplicate that’s safe to delete.

Radarr & Sonarr — completed download handling

In both apps: Settings → Download Clients. Under “Completed Download Handling,” enable Remove Completed. This tells the arr app to send a remove command to qBittorrent after a download has been imported to the media folder.

The two layers work together: Radarr/Sonarr remove the torrent after import (the normal path), and qBittorrent’s ratio limit catches anything that slips through (the safety net). The download directory stays clean without manual intervention.

Seeding etiquette

With Remove Completed enabled, the arr apps remove torrents right after import — which means barely any seeding happens. On public trackers nobody’s tracking your ratio, so this is fine functionally. If you want to contribute back, the Prowlarr indexer settings have per-indexer Seed Ratio and Seed Time fields (in minutes). Setting Seed Time to 1440 (24 hours) on each indexer gives you a day of seeding before cleanup, but it means managing the setting per indexer in both Radarr and Sonarr. I decided the qBittorrent-only approach was simpler and clearer.

Maintainerr — library lifecycle management

The other side of storage cleanup: preventing the media library from growing forever. Maintainerr is an automated tool that identifies unwatched or old content, shows it in a “Leaving Soon” collection on the home screen, and deletes it after a grace period. It integrates with Jellyfin, Radarr, Sonarr, and Seerr.

Deployed as a custom Docker Compose app:

services:
  maintainerr:
    container_name: maintainerr
    image: ghcr.io/maintainerr/maintainerr:latest
    user: 1000:1000
    volumes:
      - /mnt/tank/apps/maintainerr:/opt/data
    environment:
      - TZ=UTC
    ports:
      - 6246:6246
    restart: unless-stopped

Data stored at /mnt/tank/apps/maintainerr. The web UI at http://<NAS-IP>:6246 has a setup wizard that connects to Jellyfin, Radarr, Sonarr, and Seerr using their API keys — same pattern as every other arr app integration. Rules are created through the GUI: things like “movie older than 90 days and not watched by any user” or “TV series completed and unwatched for 60 days.” Matched media goes into a collection with a configurable grace period before deletion.

Janitorr — the alternative

Janitorr is a similar tool built specifically for Jellyfin. It’s more powerful in some ways (disk-space-aware deletion, tag-based schedules) but requires hand-editing a YAML config file with no GUI. For a setup where simplicity matters, Maintainerr’s web interface is the better fit. If Maintainerr’s Jellyfin support ever feels flaky (it was Plex-first and added Jellyfin later), Janitorr is the fallback.

Jellyfin — removing ghost entries

Ran into an issue where a show I’d moved out of the anime library via Sonarr still appeared in Jellyfin. Sonarr had moved the files correctly, but left an empty folder behind on disk. Jellyfin’s metadata was hanging onto it even after multiple refreshes from the home screen.

The fix was two steps. First, SSH in and delete the empty folder:

rm -r /mnt/tank/media/anime/Name-Of-Show

Then in Jellyfin: Dashboard → Libraries → the anime library → click the three dots → Scan Library. A regular refresh from the home screen isn’t enough — it needs a full library scan to actually drop entries for content that no longer exists on disk. If even that doesn’t work, go to the show’s page in Jellyfin and use the Delete option from the menu to force-remove the metadata entry.

Vaultwarden — admin token hardening

The admin panel was using a plain text token, which Vaultwarden warns about on every startup. The fix is to replace it with an Argon2 PHC hash — the admin panel then accepts the password you used to generate the hash, while the stored token is cryptographically hashed.

Generating the hash

First, find the actual container name TrueNAS assigned (it’s not just vaultwarden):

sudo docker ps | grep -i vault

In my case it was ix-vaultwarden-vaultwarden-1. Then generate the hash:

sudo docker exec -it ix-vaultwarden-vaultwarden-1 /vaultwarden hash --preset owasp

It prompts for the password you want to use for the admin panel and outputs an Argon2 PHC string starting with $argon2id$.... Copy the entire string.

The TrueNAS catalog gotcha

The obvious approach — paste the hash into the ADMIN_TOKEN field in the TrueNAS app config (Apps → Installed → Vaultwarden → Edit) — doesn’t work. The $ characters in the Argon2 hash get interpreted as Docker environment variable references, and the token arrives inside the container mangled. Escaping with $$ didn’t help either. Multiple people in the TrueNAS community forums hit the same wall.

The actual fix: edit config.json directly

Vaultwarden stores its runtime configuration in a config.json file inside the data directory. This file overrides environment variables and doesn’t go through Docker’s variable expansion. Find it:

sudo cat /mnt/tank/vaultwarden/data/config.json

Look for the "admin_token" field — it’ll have your old plain text token. Edit the file directly:

sudo nano /mnt/tank/vaultwarden/data/config.json

Replace the value with the full Argon2 hash:

"admin_token": "$argon2id$v=19$m=65540,t=3,p=4$YOUR_FULL_HASH_HERE",

Save, restart the Vaultwarden app from the TrueNAS UI. Log into /admin using the password you typed when generating the hash (not the hash itself). The startup logs should no longer show the plain text token warning.
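An alternative to hand-editing with nano is a jq one-liner, which sidesteps quoting mistakes. A sketch on a throwaway copy (assumes jq is installed; on the NAS you’d point CONFIG at /mnt/tank/vaultwarden/data/config.json and run the jq line with sudo — the hash below is a placeholder, not a real Argon2 string):

```shell
# Demo on a scratch file so nothing real is touched
CONFIG=/tmp/config.json
printf '{"admin_token":"old-plaintext-token","domain":"https://vault.tinkerer.tools"}\n' > "$CONFIG"

# Single quotes keep the $ characters in the Argon2 string literal
HASH='$argon2id$v=19$m=65540,t=3,p=4$EXAMPLE_HASH'

# jq rewrites only the admin_token field, leaving the rest of the JSON intact
jq --arg t "$HASH" '.admin_token = $t' "$CONFIG" > "$CONFIG.new" && mv "$CONFIG.new" "$CONFIG"
jq -r .admin_token "$CONFIG"   # prints the new hash
```

The win over nano: jq guarantees the output is still valid JSON, so a stray quote can’t silently break the whole config file.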

If you get locked out

If you paste a bad hash and can’t access the admin panel, edit config.json again and set admin_token back to a simple string temporarily. The file is the source of truth — what’s in the TrueNAS app config doesn’t matter once config.json exists.

Offsite backups — the nalisios repo

This was the highest-consequence gap in the whole setup. Vaultwarden had real passwords in it with zero offsite backup — a dead IronWolf would mean losing everything. The goal: automated, encrypted, versioned backups to S3, defined entirely as code so the whole thing is reproducible.

Architecture

The backup system lives in a Git repository called nalisios (after the NAS hostname). It has three parts:

  • infra/ — a CDK project (TypeScript) that creates the AWS resources: S3 bucket, IAM user, access key, Secrets Manager secret
  • scripts/ — portable backup and restore scripts that use rclone to sync data to S3
  • nas/ — system scripts that get installed once on the NAS: rclone setup, cron job, Docker DNS fix

The key design decision: rclone syncs to a single S3 prefix per service (not date-stamped copies). S3 versioning preserves previous versions automatically. This means rclone only uploads changed files on each run, which is critical for large services like Immich where a full copy every day would burn through storage. Lifecycle rules on noncurrent versions handle cost — Infrequent Access at 30 days, Glacier at 90 days, expiration at 365 days.
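In raw S3 terms, the lifecycle configuration the CDK stack generates presumably looks something like this (sketch — the rule ID is made up, and the real stack expresses it in TypeScript rather than JSON):

```json
{
  "Rules": [
    {
      "ID": "tier-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 30, "StorageClass": "STANDARD_IA" },
        { "NoncurrentDays": 90, "StorageClass": "GLACIER" }
      ],
      "NoncurrentVersionExpiration": { "NoncurrentDays": 365 }
    }
  ]
}
```

Note the rules target noncurrent versions only — the current copy of each file always stays in Standard storage for fast restores.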

The AWS side

The CDK stack creates:

  • S3 bucket — versioned, server-side encrypted (SSE-S3), all public access blocked, RETAIN on delete so a cdk destroy can’t accidentally nuke the backups
  • IAM user nas-backup — scoped to only PutObject, GetObject, ListBucket, and DeleteObject on this specific bucket. Nothing else.
  • Access key — the secret access key is stored in AWS Secrets Manager (never printed to the console). You retrieve it manually and paste it into the NAS setup.

Deployed from a dev machine:

cd infra && npm ci && npx cdk deploy

The stack outputs the access key ID, bucket name, and the Secrets Manager ARN. To retrieve the secret:

aws secretsmanager get-secret-value \
  --secret-id "arn:aws:secretsmanager:eu-west-1:ACCOUNT:secret:nalisios/nas-backup-secret-key-XXXXX" \
  --region eu-west-1 --query SecretString --output text

All three values go straight into Vaultwarden.

The NAS side

Clone the repo to the home directory on the NAS:

cd /mnt/tank/home/ignacio
git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/nalisios

Run the install script, which prompts for the three AWS values:

cd nalisios
sudo bash nas/install-backup.sh

It installs rclone (if missing), configures an rclone remote called nalisios with the AWS credentials, copies scripts and config to /mnt/tank/nalisios/ (persistent across TrueNAS updates), and sets up a daily cron job at 02:00.

Backup configuration

The config file at /mnt/tank/nalisios/backup.conf lists services and their source paths, one per line. Started with just Vaultwarden:

# service-name    source-path
vaultwarden       /mnt/tank/vaultwarden/data

Adding a new service later (Immich, Nextcloud, etc.) is just adding a line to this file.

Pre-backup hooks

Database services need consistent dumps before file sync — rcloning a live SQLite or Postgres data directory can produce a corrupted backup. The system supports per-service hook scripts: if /mnt/tank/nalisios/hooks/vaultwarden.sh exists and is executable, create-backup.sh runs it before syncing that service. If the hook fails, the service is skipped (don’t sync inconsistent data).
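The hook dispatch described above boils down to a few lines. A sketch of the logic (run_hook is a hypothetical name — the real implementation lives in scripts/create-backup.sh):

```shell
# Returns 0 when it's safe to sync the service: either no hook exists,
# or the hook exists, is executable, and succeeded.
run_hook() {
  local hook="$1"
  if [ -x "$hook" ]; then
    "$hook" || return 1   # hook failed → caller skips this service
  fi
  return 0
}
```

The important property: a missing hook is not an error (most services don’t need one), but a failing hook blocks the sync so an inconsistent database never lands in S3.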

Vaultwarden on TrueNAS uses Postgres 18, not SQLite. The hook dumps the database before each backup:

#!/usr/bin/env bash
set -euo pipefail
CONTAINER="ix-vaultwarden-postgres-1"
PG_USER="vaultwarden"
DUMP_PATH="/mnt/tank/vaultwarden/data/db-dump.sql"

if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER}$"; then
    echo "WARNING: Postgres container $CONTAINER not running, skipping dump"
    exit 0
fi

echo "Dumping Vaultwarden Postgres database..."
docker exec "$CONTAINER" pg_dumpall -U "$PG_USER" > "$DUMP_PATH"
echo "Database dumped to $DUMP_PATH"

To find the correct Postgres username for your setup: sudo docker exec ix-vaultwarden-postgres-1 env | grep POSTGRES_USER.

Verification

First backup ran manually to confirm everything works:

sudo bash scripts/create-backup.sh /mnt/tank/nalisios/backup.conf

The output showed the hook running the Postgres dump, then rclone uploading config.json, the database dump, RSA keys, and icon cache to S3. Running it a second time confirmed only deltas get uploaded (just the db-dump.sql, since its timestamp changed).

Restore test:

sudo bash scripts/restore-backup.sh --service vaultwarden --dest /tmp/restore-test --yes
diff -r /mnt/tank/vaultwarden/data /tmp/restore-test

The diff was clean — only tmp/ (transient runtime data) differed. End-to-end backup pipeline verified.

Cron

The daily backup runs at 02:00 UTC. One thing to watch: the install script originally pointed the log file at /var/log/, which is on the root filesystem that TrueNAS wipes on updates. Moved it to persistent storage:

0 2 * * * BUCKET_NAME=nalisios-backup-071372696451 /mnt/tank/nalisios/scripts/create-backup.sh /mnt/tank/nalisios/backup.conf >> /mnt/tank/nalisios/logs/backup.log 2>&1

Created the log directory: mkdir -p /mnt/tank/nalisios/logs.

Docker DNS fix script

The repo also includes nas/fix-docker-dns.sh — a script that writes Cloudflare (1.1.1.1) and Google (8.8.8.8) DNS resolvers into /etc/docker/daemon.json and restarts Docker. This is the persistent solution for the Movistar DNS blocking issue. If a TrueNAS update wipes daemon.json, re-run the script:

sudo bash /mnt/tank/home/ignacio/nalisios/nas/fix-docker-dns.sh

The script backs up the existing file and uses jq to merge the DNS key if other settings are present, rather than overwriting blindly.

Capacity notes

Did some thinking about how many people this setup can realistically serve. The bottleneck order:

  1. Upload bandwidth — divide your ISP’s upload by ~8 Mbps per 1080p stream. 50 Mbps up ≈ 5-6 simultaneous remote streams.
  2. Hardware transcoding — the N355’s QuickSync handles 3-5 simultaneous 1080p transcodes. Direct play doesn’t use the CPU.
  3. Cloudflare Tunnel — a handful of family members is fine. Turning it into a streaming service for 15 people is asking for a ToS conversation.

Three families is the comfortable sweet spot for this hardware and a residential connection. Four is pushing it.
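The point-1 arithmetic, spelled out (numbers are this post’s rough assumptions; integer division gives the conservative floor):

```shell
upload_mbps=50       # example ISP upload
per_stream_mbps=8    # rough bitrate of one 1080p remote stream
echo $(( upload_mbps / per_stream_mbps ))   # → 6 simultaneous streams
```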

Spanish audio for Dad

Looked into getting dubbed content for my parents, who won’t do subtitles. The approach: custom formats in Radarr and Sonarr that score releases containing terms like spanish, español, castellano, latino, dual, MULTi with a high positive score. This makes the arr stack prefer multi-language releases when available.

For movies this works well — most mainstream releases from Netflix, Amazon, and Disney+ rips include Spanish audio tracks baked into the MKV file. TV shows are harder — weekly episodes tend to be English-only WEB-DLs, and multi-language releases come later as season packs. For popular TV, dubbed versions eventually appear. For niche content, subtitles via Bazarr remain the fallback.

In Jellyfin, set each user’s preferred audio language to Spanish in Dashboard → Users → Playback settings. The Spanish track auto-selects whenever one exists. Not yet configured — parking this for later.

Hardening, tuning, and closing iteration 1

Final session before calling the first iteration done. The NAS is functional — this is about making it robust and filling the gaps that would bite later.

Disable truenas_admin

Credentials → Local Users → truenas_admin → Edit → unchecked Enabled → Save. The account isn’t deleted (internal services may reference it), just disabled so it can’t log in. My ignacio user has full admin privileges and SSH access, so nothing changes operationally.

SMART monitoring

The TrueNAS 24.10 UI doesn’t expose a SMART test scheduling widget on my version — the Data Protection screen only shows backup-related tasks. The S.M.A.R.T. Tests widget that the docs describe simply isn’t there. So this went through SSH instead.

sudo smartctl -i /dev/sda          # confirmed SMART is supported on the IronWolf
sudo smartctl -t short /dev/sda    # ran an immediate short test to verify
sudo smartctl -l selftest /dev/sda # confirmed test completed, no errors

Scheduled tests via smartd.conf:

sudo bash -c 'cat >> /etc/smartd.conf << EOF
# -s schedule fields: type/month/day-of-month/day-of-week/hour
/dev/sda -a -o on -S on -s (S/../../7/03|L/../01/./04)
EOF'
sudo systemctl enable smartmontools
sudo systemctl start smartmontools

This runs a short self-test every Sunday at 03:00 and a long self-test on the 1st of every month at 04:00. TrueNAS picks up SMART failures through its alert system and pushes them to Telegram (configured below).

TrueNAS may overwrite smartd.conf

Same caveat as daemon.json — TrueNAS manages system config files and may reset smartd.conf on updates. If SMART tests stop running after an update, re-run the config. Worth adding to the nalisios repo as a setup script alongside the Docker DNS fix.

Alert notifications — Telegram

Email alerts were the original plan, but Zoho Mail’s free tier doesn’t allow SMTP relay from third-party applications — authentication fails regardless of configuration. Rather than setting up Gmail as a workaround, Telegram is a better fit: notifications hit the phone directly and the setup is simpler.

Created a bot via @BotFather on Telegram, retrieved the chat ID from the getUpdates API endpoint, and configured both in TrueNAS under System → Alert Settings → Telegram. Alert level set to Warning — critical and warning events push to Telegram, informational ones stay in the UI.

SNMP Trap was in the alert services list — that’s for enterprise monitoring systems (Nagios, Zabbix). Not relevant here, left it alone.

Spanish dubbed content for Dad

The goal: when a multi-language release exists (containing both original audio and Spanish), prefer it over an English-only release. When it doesn’t exist, grab the English version and let Bazarr handle subtitles.

The key decision was the regex. A broader pattern matching spanish, castellano, latino risks pulling Spanish-only dubs with no original audio track — which would be annoying for anyone who wants to watch in the original language. The safer approach targets only multi-track releases:

Radarr and Sonarr — Settings → Custom Formats → Add:

  • Name: Multi-Language Audio
  • Condition: Release Title → regex: (?i)\b(dual|multi)\b — the (?i) flag makes it case-insensitive, so it already covers MULTi, Dual, and friends
  • Applied to quality profiles with a score of 100, minimum custom format score at 0 (preferred, not required)
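A quick sanity check of what that pattern catches, using grep -P as a close-enough stand-in for the .NET regex engine the arr apps actually use (release titles are made up):

```shell
matches() { printf '%s\n' "$1" | grep -qiP '\b(dual|multi)\b'; }

matches "Movie.2024.1080p.MULTi.WEB-DL-GROUP" && echo "score applied"
matches "Show.S01.Dual.Audio.1080p"           && echo "score applied"
matches "Movie.2024.1080p.WEB-DL-GROUP"       || echo "no score"
matches "Multipass.1997.1080p"                || echo "no score"   # \b blocks substring hits
```

The \b word boundaries are what keep titles like “Multipass” from getting a bogus score.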

Jellyfin — per-user audio preferences:

  • Dad’s account → Playback → Audio Language Preference → Spanish
  • My account and Laura’s → left at default (original track)

Same file, different experience per user. Mainstream content from Netflix/Amazon/Disney+ rips almost always includes Spanish audio in multi/dual releases. Niche content falls back to Bazarr subtitles.

Resource tuning

Ran docker stats and found qBittorrent eating 83% CPU and 12.2 GiB RAM — nearly 40% of total system memory. The cause: no resource limits on the custom compose app, and qBittorrent aggressively caches torrent data in memory.

Container resource limits — added deploy.resources.limits to the compose YAML:

Container     CPUs   Memory
qBittorrent   4      4 GiB
Gluetun       2      512 MiB

Network speed limits — configured inside qBittorrent’s web UI (Tools → Options → Speed):

  • Global Download Rate Limit: 31,250 KiB/s (~250 Mbps)
  • Global Upload Rate Limit: 19,000 KiB/s (~150 Mbps)

These leave roughly 150 Mbps download and 130 Mbps upload for Jellyfin streams, video calls, and general household use. The connection is 400/280 Mbps.
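The KiB/s-to-Mbps conversion behind those numbers, as a quick sketch (1 KiB/s = 1024 × 8 bits/s; integer math is close enough here):

```shell
kib_to_mbps() { echo $(( $1 * 1024 * 8 / 1000000 )); }

kib_to_mbps 31250   # → 256  (the "~250 Mbps" download cap)
kib_to_mbps 19000   # → 155  (the "~150 Mbps" upload cap)
```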

Jellyfin memory bump — TrueNAS catalog apps default to 4 GiB. Increased Jellyfin’s limit to 8 GiB via Apps → Installed → Jellyfin → Edit → Resources. Not critical for single-user streaming, but gives headroom for multi-user transcoding sessions.

iGPU status — confirmed /dev/dri/ contains card0 and renderD128 (Intel QuickSync available). intel_gpu_top showed 100% RC6 (idle) and 0% GPU utilization, which is correct — nobody was streaming. The iGPU activates on demand when Jellyfin transcodes.

VPN tunnel verification:

sudo docker exec gluetun wget -qO- https://ipinfo.io

Returns a US IP (San Jose, California). Tunnel is healthy, qBittorrent traffic is not touching the Spanish IP.

Rotate WireGuard key if exposed

The WireGuard private key was accidentally pasted into a chat session. Generated a new key from the ProtonVPN dashboard and updated the compose YAML. If this ever happens, rotate immediately — the old key should be considered compromised.

TrueNAS config backup

System → General → Manage Configuration → Download File. Saved offsite. This is the “reinstall and restore in 20 minutes” insurance — contains pool config, users, shares, app settings, everything except the actual data.


Updated checklist

  • Secure the Vaultwarden admin token — done. Argon2 PHC hash written directly to config.json to bypass Docker’s environment variable escaping.
  • Set up domain access — done via Cloudflare Tunnel. Vaultwarden is live at vault.tinkerer.tools.
  • Connect the Bitwarden browser extension — done. Server URL set to https://vault.tinkerer.tools on the pre-login screen. Remember to do the same on the laptop browser extension.
  • Deploy the rest of the media stack — done. Full pipeline working: Prowlarr → Sonarr/Radarr → qBittorrent (via VPN) → Jellyfin.
  • Enable HW transcoding in Jellyfin — done. Intel QuickSync (QSV) enabled in Dashboard → Playback → Transcoding. All decode formats ticked, HEVC encoding on.
  • Make Docker DNS fix persistent — done. Script at nalisios/nas/fix-docker-dns.sh merges DNS config into daemon.json. Re-run after TrueNAS updates.
  • Expose more services via Cloudflare Tunnel — done. Jellyfin at watch.tinkerer.tools, Seerr at request.tinkerer.tools.
  • Storage cleanup — done. qBittorrent seed ratio at 2.0 with “remove torrent and files.” Radarr/Sonarr Remove Completed enabled. Maintainerr deployed for library lifecycle management.
  • Offsite backups — done. Vaultwarden backed up to S3 (eu-west-1) via rclone, versioned, with lifecycle tiering. Automated daily at 02:00 via cron. Restore tested and verified.
  • Disable truenas_admin — done. Account disabled, not deleted.
  • Set up alerts — done via Telegram. Bot configured, warning level. Email alerts dropped (Zoho free tier doesn’t support SMTP relay).
  • Configure SMART tests — done via SSH. smartmontools service enabled. Weekly short, monthly long on the IronWolf.
  • Save TrueNAS config backup — done. Downloaded and stored offsite.
  • Spanish dubbed content — done. Custom format Multi-Language Audio in Radarr/Sonarr prefers dual/multi releases. Jellyfin user audio preference set to Spanish for Dad’s account.
  • Resource tuning — done. qBittorrent capped at 4 cores / 4 GiB, Gluetun at 2 cores / 512 MiB. Network speed limits set. Jellyfin bumped to 8 GiB.