Self-Hosted Media Pipeline
How I replaced paid streaming services with an automated, self-hosted media pipeline on a TrueNAS box.
The goal of my homelab journey was to stop paying for streaming services that keep fragmenting content across apps, and replace them with something where anyone in the family browses what’s trending, clicks a button, and watches. A few weekends of debugging later, it mostly works as advertised.
The stack
TrueNAS hosts everything: Jellyfin for playback, Seerr for requests, Radarr and Sonarr for movie/TV automation, Prowlarr for indexers, Bazarr for subtitles, Maintainerr for cleanup, and qBittorrent downloading through a Gluetun (ProtonVPN) tunnel.
Decisions
Jellyfin over Plex
I started with Plex in mind and switched after some tests. The reason: hardware transcoding. My NAS has an Intel N355 with QuickSync, which can handle multiple simultaneous 1080p transcodes. Plex locks that behind Plex Pass. Jellyfin does it for free.
Plex’s one killer feature was Watchlist integration with Sonarr/Radarr. Seerr replaced that. Jellyfin is also fully self-hosted — no “sign in to watch your own files.”
The VPN decision
On a Spanish ISP, torrenting without a VPN is not an option. Mullvad looked perfect at first — €5/month, no account, trivial Gluetun setup. But Mullvad removed port forwarding in July 2023. Without it, seeding tanks because peers can’t reach you; on private trackers it’s a non-starter.
ProtonVPN paid ended up being the right answer: P2P with port forwarding via NAT-PMP, native Gluetun support, and €2.99/month on the 24-month plan.
How it runs
Everything lives under one /mnt/tank/media tree on a single ZFS dataset. Movies, TV, anime, and downloads share the pool — so when Sonarr or Radarr “moves” a finished download to the library, it’s an instant hardlink, not a cross-drive copy.
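That hardlink behaviour is easy to demonstrate. A quick sketch, with a scratch directory standing in for /mnt/tank/media:

```shell
# Simulate Radarr's "move" when downloads and library share one filesystem.
mkdir -p /tmp/tank-demo/downloads /tmp/tank-demo/movies
echo "payload" > /tmp/tank-demo/downloads/movie.mkv
# ln (no -s) creates a hardlink: a second name for the same inode, no copy.
ln /tmp/tank-demo/downloads/movie.mkv /tmp/tank-demo/movies/movie.mkv
stat -c '%h' /tmp/tank-demo/movies/movie.mkv   # prints 2: two names, one inode
```

This is also why the torrent can keep seeding from the downloads folder after the “move”: both names reference the same data blocks.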
Gluetun and qBittorrent are deployed together via Docker Compose because qBittorrent shares Gluetun’s network stack. The rest (Jellyfin, Sonarr, Radarr, Bazarr, Prowlarr, Seerr) come from the TrueNAS app catalog.
Two details that aren’t obvious from the compose file:
UID 568 everywhere. TrueNAS catalog apps run as 568. Any compose container sharing files with them needs to match: one chown -R 568:568, PUID/PGID set to 568, done.
Movistar DNS blocking. The Spanish ISP blocks torrent site domains at the DNS level, and Docker forwards DNS to the host by default — so Prowlarr can’t resolve indexers. Fix: merge "dns": ["1.1.1.1", "8.8.8.8"] into /etc/docker/daemon.json. TrueNAS may overwrite it on updates, so a script in the backup repo reapplies it.
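The reapply script is essentially the following. This is a naive sketch: it assumes daemon.json carries no other keys worth preserving (merge with jq instead if yours does), and CONF points at a scratch path here — on the real host it would be /etc/docker/daemon.json:

```shell
#!/bin/sh
# Ensure Docker forwards DNS to public resolvers instead of the host's
# (Movistar-filtered) ones. Idempotent: only rewrites when "dns" is absent.
CONF="${CONF:-/tmp/daemon.json}"
if ! grep -q '"dns"' "$CONF" 2>/dev/null; then
  printf '{\n  "dns": ["1.1.1.1", "8.8.8.8"]\n}\n' > "$CONF"
  # systemctl restart docker   # needed on the real host for it to take effect
fi
cat "$CONF"
```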
The compose file
```yaml
services:
  gluetun:
    cap_add:
      - NET_ADMIN
    container_name: gluetun
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 512M
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      PORT_FORWARD_ONLY: 'on'
      SERVER_COUNTRIES: United States
      TZ: Atlantic/Canary
      VPN_PORT_FORWARDING: 'on'
      VPN_PORT_FORWARDING_DOWN_COMMAND: >-
        /bin/sh -c 'wget -O- -nv --retry-connrefused --waitretry=1 --tries=15
        --post-data "json={\"listen_port\":0}"
        http://127.0.0.1:8080/api/v2/app/setPreferences'
      VPN_PORT_FORWARDING_UP_COMMAND: >-
        /bin/sh -c 'wget -O- -nv --retry-connrefused --waitretry=1 --tries=15
        --post-data
        "json={\"listen_port\":{{PORT}},\"random_port\":false,\"upnp\":false}"
        http://127.0.0.1:8080/api/v2/app/setPreferences'
      VPN_SERVICE_PROVIDER: protonvpn
      VPN_TYPE: wireguard
      WIREGUARD_PRIVATE_KEY: <private_key>
    image: >-
      qmcgaw/gluetun@sha256:b45facb7490584f6172dc77027d773bf8cf1d0d8f3b6488093f9364735f149e8
    ports:
      - '8080:8080'
    restart: unless-stopped
    volumes:
      - /mnt/tank/apps/gluetun:/gluetun
  qbittorrent:
    container_name: qbittorrent
    depends_on:
      - gluetun
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 4G
    environment:
      PGID: '568'
      PUID: '568'
      TZ: Atlantic/Canary
      WEBUI_PORT: '8080'
    image: lscr.io/linuxserver/qbittorrent:5.1.4-r2-ls444
    network_mode: service:gluetun
    restart: unless-stopped
    volumes:
      - /mnt/tank/apps/qbittorrent:/config
      - /mnt/tank/media/:/media
```
network_mode: service:gluetun is the important line: qBittorrent has no network interface of its own, so all its traffic goes through the VPN tunnel.
Debugging “slow downloads”
“Slow downloads” turned out to be three separate problems stacked on each other. Fixing one exposed the next.
The tunnel wouldn’t stay up. 1 MiB/s speeds, and Gluetun logs showed OpenVPN flapping in a loop. Switching to WireGuard fixed it cleanly.
Gluetun couldn’t tell qBittorrent about the forwarded port. Tunnel stable, port assigned, but qBittorrent never got it. The breakthrough was testing the API manually from inside the Gluetun container — first a sanity probe, then the real call:
```shell
wget -S -O- http://127.0.0.1:8080/api/v2/app/version
wget -S -O- --post-data 'json={"listen_port":39662,"random_port":false,"upnp":false}' \
  http://127.0.0.1:8080/api/v2/app/setPreferences
```
Both returned 200 OK, proving the issue was the payload, not auth or connectivity. Three bugs in one command: {{PORTS}} where Gluetun’s template variable is {{PORT}}; an extra current_network_interface field that qBittorrent silently rejected; and a YAML parsing issue from the colons in the URL, fixed by switching to the environment map form.
qBittorrent requires authentication by default, even from localhost. Enable Bypass authentication for clients on localhost in the Web UI settings, or Gluetun’s port-forwarding command can’t reach the API.
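If you prefer setting it outside the UI, the same toggle lives in qBittorrent’s config file — key name as of qBittorrent 5.x, worth verifying against your version, and the container must be stopped while editing:

```ini
# /config/qBittorrent/qBittorrent.conf
[Preferences]
WebUI\LocalHostAuth=false
```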
Movies downloaded but didn’t appear. Radarr logs: Import failed, path does not exist. qBittorrent saw files at /downloads/..., but that path didn’t exist inside Radarr’s container — different volume mounts, same files on disk, invisible to each other. Fix: mount the full /mnt/tank/media tree as /media in both containers, set qBittorrent’s save path to /media/downloads, Radarr’s root folder to /media/movies.
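In compose terms the fix looks like this — both containers mount the same host tree at the same in-container path (shown as compose snippets for illustration; for TrueNAS catalog apps the equivalent mapping lives in the app’s storage settings):

```yaml
# qBittorrent: save path inside the container is /media/downloads
qbittorrent:
  volumes:
    - /mnt/tank/media/:/media
# Radarr: root folder inside the container is /media/movies
radarr:
  volumes:
    - /mnt/tank/media/:/media
```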
qBittorrent then ate the NAS. Found it at 83% CPU and 12 GiB RAM. Compose now caps it at 4 CPUs / 4 GiB, Gluetun at 2 CPUs / 512 MiB, Jellyfin at 8 GiB for multi-user transcoding headroom. In-app speed limits leave ~150/130 Mbps headroom on the 400/280 Mbps line.
The autonomous loop
This is where it stops feeling like a collection of containers and starts feeling like a product.
Seerr gives everyone a Netflix-like UI to browse and request. It connects to Radarr and Sonarr with “Automatic Search” enabled, so a request immediately triggers the full pipeline:
Someone opens Seerr
→ browses trending, searches for a movie
→ clicks "Request"
→ Seerr sends it to Radarr
→ Radarr searches indexers via Prowlarr
→ qBittorrent downloads through VPN
→ file lands in /media/movies
→ Bazarr grabs subtitles
→ Jellyfin picks it up
→ they watch
Zero intervention from me. Each Jellyfin user has their own watch history, continue watching, favourites, and library visibility — and those accounts carry over to Seerr, so everyone logs in once across both. Both are exposed via Cloudflare Tunnel — they work from anywhere without a VPN.
The polish layer
Subtitles
Bazarr ships with no language profile and does nothing until you create one. Mine tries Spanish and English. Five providers need zero config beyond toggling them on: Podnapisi, YIFY, Animetosho, Jimaku, SubX. OpenSubtitles.com needs a free account. Skip Addic7ed (paid Anti-Captcha), old Subdivx (removed), and Subscene (broken).
Spanish audio for Dad
My parents won’t do subtitles. The approach: a custom format in Radarr/Sonarr called Multi-Language Audio with regex (?i)\b(dual|multi|MULTi)\b, scored 100 with minimum 0. The arr stack prefers multi-language releases but doesn’t require them.
A broader regex matching spanish or castellano risks pulling Spanish-only dubs with no original audio. Targeting dual/multi grabs releases that include both.
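A quick way to smoke-test the custom format against typical release names — grep -P speaks a close-enough regex dialect, and note that (?i) already makes the match case-insensitive, so the MULTi alternative is redundant but harmless:

```shell
# Which release names would the Multi-Language Audio format score?
RE='(?i)\b(dual|multi|MULTi)\b'
printf '%s\n' \
  'Movie.2023.1080p.BluRay.DUAL.x264-GRP' \
  'Movie.2023.1080p.WEB-DL.Castellano.x264' \
  'Show.S01E01.MULTi.1080p.WEB.x264' \
  | grep -P "$RE"
# prints the first and third names; the Spanish-only dub does not match
```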
In Jellyfin, each user has an audio preference. Dad’s account is set to Spanish — the Spanish track auto-selects whenever one exists. Same file, different experience per user.
Storage lifecycle
Two mechanisms keep disk from filling up. qBittorrent hits its seed ratio (2.0) or time limit, then auto-deletes the torrent and its files — by then Radarr/Sonarr have already hardlinked into the library, so what’s deleted is a duplicate.
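Why the auto-delete is safe, in one sketch: removing one hardlink only decrements the inode’s link count, so the library copy survives qBittorrent’s cleanup (scratch paths, assuming the hardlink was made at import time):

```shell
mkdir -p /tmp/lifecycle/downloads /tmp/lifecycle/movies
echo "payload" > /tmp/lifecycle/downloads/film.mkv
ln /tmp/lifecycle/downloads/film.mkv /tmp/lifecycle/movies/film.mkv  # Radarr's import
rm /tmp/lifecycle/downloads/film.mkv           # qBittorrent's post-seed cleanup
cat /tmp/lifecycle/movies/film.mkv             # prints "payload": data intact
```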
For the library itself, Maintainerr identifies old unwatched content, shows it in a “Leaving Soon” collection on the Jellyfin home screen, and deletes after a grace period. Rules like “movie older than 90 days and not watched by any user” keep the library bounded without surprise deletions.
Capacity
How many people this realistically serves — the bottleneck order:
- Upload bandwidth — divide ISP upload by ~8 Mbps per 1080p stream. 50 Mbps up ≈ 5–6 simultaneous remote streams.
- Hardware transcoding — the N355’s QuickSync handles 3–5 simultaneous 1080p transcodes. Direct play doesn’t touch the CPU.
- Cloudflare Tunnel — a handful of family is fine. Fifteen people is asking for a ToS conversation.
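The upload arithmetic above, as a one-liner you can plug your own numbers into:

```shell
# Remote 1080p streams an uplink can carry, at ~8 Mbps per stream.
UPLOAD_MBPS=50
PER_STREAM_MBPS=8
echo $(( UPLOAD_MBPS / PER_STREAM_MBPS ))   # prints 6 for a 50 Mbps uplink
```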
Three or four families is the comfortable sweet spot on this hardware and a residential connection.