Building a Proper Dev VM on the Homelab

How I stopped fighting browser-based IDEs and built an SSH-native dev playground on TrueNAS — plus a cross-device Claude Code config that travels with me.

I’d been trying two browser-based paths for personal dev on the homelab — CloudCLI (VS Code in a browser tab) and Paperclip (multi-agent orchestration). Neither felt right. CloudCLI was fine but alien; Paperclip was overkill for “open an editor and write code.” SSHing into the NAS worked best, but I didn’t want to pollute the base TrueNAS install with Vercel, Claude Code, SAM, and every other dev tool I touch.

The answer: a dedicated Ubuntu 24.04 VM named lab — 4 vCPU, 8 GB RAM, 40 GB VirtIO disk — on TrueNAS SCALE’s Virtual Machines screen. SSH in. Install everything. NFS-mount /mnt/tank/projects from the NAS. Use it as my personal playground.
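The NFS mount boils down to one line. A sketch, assuming the NAS resolves as nalisios over the tailnet (MagicDNS); substitute the tailnet IP if you don't use MagicDNS:

```shell
# On the VM: mount the NAS projects dataset at /projects (one-time).
sudo mkdir -p /projects
sudo mount -t nfs nalisios:/mnt/tank/projects /projects

# Or persist it in /etc/fstab; _netdev defers the mount until the
# network (and Tailscale) is up:
# nalisios:/mnt/tank/projects  /projects  nfs  defaults,_netdev  0 0
```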

Architecture

┌────────────────────┐        ┌──────────────────────┐
│  NAS (nalisios)    │        │  VM (lab)            │
│  TrueNAS SCALE     │        │  Ubuntu 24.04        │
│                    │        │                      │
│  /mnt/tank/        │        │  /projects  ←────────┼── NFS mount
│    ├── projects/   │────────│  ~/.ssh/codecommit   │   over Tailscale
│    ├── iso/        │        │  Docker, Node, SAM,  │
│    └── vms/lab     │        │  Claude Code, etc.   │
│                    │        │                      │
│  Tailscale (app)   │────────│  Tailscale (native)  │
└────────────────────┘        └──────────────────────┘
       │                              │
       └──────── tailnet ─────────────┘

Key decisions

VM vs LXC container

LXC is lighter and boots in seconds, but I need Docker inside for sam local invoke. Docker-in-LXC works but requires nesting flags and occasional wrestling. Docker-in-VM just works. Spent the extra RAM.

Tailscale over bridge networking

TrueNAS’s default VM networking uses macvtap, which has a well-documented limitation: the host and VM can’t talk to each other, even though both can talk to the rest of the LAN. The “proper” fix is creating a bridge interface, but that’s a recoverable-but-annoying operation with real lockout risk. Tailscale sidesteps the whole thing — both ends get tailnet IPs and talk as if they’re on the same network. Bonus: works from the work laptop too.
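The VM side is two commands, using Tailscale's documented install script; the nalisios name below assumes MagicDNS is on for the tailnet:

```shell
# On the VM: install Tailscale and join the tailnet.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Sanity check — this works even though macvtap blocks host<->VM on the LAN:
tailscale ping nalisios
```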

SSH keys over AWS credential helper

Previously used the AWS credential helper for CodeCommit. Pivoted to SSH keys because a real key at ~/.ssh/codecommit (RSA, not ed25519 — IAM doesn’t accept ed25519) is cleaner and doesn’t require aws configure state to work.
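The matching ~/.ssh/config stanza follows AWS's documented shape; APKAEXAMPLEKEYID is a placeholder for the SSH key ID IAM assigns when you upload the public key:

```shell
# ~/.ssh/config — route all CodeCommit endpoints through the uploaded key.
Host git-codecommit.*.amazonaws.com
  User APKAEXAMPLEKEYID
  IdentityFile ~/.ssh/codecommit
```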

claude-config as its own repo

Started as a subdir of nalisios (my NAS config repo) but the lifecycles diverge: NAS config changes when I add Docker services; claude-config changes when I add skills. Lives at ~/claude-config, not /projects/ — it’s personal infrastructure, not a project.

Gotchas

TrueNAS wants datasets, not directories. Tried to store the VM disk in /mnt/tank/vms/ — a plain dir, not a dataset. Got [EINVAL] attributes.path: The path must be a dataset or a directory within a dataset. Same for the ISO location. Create proper datasets via the UI before pointing the VM at them.
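For reference, the CLI equivalent of the UI step (run on the NAS) looks roughly like this; dataset names mirror the layout above:

```shell
# Create real ZFS datasets before pointing the VM wizard at them.
zfs create tank/iso
zfs create tank/vms
zfs create tank/vms/lab
```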

The wrong NIC picks itself. My NAS has two NICs — enp3s0 (disconnected, no cable) and enp4s0 (the live one). TrueNAS’s VM wizard listed both with no status indication. Picked enp3s0 first. DHCP failed silently, but nothing in the installer said “hey, your NIC has no carrier.” Tear down, rebuild with enp4s0.

zoxide needs ~/.local/bin in PATH. Ubuntu auto-adds it via .profile for bash, but zsh doesn’t source .profile. The zoxide installer drops the binary in ~/.local/bin, then .zshrc tries to eval "$(zoxide init zsh)" and fails because zoxide isn’t on PATH. Fix: prepend export PATH="$HOME/.local/bin:$PATH" to the zsh init block.
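Concretely, the ordering in ~/.zshrc has to be:

```shell
# ~/.zshrc — PATH first, because zsh never sources ~/.profile.
export PATH="$HOME/.local/bin:$PATH"
eval "$(zoxide init zsh)"
```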

The atuin installer has an interactive fallback. The original install line had a || bash <(curl ...) fallback that drops into an interactive prompt — which hangs forever in a non-TTY context. Use curl -LsSf ... | sh directly instead.
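The underlying failure mode is any script that prompts when stdin isn't a terminal. A minimal sketch of the guard you'd want (not the actual atuin installer; maybe_prompt is a hypothetical name):

```shell
# Sketch: gate interactive fallbacks behind a TTY check so a piped or
# scripted install fails fast with defaults instead of hanging on a prompt.
maybe_prompt() {
  if [ -t 0 ]; then
    echo "interactive: ask the user"
  else
    echo "non-interactive: use defaults"
  fi
}

maybe_prompt
```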

Browser terminals can’t load Nerd Fonts. Cloudflare’s ssh.tinkerer.tools is great from the work laptop, but it uses whatever font the browser picks — no way to inject JetBrainsMono Nerd Font, so Starship icons render as tofu. Treat the browser SSH as an escape hatch; use a real terminal with Nerd Fonts on the Mac for daily dev.

The scripts

Three scripts in ~/nalisios/lab/, all idempotent:

bootstrap.sh — Run on the NAS to prep a new VM. Copies ~/.ssh/codecommit + ~/.ssh/config to the VM and tests CodeCommit auth.

provision.sh — Run on the VM. Installs Docker, Node (via nvm), Claude Code, Vercel CLI, AWS CLI v2, SAM CLI, pyenv + Python, oh-my-zsh. Mounts /projects via NFS.

zsh-setup.sh — Run on the VM after provision. Installs Starship, zsh-autosuggestions, zsh-syntax-highlighting, eza, bat, fd, zoxide, atuin, direnv. Writes a starship.toml.
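The idempotency mostly comes down to one pattern: check before installing. A sketch of that pattern — ensure is a hypothetical helper, not something from the repo:

```shell
# Run an install command only if the tool's command is missing, so the
# whole script can be re-run safely.
ensure() {
  cmd=$1; shift
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "skip: $cmd already installed"
  else
    echo "install: $cmd"
    "$@"   # the actual install step, e.g. an apt-get or curl|sh line
  fi
}

ensure sh true               # present on any POSIX box -> skipped
ensure some-missing-tool true  # absent -> install path runs
```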

The claude-config repo

Separate repo at ~/claude-config on every device. Symlinks skills/commands/agents into ~/.claude/; imports MCP servers via claude mcp add-json --scope user. Pulling the repo on a new machine and running ./install.sh gives me the same skill set, same commands, same MCP seeds (filesystem + playwright) everywhere.

claude-config/
├── CLAUDE.md               # global memory
├── skills/                 # structural-editor, investigate, refine, balance-quarter, job-application
├── commands/
├── agents/
├── mcp-servers.json
└── install.sh
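The install script reduces to something like this sketch; the exact contents are assumptions, and the MCP registration is left as a comment because its JSON shape depends on mcp-servers.json:

```shell
#!/bin/sh
# Hypothetical sketch of install.sh — symlink the repo's dirs into
# ~/.claude/ so every device shares one set of skills/commands/agents.
set -eu
REPO="$HOME/claude-config"

mkdir -p "$HOME/.claude"
for d in skills commands agents; do
  ln -sfn "$REPO/$d" "$HOME/.claude/$d"   # -n: replace an existing link
done
ln -sf "$REPO/CLAUDE.md" "$HOME/.claude/CLAUDE.md"

# MCP servers are registered per-user, roughly:
#   claude mcp add-json --scope user <name> '<json from mcp-servers.json>'
```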

The thing I got wrong at the start was conflating “SSH-native” with “install everything on the NAS.” They aren’t the same. A VM gives me the SSH-native feel without polluting the base install, and a shared-config repo means no single device I work from has to be precious.