Lachlan Cox

What started as a Plex server has slowly grown into a full home infrastructure setup. I also use it as a testing ground for work - we run Proxmox internally, so having my own cluster to break helps me break theirs less often.

Network Diagram

Homelab Network Architecture

Hardware

Networking

| Device | Role |
|---|---|
| UDM-Pro | Router, firewall, NVR |
| USW-Aggregation | 10GbE backbone |
| USW-Pro-Max-24 | Main switch, Proxmox connectivity |
| USW-Flex | Camera switch |
| USW-Lite-8-PoE (x2) | Study and living room |
| U6-Pro | Primary AP |
| U6-Mesh (x2) | Mesh APs for study and living room |
| UniFi Door Hub Mini | Garage door control |

Compute (Homelab Proxmox Cluster)

Three micro PCs form a Proxmox cluster with ZFS storage across 12TB of NVMe. The cluster handles all home services, but I also use it for work testing when I need extra resources or want to try HA configurations.
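Forming a cluster like this only takes a couple of commands per node. A sketch using Proxmox's `pvecm` tool (the cluster name and node IP are illustrative):

```shell
# On the first node (pve-amber): create the cluster
pvecm create homelab

# On each additional node (pve-lager, pve-porter): join it
pvecm add <ip-of-pve-amber>

# Verify quorum and membership
pvecm status
```

With three nodes, the cluster keeps quorum when any single node goes down, which is what makes HA testing practical.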

| Node | Hardware | CPU | RAM | Storage | NIC |
|---|---|---|---|---|---|
| pve-amber | Lenovo M70q Gen 6 | Ultra 7 265T (20t) | 64GB | 512GB + 4TB NVMe | 1GbE + 2.5GbE |
| pve-lager | Lenovo M70q Gen 6 | Ultra 7 265T (20t) | 64GB | 512GB + 4TB NVMe | 1GbE + 2.5GbE |
| pve-porter | Lenovo M70q Gen 6 | Ultra 7 265T (20t) | 64GB | 512GB + 4TB NVMe | 1GbE + 2.5GbE |

Each node has the built-in 1GbE NIC plus a 2.5GbE NIC fitted in place of the WiFi card. The boot drives hold ISOs and CT templates; the 4TB drive in each node forms its ZFS storage pool.
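For reference, creating a single-disk pool like this by hand is one command per node (the pool name and device path are assumptions; Proxmox can also do this from the web UI):

```shell
# Single-disk ZFS pool on the 4TB NVMe, 4K-aligned
zpool create -o ashift=12 tank /dev/nvme1n1

# Cheap, fast compression is usually worth enabling
zfs set compression=lz4 tank
```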

These sit in my server rack in 2x 1U mounts that hold two PCs each. I’m contemplating building a custom power supply to feed all three nodes, fitted into the spare slot in one of the mounts. This should clean up some cabling and likely reduce power usage.

Storage

| Device | Config | Capacity | Purpose |
|---|---|---|---|
| UNAS-Pro (x2) | 7x 10TB RAID 5 + hot spare | ~50TB each | Media, backups |

Power

2x CyberPower OR1000ERM1U UPSes (1000VA/600W) protect the core infrastructure.

Network

Internal services run on *.home.lachlancox.dev, resolved by the UDM-Pro’s internal DNS. Public services are exposed through a reverse proxy on my main domain.
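As an illustration of the reverse-proxy side, a Caddy site block for one internal service might look like this (the hostname and upstream address are assumptions, not my actual config):

```caddyfile
# Illustrative Caddyfile fragment
uptime.home.lachlancox.dev {
    reverse_proxy 192.168.32.10:3001
}
```

The UDM-Pro's DNS points each `*.home.lachlancox.dev` name at the proxy, which then routes by hostname to the right container.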

VLANs

| VLAN | Name | Subnet | Purpose |
|---|---|---|---|
| 1 | Management | 192.168.1.0/24 | Infrastructure hardware |
| 2 | Internal Users | 10.10.20.0/24 | WiFi clients |
| 20 | Infrastructure Services | 192.168.30.0/24 | Proxmox, infra services |
| 21 | Internal Services | 192.168.32.0/24 | Internal-only services |
| 22 | Public Services | 192.168.33.0/24 | Internet-exposed services |
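The VLAN-to-subnet mapping above is easy to sanity-check programmatically. A small sketch using Python's `ipaddress` module (the lookup helper is hypothetical, just to show the idea):

```python
import ipaddress

# VLAN subnets from the table above
VLANS = {
    1: ipaddress.ip_network("192.168.1.0/24"),    # Management
    2: ipaddress.ip_network("10.10.20.0/24"),     # Internal Users
    20: ipaddress.ip_network("192.168.30.0/24"),  # Infrastructure Services
    21: ipaddress.ip_network("192.168.32.0/24"),  # Internal Services
    22: ipaddress.ip_network("192.168.33.0/24"),  # Public Services
}

def vlan_for(ip: str):
    """Return the VLAN ID whose subnet contains the given address, else None."""
    addr = ipaddress.ip_address(ip)
    for vlan_id, net in VLANS.items():
        if addr in net:
            return vlan_id
    return None

print(vlan_for("192.168.33.10"))  # 22 (Public Services)
```

Handy when writing firewall rules: it makes it obvious which zone a given host lands in.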

Services

| Service | Type | VLAN | Description |
|---|---|---|---|
| infra-proxy | LXC | 20 | Caddy reverse proxy |
| infra-auth | LXC | 20 | Authelia for SSO |
| svc-plex | LXC | 22 | Plex media server |
| svc-headscale | LXC | 22 | Self-hosted Tailscale control server |
| svc-tandoor | LXC | 21 | Recipe management |
| svc-actual | LXC | 21 | Actual Budget |
| svc-media | VM | 21 | The arr stack (Docker) |
| svc-uptime | LXC | 21 | Uptime Kuma |

Note: Metrics and visibility are basically non-existent right now. Planning to add Grafana for dashboards at some point.

Work (Separate Proxmox Cluster)

A dedicated single-node cluster strictly for work.

| Node | Hardware | CPU | RAM | Storage | NIC |
|---|---|---|---|---|---|
| pve-supermicro | Supermicro A+ Server E301-9D-8CN4 | AMD EPYC 3251 SoC (8c/16t) | 128GB | 128GB + 4TB NVMe | 1GbE mgmt + 1GbE |

The Supermicro normally ships as a 1.5U chassis, but I custom-fitted it into 1U for my rack.

Naming Convention

Proxmox nodes are named after beer styles: pve-{beer} (e.g., pve-amber, pve-lager, pve-porter).

Services follow a prefix convention:

- infra-{name}: core infrastructure (reverse proxy, auth)
- svc-{name}: user-facing services
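Both conventions are simple enough to validate mechanically. A hypothetical helper (the regexes are my reading of the scheme above, not anything I actually run):

```python
import re

# pve-{beer}: Proxmox nodes, e.g. pve-amber
NODE_RE = re.compile(r"^pve-[a-z]+$")
# {prefix}-{name}: services, e.g. infra-proxy, svc-plex
SERVICE_RE = re.compile(r"^(infra|svc)-[a-z0-9-]+$")

def valid_name(name: str) -> bool:
    """True if the name matches either naming convention."""
    return bool(NODE_RE.match(name) or SERVICE_RE.match(name))

print(valid_name("pve-amber"))  # True
print(valid_name("svc-plex"))   # True
print(valid_name("plex"))       # False
```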

Notes: