
One MCP Server to Rule Them All: Unifying 9 Homelab Services

The Problem: Six Interfaces for One Question

"Is anything broken in my homelab?" Answering that question used to mean:

- SSH into Proxmox to check guest status.
- Curl the Pi-hole API for DNS health.
- Open Grafana to scan Prometheus alerts.
- Check Graylog for error spikes.
- Look at Semaphore for failed automation runs.
- Glance at Caddy logs for 502s.
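The unification idea behind the MCP server can be sketched as one aggregator that fans out to per-service health probes and collapses them into a single answer. This is a minimal illustration, not the actual server: the check names and the stand-in lambdas are hypothetical placeholders for real Proxmox/Pi-hole/Grafana API calls.

```python
from typing import Callable, Dict

def aggregate_health(checks: Dict[str, Callable[[], bool]]) -> Dict[str, str]:
    """Run every per-service probe and collapse the results into one report."""
    report = {}
    for name, check in checks.items():
        try:
            report[name] = "ok" if check() else "degraded"
        except Exception as exc:  # an unreachable endpoint counts as broken, not a crash
            report[name] = f"error: {exc}"
    return report

# Hypothetical stand-ins for the real probes (Proxmox API, Pi-hole API, ...)
checks = {
    "proxmox": lambda: True,
    "pihole": lambda: True,
    "grafana": lambda: False,
}

if __name__ == "__main__":
    report = aggregate_health(checks)
    broken = [name for name, status in report.items() if status != "ok"]
    print("all good" if not broken else f"broken: {broken}")
```

With a layout like this, exposing the aggregator as a single MCP tool means one query replaces all six manual checks.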

Architecture: Prometheus + Grafana on a Dedicated LXC

Overview

The Prometheus + Grafana monitoring stack was migrated from a shared Docker VM to a dedicated LXC container. The shared VM hosted multiple stacks (pgAdmin, Portainer, monitoring), which created resource contention and made lifecycle management messy. Moving monitoring into its own LXC follows the homelab pattern of one service per container, giving cleaner isolation, backups, and management.
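On the new LXC, Prometheus scraping the cluster nodes might look like the fragment below. This is a hedged sketch, not the actual config: the job name, scrape interval, and target IPs are placeholders assuming node_exporter runs on each Proxmox node.

```yaml
# prometheus.yml (fragment) — hypothetical targets, substitute your node addresses
scrape_configs:
  - job_name: "proxmox-nodes"
    scrape_interval: 30s
    static_configs:
      - targets:
          - "192.168.1.12:9100"   # node 2 (placeholder IP)
          - "192.168.1.13:9100"   # node 3
          - "192.168.1.15:9100"   # node 5
          - "192.168.1.16:9100"   # node 6
```

Keeping this file inside the dedicated LXC means the monitoring config is captured by that container's backups alone, which is the isolation benefit described above.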

Semaphore Proxmox Power Management Automation

What Changed

Added Ansible playbooks to Semaphore for automated Proxmox cluster power management:

- Night Sleep: gracefully shuts down non-essential VMs/LXCs at night.
- Day On: wakes up the cluster in the morning.
- Scheduled via Semaphore cron.

Why

Running all VMs 24/7 wastes power when they're not needed. Automated scheduling reduces energy costs and wear on hardware.
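A "Night Sleep" playbook along these lines could use the `community.general.proxmox_kvm` module to stop VMs gracefully. This is a minimal sketch under assumptions: the API host, token variables, and VMIDs are all placeholders, and LXC guests would need the analogous `community.general.proxmox` module instead.

```yaml
# night-sleep.yml — sketch only; host, credentials, and VMIDs are hypothetical
- name: Night Sleep - stop non-essential guests
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Gracefully shut down each non-essential VM
      community.general.proxmox_kvm:
        api_host: "pve.example.lan"
        api_user: "automation@pve"
        api_token_id: "semaphore"
        api_token_secret: "{{ vault_proxmox_token }}"
        vmid: "{{ item }}"
        state: stopped
      loop: [105, 112, 130]   # placeholder VMIDs for non-essential guests
```

Pointing a Semaphore cron schedule at this template (and a mirror-image "Day On" playbook with `state: started`) gives the nightly/morning cycle described above.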

Infrastructure

My homelab runs on a 4-node Proxmox VE cluster hosting 50+ LXC containers and VMs. This wiki documents the architecture, conventions, and lessons learned.

Proxmox Cluster Architecture

Cluster Specifications

| Node   | Storage  | Type     | CPU     | RAM   | Primary Workloads         |
|--------|----------|----------|---------|-------|---------------------------|
| Node 2 | ssd-data | LVM-thin | 4 cores | 16 GB | PBS, Development          |
| Node 3 | zdata    | ZFS      | 4 cores | 32 GB | Databases, DNS-Primary    |
| Node 5 | ssd-data | LVM-thin | 4 cores | 16 GB | Graylog VM, DNS-Secondary |
| Node 6 | zdata    | ZFS      | 4 cores | 32 GB | Docker-Main, HA services  |

Total Resources:
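The per-node figures in the specifications table sum up directly; a quick sketch of that arithmetic (node data copied from the table, the totalling itself is mine):

```python
# Per-node specs copied from the cluster specifications table
nodes = {
    "Node 2": {"cores": 4, "ram_gb": 16},
    "Node 3": {"cores": 4, "ram_gb": 32},
    "Node 5": {"cores": 4, "ram_gb": 16},
    "Node 6": {"cores": 4, "ram_gb": 32},
}

total_cores = sum(spec["cores"] for spec in nodes.values())
total_ram = sum(spec["ram_gb"] for spec in nodes.values())
print(f"{total_cores} cores / {total_ram} GB RAM")  # -> 16 cores / 96 GB RAM
```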