This is my humble homelab playground. Due to limited space in my apartment, I needed something small, quiet, and low-wattage that would still give me plenty of power to experiment with. I don't have a huge rack or fancy Cisco switches here, just a few mini PCs and a powerful Dell tower that I tinker with to learn, experiment, self-host, test, and troubleshoot things. It's also my space to mess with Proxmox, Kubernetes, Docker, Linux, and whatever new shiny tech catches my eye.
Almost everything here was bought used, and eBay is my best friend. I like to spend as little as possible and learn as much as I can across many different platforms.
I started this as a fun side project during lockdown, and it's kinda grown from there. Now I've got it running backups, a media server, CI/CD pipelines, a k8s cluster, monitoring dashboards, log collection, private VPNs, etc. I break it, fix it, and learn something new almost every week. Here's what I'm running...
Heimdall makes keeping track of everything easy, especially as the number of services grows. I can also add icons, tags, and categories for faster navigation. It's a very simple tool, but very useful: no hunting through bookmarks or remembering ports and IP addresses. It's my browser home page - everything I run is just a click away.
I run Heimdall using Docker Compose. Here's my compose file:
version: '3'
services:
  heimdall:
    image: ghcr.io/linuxserver/heimdall:latest
    container_name: heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
    volumes:
      - ./config:/config
    ports:
      - 80:80
    restart: unless-stopped
Most of my homelab apps are deployed via Docker Compose, as Proxmox VMs, or on my Kubernetes cluster. The number of applications changes over time, but there are several core services I consistently maintain.
The long-term goal is to run everything on Kubernetes. I started containerizing my apps with Docker Compose, and I'm now testing whether they can run inside Kubernetes wherever it makes sense. The migration itself has been a good learning experience, especially figuring out how to maintain service availability with no downtime. Real-world concerns like data persistence, secrets management, and rolling updates became much clearer once I had to actually implement them in my own homelab. It's still ongoing - not everything fits perfectly into Kubernetes - but moving more workloads over builds skills I don't always touch at work, and they translate directly to modern cloud-native environments.
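To make the migration concrete, here's a hypothetical Deployment for one of the small apps (Memos) with a zero-downtime rolling-update strategy and a PVC for data persistence - the image tag, port, and claim name are illustrative, not my exact manifests:

```yaml
# Hypothetical Deployment: rolling update with no downtime plus persistent data.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memos
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep the old pod alive until the new one is ready
      maxSurge: 1
  selector:
    matchLabels:
      app: memos
  template:
    metadata:
      labels:
        app: memos
    spec:
      containers:
        - name: memos
          image: neosmemo/memos:stable
          ports:
            - containerPort: 5230
          volumeMounts:
            - name: data
              mountPath: /var/opt/memos   # app data survives pod restarts
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: memos-data   # assumed pre-created PVC
```

With `maxUnavailable: 0`, Kubernetes only kills the old pod after the replacement passes its readiness checks, which is exactly the availability concern mentioned above.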
1) Proxmox Cluster - The main reason I started exploring Proxmox virtualization was to set up a home Kubernetes cluster.
I run multiple VMs for various purposes on this cluster.
2) Proxmox Backup Server - nothing specific to say here; it's just a great tool for backing up and restoring VMs, so I can experiment without fear. I've already needed it a few times after messing up configs. Totally a lifesaver.
3) TrueNAS - It currently runs on a Dell Tower 3630 with four 4TB disks. The initial idea was to have basic network file shares where I could store my personal and family stuff. I started with Samba running on Ubuntu Server, but eventually decided I wanted something that would let me create and manage storage pools, shares, and backups.
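For comparison, that original Samba setup amounts to a share definition in smb.conf, something like this (the path and group are illustrative):

```ini
; Example share block in /etc/samba/smb.conf (illustrative names)
[family]
   path = /srv/storage/family
   browseable = yes
   read only = no
   valid users = @family
```

TrueNAS manages the equivalent (plus ZFS pools, snapshots, and replication) through its UI, which is what made the switch worthwhile.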
4) GitLab - I run a GitLab server via Docker Compose.
I learned the hard way that GitLab really likes RAM - it happily eats 8GB even when idle. I first tried running it in a Proxmox VM, but it felt sluggish; moving it to Docker Compose helped a lot. Alongside it I run Traefik as a reverse proxy:
version: '3'
services:
  traefik:
    image: traefik:v2.9
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    ports:
      - 80:80
      - 443:443
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/config.yml:/config.yml:ro
      - ./data/acme.json:/acme.json
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      # Dashboard configuration
      - "traefik.http.routers.dashboard.rule=Host(`traefik.domain.com`)"
      - "traefik.http.routers.dashboard.service=api@internal"
networks:
  proxy:
    external: true   # shared reverse-proxy network, created ahead of time
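For reference, a GitLab service sitting behind that proxy could be sketched like this - the hostname, mounts, and router labels are illustrative placeholders, not my exact config:

```yaml
# Illustrative GitLab compose service behind Traefik (placeholder hostname/paths).
version: '3'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    hostname: gitlab.example.com
    restart: unless-stopped
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://gitlab.example.com'
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/logs:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.gitlab.rule=Host(`gitlab.example.com`)"
      - "traefik.http.services.gitlab.loadbalancer.server.port=80"
networks:
  proxy:
    external: true   # same shared network the Traefik container joins
```

Because Traefik discovers containers through labels on the shared `proxy` network, GitLab itself doesn't need any published ports.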
5) Portainer - I use it to manage my Docker containers, images, networks, and volumes. It's super helpful when I need to clean up old images or troubleshoot a container.
6) AWX - I run AWX as pods on Kubernetes for running Ansible playbooks. I use the playbooks for patching VMs and physical servers, configuration management, and small tweaks like timezone adjustments. Scheduled jobs take care of my VM configs automatically.
I originally ran AWX on a Red Hat VM, but it was pretty slow - jobs would take ages. Moving it to Kubernetes with Helm made a big difference: jobs run much faster, and it's easier to maintain.
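With the AWX Operator, deploying AWX itself reduces to a small custom resource; a minimal sketch (the namespace and service type here are illustrative choices, not necessarily mine):

```yaml
# Minimal AWX custom resource for the AWX Operator (illustrative values).
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  service_type: nodeport       # expose the web UI via a NodePort
  projects_persistence: true   # keep project data on a PVC across restarts
```

The operator watches for this resource and spins up the web, task, and database pods on its own, which is a big part of why the Kubernetes install is easier to maintain than the VM was.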
7) Grafana, Prometheus, Alertmanager - I use these for monitoring, alerting, and visualization. Alertmanager is configured to notify me when my servers hit high CPU or memory usage. The configuration all lives in the docker-compose YAML.
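Those CPU/memory alerts are ordinary Prometheus alerting rules over Node Exporter metrics; a sketch with illustrative thresholds (not my exact rules):

```yaml
# Example Prometheus alerting rules (thresholds and durations are illustrative).
groups:
  - name: host-alerts
    rules:
      - alert: HighCpuUsage
        # CPU busy % = 100 minus the idle rate, averaged per host
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High CPU on {{ $labels.instance }}"
      - alert: HighMemoryUsage
        # Memory used % derived from MemAvailable vs MemTotal
        expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High memory on {{ $labels.instance }}"
```

The `for:` clause keeps short spikes from paging me; an alert only fires after the condition holds for the whole window.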
8) Uptime Kuma - Super easy to configure; it basically just pings my services/URLs and lets me know if anything's down. A small tool, but super handy.
9) Tailscale - I use Tailscale VPN with one of the VMs on my Proxmox cluster configured as an exit node. This lets me access my local network securely from anywhere, without needing every individual device on the VPN - super useful when I'm traveling or working remotely.
10) Memos, Vikunja, and Filebrowser - these three apps are my go-to tools for staying organized: Memos for quick notes, Vikunja for Kanban-style task tracking, and Filebrowser for quick file sharing.
Networking and security in a homelab are always a bit of a rabbit hole. I started simple, just exposing a couple of services on ports, but after a few too many random bots hitting my open ports, I decided to tighten things up.
These days I use Cloudflare as my reverse proxy for anything web-based. It handles my three domains and manages SSL certificates. I also run Cloudflare Tunnels to reach services on local Docker ports without opening any inbound ports, which makes life a lot simpler.
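A cloudflared tunnel config is roughly this shape - the tunnel ID and hostnames below are placeholders:

```yaml
# Sketch of a cloudflared config.yml (tunnel ID and hostnames are placeholders).
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  # Route a public hostname to a local Docker-published port
  - hostname: app.example.com
    service: http://localhost:8080
  # Catch-all rule is required as the last entry
  - service: http_status:404
```

The daemon makes an outbound connection to Cloudflare, so nothing has to be port-forwarded on the router.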
For internal apps, I've started experimenting with Cloudflare Zero Trust to lock things down even further with its access policies.
My bastion server is protected with Duo MFA for SSH access, and I use Tailscale when I need to connect remotely.
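The Duo piece largely comes down to a PAM line on the bastion; roughly this (the module path varies by distro, and sshd also needs keyboard-interactive authentication enabled):

```text
# /etc/pam.d/sshd fragment enabling Duo for SSH logins (illustrative;
# pam_duo.so may live under a distro-specific path such as /lib64/security/)
auth  required  pam_duo.so
```

After a successful key or password exchange, PAM hands off to Duo for the push/phone approval before the session opens.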
For secrets and passwords, I run Bitwarden in a Docker Compose stack; it syncs with my browsers and phone, which makes life so much easier than trying to remember everything.
Other things I've put in place: Grafana Loki pulls in logs from the bastion and other key servers, all centralized in Grafana. It helps me spot suspicious activity (and my own mistakes).
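Shipping those logs is typically done with a Promtail scrape config along these lines - the Loki URL, labels, and log path are placeholders:

```yaml
# Minimal Promtail config shipping auth logs to Loki (placeholder URL/labels).
server:
  http_listen_port: 9080
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: auth
    static_configs:
      - targets: [localhost]
        labels:
          job: authlog
          host: bastion
          __path__: /var/log/auth.log   # tail SSH/auth events on the bastion
```

With `host` as a label, Grafana can filter failed-login noise per machine, which is how the "suspicious activity" spotting works in practice.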
Honestly, security is an area I keep learning about. It's very much "good enough for now, but always improving." One day I'll probably add more IDS/IPS tooling, but for a homelab this setup covers most of what I need.
My monitoring stack runs on Docker Compose and includes Prometheus, Grafana, Node Exporter, cAdvisor, and Alertmanager. I use Node Exporter to collect system metrics from my hosts and cAdvisor for container metrics. Process Exporter helps me track specific processes I want to monitor.
The setup alerts me via email when things go wrong - high CPU usage, memory issues, or any of my key services going down. Everything's connected through a dedicated monitoring network, and I use Redis for caching to improve performance.
version: '3.8'
services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
    restart: unless-stopped
    networks:
      - monitoring

  node_exporter:
    image: prom/node-exporter
    container_name: node_exporter
    ports:
      - 9100:9100
    volumes:
      - /:/rootfs:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
    command:
      - '--path.rootfs=/rootfs'
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
    restart: unless-stopped
    networks:
      - monitoring

  alertmanager:
    image: prom/alertmanager
    container_name: alertmanager
    volumes:
      - ./alertmanager/:/etc/alertmanager/
    command:
      - '--config.file=/etc/alertmanager/alertmanager.yml'
    ports:
      - 9093:9093
    restart: unless-stopped
    networks:
      - monitoring

  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=xxxxx
      - GF_SMTP_ENABLED=true
      - GF_SMTP_HOST=smtp.gmail.com:587
      - GF_SMTP_USER=xxxxxx@gmail.com
      - GF_SMTP_PASSWORD=xxxxxxxxx
      - GF_SMTP_FROM_ADDRESS=xxxxxx@gmail.com
      - GF_SMTP_SKIP_VERIFY=true
      - GF_SMTP_TLS_MODE=force_starttls
    restart: unless-stopped
    networks:
      - monitoring

  process_exporter:
    image: ncabatoff/process-exporter:latest
    container_name: process_exporter
    pid: "host"
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /proc:/host_proc:ro
      - /etc/passwd:/etc/passwd:ro
      - ./process-exporter:/config
    command:
      - '--procfs=/host_proc'
      - '--config.path=/config/process-exporter.yml'

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: unless-stopped
    ports:
      - 8083:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    depends_on:
      - redis
    networks:
      - monitoring

  redis:
    image: redis:latest
    container_name: redis
    ports:
      - 6379:6379
    networks:
      - monitoring

volumes:
  prometheus_data:
  grafana_data:

networks:
  monitoring:
    driver: bridge
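The email notifications on the Alertmanager side come from the alertmanager.yml mounted above; a minimal sketch with placeholder SMTP details:

```yaml
# Minimal alertmanager.yml for email notifications (all SMTP details are placeholders).
global:
  smtp_smarthost: smtp.gmail.com:587
  smtp_from: alerts@example.com
  smtp_auth_username: alerts@example.com
  smtp_auth_password: app-password   # an app-specific password, not the account password
route:
  receiver: email   # send everything to the one email receiver
receivers:
  - name: email
    email_configs:
      - to: me@example.com
```

A single catch-all route is enough for a homelab; grouping and per-severity routing can be layered on later.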
My Grafana dashboard showing system metrics and container stats (I use cAdvisor for containers)
Like any homelab, mine is a constant work in progress. I'm always trying new tools, breaking things, fixing them (sometimes), and learning along the way. There's plenty more running in the background - MySQL, Redis, a few Kubernetes-native apps, and probably a couple of containers I've forgotten about.