This is my humble homelab playground. Due to limited space in my apartment, I needed something small, quiet, and low-wattage that would still give me plenty of power to experiment with. I don't have a huge rack or fancy Cisco switches here, just a few mini PCs and a powerful Dell tower that I tinker with to learn, experiment, self-host, test, and troubleshoot things. It's also my space to mess with Proxmox, Kubernetes, Docker, Linux, and whatever new shiny tech catches my eye.
Almost everything here was bought used, and eBay is my best friend. I like to spend as little as possible and learn as much as I can across many different platforms.
I started this as a fun side project during lockdown, and it's kinda grown from there. Now I've got it running backups, a media server, CI/CD pipelines, a Kubernetes cluster, monitoring dashboards, log collection, private VPNs, and more. I break it, fix it, and learn something new almost every week. Here's what I'm running...
Homelab networking architecture overview (click to expand)
Heimdall makes keeping track of everything easy, especially as the number of services grows. I can also add icons, tags, and categories for faster navigation. It's a very simple tool, but very useful: no more hunting through bookmarks or remembering ports and IP addresses. It's my browser's home page, so everything I run is just a click away.
Most of my homelab apps are deployed either via Proxmox VM or run on my Kubernetes cluster. The number of applications changes over time, but there are several core services I consistently maintain.
The goal is to run everything on Kubernetes. I previously used Docker Compose containers, but I have now migrated them to Kubernetes wherever it makes sense. The migration process itself has been a good learning experience, especially figuring out how to maintain service availability with no downtime. Real-world concerns like data persistence, secrets management, and rolling updates became much clearer once I had to actually implement them in my own homelab. It's still ongoing, and not everything fits perfectly into Kubernetes, but moving more workloads over builds skills that I don't always touch at work yet can bring back to it, and that translate directly to modern cloud-native environments.
1) Proxmox Cluster - The main reason I started exploring Proxmox virtualization was to set up a home Kubernetes cluster.

I run multiple VMs for various purposes on this cluster.
2) Proxmox Backup Server - nothing specific to say about it; it's just a great tool for backing up and restoring VMs, so I can experiment without fear. I've already needed it a few times after messing up configs. Totally a lifesaver.
3) TrueNAS - It currently runs on a Dell 3630 tower with four 4TB disks. The initial idea was to have a basic network file share where I could store my personal and family stuff. I started with Samba running on Ubuntu Server, but eventually I decided I wanted something that would let me create and manage storage pools, shares, and backups.
4) GitLab - I used to run the GitLab server on docker-compose. The pro was definitely the speed and job processing time; however, one day it suddenly crashed, and it took a lot of effort to get the data back. Since then I've moved GitLab to a Proxmox VM, with backups and snapshots on an SMB share, which makes my life much easier. I also run a GitLab Runner on Kubernetes via Helm, which handles my CI/CD pipelines for deploying apps to the cluster.
I kept losing app code and artifacts across different folders, so I started managing all of my Kubernetes applications with Helm charts and Terraform. This makes it easier to manage and deploy my applications, and to reuse code faster when needed. I also use a GitOps approach to manage my Kubernetes cluster: I keep a separate GitLab repository for the cluster and use ArgoCD to manage the deployment of my applications.
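To give a sense of what the ArgoCD side looks like, here is a minimal sketch of an Application that tracks a Helm chart in a Git repo. The repo URL, chart path, and namespaces are placeholders, not my actual setup:

```shell
# Hypothetical ArgoCD Application: sync a Helm chart from a GitLab repo
# into the cluster. ArgoCD watches the repo and keeps the cluster in sync.
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: uptime-kuma            # example app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.lan/homelab/k8s-apps.git  # placeholder repo
    targetRevision: main
    path: charts/uptime-kuma   # placeholder chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
EOF
```

With `automated` sync plus `selfHeal`, any manual change in the cluster gets reverted to whatever is in Git, which is the whole point of GitOps.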
5) Portainer - I use it for managing my Docker containers, images, networks, and volumes. It is super helpful if I need to clean up old images or troubleshoot a container.
6) AWX - I run AWX on Kubernetes as a pod for running Ansible playbooks. I use the playbooks for patching VMs and physical servers, configuration management, and timezone adjustments on VMs. I also have scheduled jobs that take care of my VMs' config.
I originally ran AWX on a Red Hat VM, but it was pretty slow; jobs would take ages. Moving it to Kubernetes with Helm made a big difference. Now jobs run much faster, and it's easier to maintain.
7) Grafana, Prometheus, Alertmanager - I use these for monitoring, alerting, and visualization. Alertmanager is configured to alert me when my servers hit high CPU or memory usage. I originally ran this stack on docker-compose, but now I deploy the full kube-prometheus-stack on Kubernetes via Helm. It's much easier to manage and comes with great default dashboards.
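For the high-CPU alerting, a PrometheusRule is the usual mechanism with kube-prometheus-stack. This is a sketch with example names and thresholds, not my exact rule; the `release` label is assumed to match a Helm release called `kps` so the operator picks the rule up:

```shell
# Hypothetical alert rule: fire when a node's CPU stays above 90% for 10m.
# node_cpu_seconds_total comes from node-exporter; "idle" time inverted
# gives overall CPU usage per instance.
kubectl apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: homelab-cpu-alerts
  namespace: monitoring
  labels:
    release: kps               # assumed Helm release name
spec:
  groups:
    - name: node.rules
      rules:
        - alert: HighCPUUsage
          expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "High CPU on {{ $labels.instance }}"
EOF
```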
8) Uptime Kuma - Super easy to configure; it just pings my services and lets me know if anything's down. I run it on Kubernetes now via Helm. Small tool, but super handy.

9) Tailscale - I use Tailscale VPN with one of the VMs on my Proxmox cluster configured as an exit node. This lets me access my local network securely from anywhere without needing every individual device to be connected to the VPN, which is super useful when I'm traveling or working remotely.
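Setting up such an exit node on a Linux VM boils down to a couple of steps, roughly following Tailscale's documented procedure (exact flags can vary by version):

```shell
# Enable IP forwarding so the VM can route traffic for other devices
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise this machine as an exit node; it must then be approved
# in the Tailscale admin console before other devices can use it
sudo tailscale up --advertise-exit-node
```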
10) Memos, Vikunja, and Filebrowser - these three apps are my go-to tools for staying organized: Memos for quick notes, Vikunja for Kanban-style to-do tracking, and Filebrowser for quick file sharing.


11) ntfy - A simple self-hosted push notification server. I deploy it on Kubernetes via Helm and use it to get phone notifications when my cron backup jobs finish or fail; just a one-liner curl call at the end of each cron entry. I have the ntfy Android app on my phone, and when I'm away from home I connect via Tailscale to reach my local ntfy instance. Simple and does exactly what I need.
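For context, that one-liner looks something like this. The host, topic name, and backup script path are placeholders; ntfy's publish API just takes a plain HTTP POST with the message as the body:

```shell
# Example crontab entry: run the backup at 02:30, then push the result
# to the (hypothetical) "backups" topic on a local ntfy instance.
# 30 2 * * * /opt/scripts/backup.sh && curl -s -d "Backup finished OK" http://ntfy.lan/backups || curl -s -d "Backup FAILED" -H "Priority: high" http://ntfy.lan/backups
```

The `Priority: high` header makes failures stand out in the ntfy app notification.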
12) Helm Dashboard - A simple web UI for managing my Helm releases on Kubernetes. Instead of running helm list and digging through CLI output, I can see all releases, their status, values, and upgrade history in a browser. Handy for a quick overview of what's deployed across namespaces.

Networking and security in a homelab is always a bit of a rabbit hole. I started simple, just exposing a couple of services on open ports, but after a few too many random bots hitting them, I decided to tighten things up.
These days I use Cloudflare as my reverse proxy for anything web-based. It handles my three domains and manages SSL certificates, forwarding traffic to services on local Docker ports. I also run Cloudflare Tunnels to access certain services without needing to open any ports, which makes life a lot simpler.
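A rough sketch of how a tunnel like that gets wired up with cloudflared. The tunnel name, hostnames, and ports here are placeholders, not my actual setup:

```shell
cloudflared tunnel login            # authenticate against the Cloudflare account
cloudflared tunnel create homelab   # creates the tunnel and a credentials file

# ~/.cloudflared/config.yml - map public hostnames to local service ports:
# tunnel: homelab
# credentials-file: /home/user/.cloudflared/<tunnel-id>.json
# ingress:
#   - hostname: notes.example.com
#     service: http://localhost:5230   # placeholder local Docker port
#   - service: http_status:404         # catch-all rule, must come last

cloudflared tunnel route dns homelab notes.example.com
cloudflared tunnel run homelab
```

Since the tunnel dials out to Cloudflare, nothing on the router needs to be opened inbound.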
For internal apps, I've started experimenting with Cloudflare Zero Trust to lock things down even further with its access policies.
My bastion server is protected with Duo MFA for SSH access, and I use Tailscale when I need to connect remotely.
For secrets and passwords, I run Vaultwarden (a lightweight Bitwarden-compatible server) on Kubernetes via Helm. Syncs with my browsers and phone - makes life so much easier than trying to remember everything.
Other things I've put in place:
I use Grafana Loki to pull in logs from the bastion and other key servers, all centralized in Grafana. It helps me spot suspicious activity (and my own mistakes).
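One common way to ship a server's auth logs into Loki is promtail; this is a hedged sketch with placeholder URL and labels, not my exact pipeline:

```shell
# /etc/promtail/config.yml - tail the bastion's auth log and push to Loki:
# server:
#   http_listen_port: 9080
# positions:
#   filename: /tmp/positions.yaml
# clients:
#   - url: http://loki.lan:3100/loki/api/v1/push   # placeholder Loki endpoint
# scrape_configs:
#   - job_name: auth
#     static_configs:
#       - targets: [localhost]
#         labels:
#           job: authlog
#           host: bastion
#           __path__: /var/log/auth.log

promtail -config.file=/etc/promtail/config.yml
```

With the `host` label attached, failed-login queries in Grafana can be filtered per machine.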
I also run Wazuh - an open source security platform for threat detection, integrity monitoring, and log analysis. It runs as a VM on Proxmox with agents on my servers and Kubernetes nodes. Gives me a centralized view of security events across the whole homelab - failed logins, file integrity changes, suspicious processes.

Security is an area I keep learning about. It's very much "good enough for now, but always improving." One day I'll probably add more IDS/IPS tooling, but for a homelab this setup covers most of what I need.
One challenge with running Kubernetes in a homelab is that there's no cloud provider to hand out external IPs for LoadBalancer services. MetalLB solved that problem for me; it's a bare-metal load balancer that gives services real IPs from a pool I define.
I deployed it via Helm and configured it in Layer 2 mode with an IP range from my home network. Now when I create a LoadBalancer service, MetalLB assigns it an IP automatically, and the service's pods are reachable via that IP.
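The Layer 2 configuration is just two small custom resources. A minimal sketch, with a placeholder address range that only needs to be an unused slice of the home LAN:

```shell
# Hypothetical MetalLB Layer 2 setup: a pool of assignable IPs plus an
# L2Advertisement so MetalLB answers ARP for them on the local network.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range outside DHCP scope
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
EOF
```

Any new LoadBalancer service then gets the next free IP from that pool.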
My monitoring stack now runs on Kubernetes using kube-prometheus-stack deployed via Helm. It includes Prometheus, Grafana, Alertmanager, and various exporters. I also run node-exporter on my physical hosts and Proxmox nodes to collect system metrics.
The setup alerts me via email when things go wrong: high CPU, memory issues, disk space running low, or services going down. Helm makes it easy to manage and update the whole stack with a single values file.
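The email side lives in the Alertmanager section of that values file. A sketch of the relevant fragment, with placeholder SMTP host, addresses, and credentials:

```shell
# values.yaml (fragment) - route all alerts to one email receiver:
# alertmanager:
#   config:
#     global:
#       smtp_smarthost: 'smtp.example.com:587'
#       smtp_from: 'alerts@example.com'
#       smtp_auth_username: 'alerts@example.com'
#       smtp_auth_password: 'changeme'     # placeholder, keep out of Git
#     route:
#       receiver: email
#     receivers:
#       - name: email
#         email_configs:
#           - to: 'me@example.com'

# Apply the whole stack with that single values file
helm upgrade --install kps prometheus-community/kube-prometheus-stack \
  -n monitoring -f values.yaml
```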
All my Grafana dashboards are managed through IaC, no manual clicking around. Dashboards are defined in code and deployed automatically, so everything stays consistent and reproducible. I also use cAdvisor for container-level metrics.
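One way dashboards-as-code works with kube-prometheus-stack is the Grafana sidecar convention: the sidecar watches for ConfigMaps carrying a `grafana_dashboard` label and loads the JSON inside them. A sketch with a placeholder dashboard:

```shell
# Hypothetical dashboard ConfigMap; the sidecar imports homelab.json
# automatically, so the dashboard lives in Git, not in clicks.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: homelab-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"   # picked up by the Grafana sidecar
data:
  homelab.json: |
    { "title": "Homelab Overview", "panels": [] }
EOF
```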
Like any homelab, mine is a constant work-in-progress. I'm always trying new tools, breaking things, fixing them (sometimes), and learning along the way. There's plenty running in the background - Reloader for auto-restarting pods on config changes, MySQL, Redis, and probably a few containers I forgot about.
Next on the roadmap is AIOps. I'm thinking about adding centralized log collection with Grafana Loki and Alloy, some kind of smart log-based alerting, and eventually a self-hosted AI engine using Ollama with local LLMs for anomaly detection and root-cause analysis. That's the goal, but I'm not there yet. I'll share the progress in upcoming posts.