Over the past two weeks, I have been revisiting the mystical realm of Kubernetes, a technology that has fascinated me for years. In the past, I have followed simple guides that provided mostly automated scripts to quickly deploy a cluster, then ran a few kubectl commands to deploy simple applications. In this series of posts, I would like to share my journey of setting up a GitOps-managed (more on this in a bit!) k3s cluster with separate staging and production environments. If you’re looking to get started with a similar setup, I will walk you through some of my architecture decisions, implementation details, and the lessons I learned along the way.

Why k3s and GitOps

Before diving into the technical details, let’s take a look at why I chose k3s and GitOps for my Kubernetes home lab. I chose k3s for a few reasons: it is quick and simple to deploy, and there is plenty of great documentation out there. It is very lightweight, allowing it to run on my limited resources, yet it maintains full compatibility with standard Kubernetes. That makes it great for a learning environment, where I want to play with all of the production-grade capabilities but don’t necessarily need the full horsepower of a production environment (that will be the next project!).

As for GitOps, there is just something about a declarative workflow. I have long been a fan of NixOS (you’re welcome for this rabbit hole, see you next year) for this very reason. Instead of running a handful of kubectl commands, I can write out a manifest describing how I want the service to look, push that change to GitHub, and watch the magic happen. By using GitOps principles, every change to the infrastructure is tracked, reviewed, and automatically applied. This gives me an audit trail, easier rollbacks, and a single source of truth for the cluster’s desired state.
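To make “a manifest describing how I want the service to look” concrete, here is a minimal sketch of a Deployment for the podinfo demo app mentioned later in this post. The namespace, replica count, and image tag are illustrative, not copied from my repository:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: staging                 # illustrative namespace
spec:
  replicas: 2                        # illustrative replica count
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: ghcr.io/stefanprodan/podinfo:6.5.4   # example tag; pin whatever is current
          ports:
            - containerPort: 9898                     # podinfo's default HTTP port

Once a manifest like this lives in the Git repository, the GitOps tooling (FluxCD in my case) applies it and keeps the cluster matching that desired state.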

Cluster Architecture

Let’s start with my hardware and network setup. The cluster consists of three nodes:

Control Plane

Worker Nodes

I have two identical worker nodes (VMs) running on my Proxmox host. Each is configured with:

Network Configuration

I chose to keep the networking simple and straightforward, using the k3s defaults:
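For reference, a stock k3s server ships with flannel in VXLAN mode, the Traefik ingress controller, and the ServiceLB load balancer, using 10.42.0.0/16 for pods and 10.43.0.0/16 for services. Spelled out explicitly in a k3s config file, those defaults look roughly like this (a sketch for illustration, not my actual /etc/rancher/k3s/config.yaml):

# /etc/rancher/k3s/config.yaml -- restating the stock defaults explicitly
cluster-cidr: 10.42.0.0/16     # pod network (flannel, VXLAN backend)
service-cidr: 10.43.0.0/16     # ClusterIP service network
flannel-backend: vxlan
# Traefik and ServiceLB are enabled by default; they could be turned off with:
# disable:
#   - traefik
#   - servicelb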

GitOps Implementation

The heart of the setup is my GitOps repository structure. I have organized my code to clearly separate different concerns and environments:

Check out my setup on GitHub: https://github.com/we-r-robots/k3s-gitops

k3s-gitops/
├── apps/                    # Application deployments
│   ├── base/               # Base configurations
│   ├── production/         # Production-specific configs
│   └── staging/            # Staging-specific configs
├── clusters/               # Flux-specific configurations
│   ├── production/
│   └── staging/
└── infrastructure/         # Cluster infrastructure
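The apps/base and per-environment directories follow the standard Kustomize base/overlay pattern: base holds the environment-agnostic manifests, and each overlay pulls them in and patches only what differs. Here is a rough sketch of what a staging overlay could look like (the file and patch names are hypothetical, not taken from my repo):

# apps/staging/kustomization.yaml (hypothetical example)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
resources:
  - ../base                    # shared manifests
patches:
  - path: replicas-patch.yaml  # staging-only tweak, e.g. fewer replicas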

This structure follows best practices, allowing me to:

I am using FluxCD to manage my cluster state. Each environment has its own set of Kustomizations (manifests):

All Kustomizations currently run on a 10-minute sync interval with pruning enabled, ensuring the cluster state stays in sync with the Git repository.
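Wired up that way, a Flux Kustomization for one environment looks roughly like the sketch below. The resource name and GitRepository reference are assumptions based on a standard flux bootstrap; the interval and prune settings match what I described above:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps-staging           # assumed name
  namespace: flux-system
spec:
  interval: 10m                # reconcile every 10 minutes
  prune: true                  # remove resources that disappear from Git
  sourceRef:
    kind: GitRepository
    name: flux-system          # created by flux bootstrap
  path: ./apps/staging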

Secret Management

A critical aspect of any environment is secrets management. In this case, I have decided to use the External Secrets Operator (ESO) to handle it:
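At a high level, ESO watches ExternalSecret resources and materializes ordinary Kubernetes Secrets from an external backend (AWS Secrets Manager in my case, which I will detail in a later post). A bare-bones example might look like the following; the store name, secret names, and remote key are all placeholders:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-credentials        # placeholder
  namespace: staging
spec:
  refreshInterval: 1h          # how often ESO re-reads the backend
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager  # placeholder store backed by AWS Secrets Manager
  target:
    name: app-credentials      # the Kubernetes Secret ESO creates
  data:
    - secretKey: DB_PASSWORD   # key inside the generated Secret
      remoteRef:
        key: staging/app/db    # placeholder path in the external backend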

This setup allows me to:

Current State and Features

Environment Separation

Infrastructure Components

GitOps Workflow

Lessons Learned

Throughout this implementation, I have learned valuable lessons:

  1. K.I.S.S. - This motto rings true for me time and time again. I have a tendency to reach for the end goal before I have even started. Sticking with a simple, easy-to-deploy k3s cluster allowed me to get started, use my existing equipment, and actually start learning!
  2. Infrastructure as Code is my spirit animal. It is hard to describe just how satisfying it is to define a few blocks of code and then watch all the pieces fall into place to match that desired state. It also makes tracking version history and changes so much easier. And having my entire cluster setup synced to GitHub allows me to recreate this cluster again and again (which means I can break things!).
  3. Finally, I am reminded of a tired old sign that used to hang on the wall at a previous employer: the 5 P’s, “Proper Planning Prevents Poor Performance.” I used to chuckle at that sign, mainly because it ironically had a typo in it, but the philosophy has proven valuable. Taking the time to set up proper separation of staging and production and to implement a robust, scalable, and secure system for managing secrets, among other foundational elements, has set this project up for success.

What’s Next?

Monitoring Setup

Database

Security Enhancements

CI/CD Pipeline

Documentation

Conclusion

Building a k3s cluster and learning about GitOps workflows and best practices has been really fun and challenging so far. Even though I haven’t deployed a single application, aside from the recommended podinfo web app to verify the cluster was actually working, it feels really good to take it slow, trudge through mountains of documentation, and build a solid foundation for what is to come. The more I dive into declarative workflows like GitOps, the more enamored I become with the approach. I cannot wait to start on the next steps.

Stay tuned for future posts in this Kubernetes series. I plan to do a deep dive on the specifics of my FluxCD setup, how I connected ESO to AWS Secrets Manager, and the upcoming projects like database management, monitoring stacks, and CI/CD pipeline development. Thanks for coming along on this journey with me.