The Last Stack You'll Ever Build: Vibecoding Your Way Through the 2025 Interview
December 29, 2025
I've been pondering a question lately: in the age of AI, how does a company hire a new infrastructure specialist?
In the olden days (circa 2023), you'd invite them to your office, give them some coffee out of your chipped company-branded mugs, and ask them to live-code some Terraform and Ansible. Maybe you'd ask them to create something in advance—a "take-home test"—and let them demo a full LAMP stack or a Kubernetes cluster on their laptop.
But in the age of AI, that entire take-home test is just one prompt away.
If AI is just rehashing what's already available on the internet, then the value of an engineer isn't in typing the syntax anymore. It's in the vision. So the idea was born: build the last stack I'll ever build. Use the tools that are currently top of the line, but build it AI-first. Fully vibecoded.
This repository is the result of that experiment. It's how I got from a "stupid manifest" to a set of prompts, to design documents, to a full-on automated K3s cluster with ArgoCD on Libvirt/KVM—all in about 4 coffees' worth of time.
Here is how it went down.
1. The Vibe (The Manifesto)
Before a single line of code was generated, I needed to define the soul of the infrastructure. AI is a great worker, but it needs a strong leader. If you ask for "a server," it gives you a server. If you ask for a "Cathedral of Steel," it builds you a fortress.
I started with a Manifesto.
The Philosophy: Order in the Storm
Entropy is the enemy. The digital ocean is corrosive... We do not build "servers." We build Cathedrals of Steel.
We do not "patch holes." We reinforce the hull.
This wasn't just flair; it was the prompt context. It told the AI that "good enough" wasn't acceptable. We aren't hacking together a dev box; we are building a ship to survive the ocean.
2. The Contract (ADRs)
In the past, I'd keep the architecture in my head. But when your junior engineer is an LLM, you need to be explicit. I wrote Architectural Decision Records (ADRs) to lock in the constraints.
From ADR 001: AI-Assisted Development Workflow:
The Protocol:

- **Prompts as Code:** We do not type random questions into chat windows. We create reusable, version-controlled system prompts.
- **The "Intern" Model:** The AI is treated as a tireless Junior Engineer. It does not make architectural decisions; it executes them based on strict instructions.
We defined the rules of engagement before writing the code.
3. The Workforce (Prompts)
I didn't just paste code into ChatGPT. I built a team of specialized agents. I defined "Personas" in the .ai/prompts/ directory to switch the AI's context.
When I needed Terraform, I didn't ask a generalist. I summoned the Terraform Architect:
```markdown
# Persona: Terraform Architect
You are an OpenTofu (Terraform) expert. Your focus is on building modular, scalable, and explicit infrastructure code.

## Principles
1. **Modularity:** Break infrastructure into reusable modules.
2. **Explicit Dependencies:** Use `depends_on` only when necessary...
3. **OpenTofu Standards:** Adhere to modern OpenTofu practices and syntax.
```
By constraining the model, I ensured the output wasn't just functional—it was idiomatic.
4. Vibecoding the Stack
With the philosophy, the rules, and the workforce in place, the actual building process became a conversation.
The Substrate (OpenTofu/Libvirt)
I asked the Terraform Architect to build me a 3-node cluster on KVM. I didn't write the HCL; I reviewed it.
```hcl
# 4. The Virtual Machines
resource "libvirt_domain" "antigravity_node" {
  count = 3
  name  = "antigravity-node-${count.index + 1}"

  memory = 4096 # 4 GiB RAM (value is in MiB)
  vcpu   = 2

  cloudinit = libvirt_cloudinit_disk.commoninit[count.index].id

  network_interface {
    network_name   = "default"
    wait_for_lease = true
  }
}
```
It handled the tedious parts—loops, cloud-init injection, disk sizing—instantly.
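The `cloudinit` attribute above points at a `libvirt_cloudinit_disk` resource that isn't shown. A minimal sketch of what that resource might look like; the hostname convention, the `admiral` user, and the `var.ssh_public_key` variable are illustrative assumptions, not taken from the repo:

```hcl
# Hypothetical sketch: the cloud-init disk each domain above attaches.
# User name and SSH key variable are illustrative assumptions.
resource "libvirt_cloudinit_disk" "commoninit" {
  count = 3
  name  = "commoninit-${count.index + 1}.iso"

  user_data = <<-EOT
    #cloud-config
    hostname: antigravity-node-${count.index + 1}
    users:
      - name: admiral
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - ${var.ssh_public_key}
  EOT
}
```

Each VM gets its own ISO, so per-node values like the hostname can be templated from `count.index`.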
The Configuration (Ansible)
Next, I needed to bootstrap Kubernetes. I summoned the Ansible Guru. The goal: Install K3s idempotently.
```yaml
- name: Build K3s Cluster
  hosts: master
  become: true
  pre_tasks:
    - name: Download K3s binary manually
      get_url:
        url: "https://github.com/k3s-io/k3s/releases/download/{{ k3s_release_version }}/k3s"
        dest: "/usr/local/bin/k3s-{{ k3s_release_version }}"
        mode: "0755" # the binary must be executable
  roles:
    - role: PyratLabs.k3s
```
The AI understood the assignment: "Download binary, link it, run the role." Simple. Clean.
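The "link it" step isn't shown in the playbook excerpt above. A hedged sketch of what that pre_task might look like, assuming the versioned-path convention from the download task:

```yaml
# Illustrative sketch: point the stable path at the versioned binary,
# so upgrades are an atomic symlink swap rather than an overwrite.
- name: Link K3s binary
  ansible.builtin.file:
    src: "/usr/local/bin/k3s-{{ k3s_release_version }}"
    dest: /usr/local/bin/k3s
    state: link
```

Because `file` with `state: link` is idempotent, re-running the play against the same version is a no-op.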
The Autopilot (GitOps)
Finally, the "Last Stack" concept means I should never touch kubectl manually after the initial setup. Everything must be GitOps. I generated the ArgoCD root application:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  source:
    repoURL: 'https://github.com/VuokkoVuorinnen/bulkhead-core.git'
    path: kubernetes/infrastructure
  destination:
    server: https://kubernetes.default.svc # deploy into the local cluster
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
Now, the infrastructure manages itself.
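Under this app-of-apps pattern, the root application syncs whatever lives in `kubernetes/infrastructure`; adding a component means committing another `Application` manifest to Git. A hedged sketch of what one child application could look like; the cert-manager example, chart version, and namespace are illustrative, not from the repo:

```yaml
# Illustrative child application that the root app above would pick up.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.jetstack.io
    chart: cert-manager
    targetRevision: v1.14.0 # illustrative version
    helm:
      values: |
        installCRDs: true
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```

With `prune` and `selfHeal` enabled, deleting the manifest from Git removes the component, and manual drift gets reverted automatically.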
Conclusion
Imagine walking into an office today. You open your laptop, boot up your CLI agent, and say: "Let's build a Cathedral."
You aren't hired for your ability to memorize the syntax of libvirt_domain. You are hired for your ability to orchestrate the AI that writes it. You are hired for your taste, your standards, and your ability to spot when the AI is hallucinating a module that doesn't exist.
The "Last Stack" isn't about the specific tools (K3s, OpenTofu, ArgoCD)—it's about the workflow. It's about moving from "Manual Laborer" to "Fleet Admiral."