
The Accidental Operating System

For years, people criticised Holacracy as too rigid — obsessed with explicitly defined roles, constitutional governance, distributed authority, and transparent accountability. Too much structure for messy, human organisations.

But it turns out that wasn't a bug. It was a feature.

The structure nobody wanted

I spent a decade at Amazon, where structure was the air you breathed. Every team had a clear mandate. Every service had an owner. Every decision had a mechanism. It felt constraining at times, but it worked — at extraordinary scale.

When I moved into smaller organisations, I found the opposite. Roles were fluid, which sounded liberating but usually meant nobody quite knew who owned what. Decisions happened in corridors, or didn't happen at all. Knowledge lived in people's heads and walked out the door when they left. This is what I'd later describe as Mountain One — things work, but only because specific people make them work.

Holacracy tried to fix this. It took the kind of explicit structure that large organisations develop over decades and packaged it as a system anyone could adopt. Defined roles, not job titles. Constitutional governance, not management by charisma. Distributed authority, not delegation that could be revoked on a whim.

The problem was that humans didn't particularly enjoy it. We like ambiguity. We like the informal. We like being able to say "that's not really my job" while simultaneously doing someone else's job because we're good at it. Holacracy demanded a level of explicitness that felt bureaucratic and cold to most teams that tried it.

What agents actually need

AI agents don't thrive in traditional hierarchies. They need clear boundaries, which explicit roles provide. They need decision protocols, which constitutional governance provides. They need autonomy, which distributed authority provides. They need accountability, which transparent ownership provides.

[Figure: Holacracy as an operating system for AI agents, mapping human coordination challenges through Holacracy principles to AI enablers]

Every frustration humans had with Holacracy maps directly to something an AI agent requires. The rigid role definitions that felt dehumanising? An agent needs to know exactly what it's responsible for. The constitutional governance that felt like bureaucracy? An agent needs explicit decision protocols or it will hallucinate its own. The transparent accountability that felt like surveillance? An agent needs clear ownership to know who to report to and what to escalate.
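To make that mapping concrete, here is a minimal sketch of what a Holacracy-style role might look like when written down for an agent rather than a person. The `Role` dataclass and every value in it are hypothetical illustrations of the idea, not part of Holacracy's constitution or of any particular agent framework.

```python
from dataclasses import dataclass

@dataclass
class Role:
    """A Holacracy-style role expressed as explicit agent configuration."""
    name: str                    # what the role is called
    purpose: str                 # why the role exists
    accountabilities: list[str]  # the ongoing activities this role owns
    decision_protocol: str       # how the agent decides within its authority
    escalates_to: str            # where the agent reports when it hits a boundary

# The rigid definition humans disliked is exactly what an agent needs:
# no ambiguity about scope, authority, or escalation.
support_triage = Role(
    name="Support Triage",
    purpose="Route incoming tickets to the right owner within one hour",
    accountabilities=[
        "Classifying tickets by severity",
        "Assigning tickets to owning roles",
        "Flagging tickets that match no known category",
    ],
    decision_protocol="Act autonomously within accountabilities; "
                      "escalate anything outside them rather than improvising",
    escalates_to="Operations Lead",
)
```

The specific fields matter less than the principle: everything a human colleague would leave implicit has to be written down before an agent can act on it reliably.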

The operating system analogy

The insight is this: Holacracy was designed to make human coordination explicit — but in doing so, it created the kind of structure AI agents need to collaborate effectively. It's like designing an operating system for a computer that didn't exist yet.

An operating system manages resources, enforces boundaries, handles permissions, and coordinates processes. That's exactly what Holacracy does for an organisation. The fact that humans found it too rigid is precisely what makes it suitable for agents, which need rigidity to function reliably.

This doesn't mean every organisation should adopt Holacracy. But it does mean that the organisations succeeding with AI aren't the ones with the best AI strategy. They're the ones that already know how to operate with clarity, explicit processes, and distributed decision-making — the ones that have climbed from Mountain One to Mountain Two.

Twenty years early

The future isn't about replacing humans with AI. It's about building systems where humans and agents work side by side, with clear roles, shared rules, and complementary strengths.

Holacracy might have been twenty years early. But it was preparing us for this moment. The organisations that dismissed it as too structured may find themselves scrambling to build exactly that structure when they try to deploy agents at scale. And the few that persisted with it may discover they've been running the right operating system all along — they just didn't know what it was for.