#6
AI Infrastructure

The Quiet Runtime That Ate AI Infrastructure

Three projects dropped this month and they all point at the same thing. The BEAM community has quietly crossed a line.

Code BEAM Lite Vancouver was yesterday. If you follow the talks that actually get accepted at these things — not the ones submitted, the ones selected — you can see a decade of Elixir’s reputation quietly shifting under your feet. “Elixir for web” is barely a conference category anymore. The 2026 circuit is about Elixir for systems: AI orchestration, distributed infrastructure, edge. Something changed. The question is when.

Three Projects, One Pattern

The answer, if you’ve been watching the community closely, is this month.

Three separate projects appeared in the Elixir ecosystem over the past few weeks, built by different people with different goals, and they all crossed the same line.

The first is erl_dist_mcp, an MCP server that gives AI assistants direct access to running BEAM nodes via the distribution protocol — the same wire format your cluster nodes use to find each other and pass messages. There’s something worth sitting with here: when a developer decided “AI assistants should be able to talk to a running system,” the interface they reached for was a protocol Erlang has had since the early nineties. Not a new API. Not a custom bridge. The distribution protocol. It was already there, already correct, already the right shape for the problem.
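To make that concrete: from any Elixir node, talking to another running node over the distribution protocol is a few lines, because discovery and message passing are built in. A minimal sketch — the remote node name is a placeholder, and the :rpc.call below targets the local node so it runs without a cluster:

```elixir
# In a real cluster you'd start a named node (iex --sname assistant --cookie secret)
# and connect with Node.connect(:"app@myhost") — node name is a placeholder here.

# :rpc.call rides the same distribution protocol; pointed at the local
# node it simply executes locally, so this sketch runs standalone.
3 = :rpc.call(node(), :erlang, :+, [1, 2])

# The identical call shape lets a tool inspect a live remote system, e.g.:
#   :rpc.call(:"app@myhost", :erlang, :memory, [:total])
IO.puts("rpc round-trip ok")
```

The point is the shape: nothing here was invented for AI tooling; it is the same interface cluster nodes have always used.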

The second is os_sup_mcp, which takes an OTP Supervisor and turns it into a sandboxed execution environment for AI tools. It spins up isolated child BEAM nodes on demand — full Elixir applications — and exposes them over MCP/JSON-RPC. The supervisor manages their lifecycle: starts them, monitors them, cleans them up. What Python teams are building from scratch under deadline pressure — isolated sandboxes with lifecycle management for AI tool execution — the BEAM already had the primitives for. Someone just noticed.
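The primitive in question is worth seeing. A minimal sketch of supervisor-managed sandboxes — SandboxSupervisor is an illustrative name, not the os_sup_mcp API, and an Agent stands in for a child BEAM node:

```elixir
# A DynamicSupervisor manages workers created on demand.
{:ok, _sup} =
  DynamicSupervisor.start_link(strategy: :one_for_one, name: SandboxSupervisor)

# Spin up an isolated worker per tool call; in os_sup_mcp this would be
# a full child BEAM node, here an Agent keeps the sketch self-contained.
{:ok, pid} =
  DynamicSupervisor.start_child(SandboxSupervisor, {Agent, fn -> %{} end})

# The supervisor tracks the child's lifecycle...
1 = DynamicSupervisor.count_children(SandboxSupervisor).active

# ...and tears it down when the tool call is finished.
:ok = DynamicSupervisor.terminate_child(SandboxSupervisor, pid)
0 = DynamicSupervisor.count_children(SandboxSupervisor).active
```

Start, monitor, clean up — the lifecycle management is the library, not something bolted on around it.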

The third is claude_code, a Hex package that treats Claude agent sessions as GenServers. Native Elixir streams with backpressure, direct integration with LiveView and PubSub, distributed sessions that offload heavy CLI processes to dedicated sandbox servers via Erlang distribution. It’s not a thin wrapper around the Claude API. It’s an opinionated SDK that says: if your AI agent is going to run in production, it should run the way OTP runs things. Supervised, distributed, observable, restartable.
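The "session as GenServer" idea reduces to something like this sketch — AgentSession and its echo reply are illustrative, not the claude_code package's actual API, which streams from the Claude CLI:

```elixir
defmodule AgentSession do
  # Illustrative sketch: an agent session as a supervised, stateful process.
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  # A synchronous call blocks the caller until the session replies.
  def prompt(pid, text), do: GenServer.call(pid, {:prompt, text})

  @impl true
  def init(opts), do: {:ok, %{history: [], opts: opts}}

  @impl true
  def handle_call({:prompt, text}, _from, state) do
    # The real package would stream tokens from the CLI here;
    # echoing keeps the sketch self-contained.
    reply = "echo: " <> text
    {:reply, reply, %{state | history: [text | state.history]}}
  end
end

{:ok, pid} = AgentSession.start_link([])
"echo: hello" = AgentSession.prompt(pid, "hello")
```

Because it is an ordinary GenServer, everything OTP gives processes — supervision, distribution, :observer visibility, restarts — applies to the agent session for free.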

Three projects. Three developers. One direction.

This isn’t a uniquely Elixir instinct. The Python ecosystem is building toward the same destination — LangGraph added stateful, resumable agent graphs specifically to solve the problem of agents that die mid-task and lose their state; Temporal is gaining traction as a dedicated durability layer for AI pipelines; OpenAI’s own Swarm framework is fundamentally a process supervision model dressed in Python syntax. Every major ecosystem is arriving at the same set of problems: how do you keep agents alive across failures, isolate their execution environments, and coordinate them at scale without reimplementing distributed systems from scratch?

The difference is the starting point. Python teams are making architecture decisions right now — choosing between Redis and Postgres for agent state, wiring up external rate limiters to handle backpressure, bolting Temporal onto their stack as a separate failure domain. These are real choices with real operational costs. Elixir developers made equivalents of those choices once, in mix.exs, when they picked OTP. GenServer's handle_call blocks the caller; backpressure is the default, not a library. DynamicSupervisor is the isolation layer. Erlang distribution is the coordination primitive. :observer ships with OTP and shows you in-flight state across your cluster.
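The backpressure claim is checkable in a few lines: a synchronous GenServer call suspends the caller until the server replies, so a slow worker throttles its callers without any external rate limiter. A minimal sketch (SlowWorker is an illustrative name):

```elixir
defmodule SlowWorker do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, nil)
  def work(pid), do: GenServer.call(pid, :work)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call(:work, _from, state) do
    Process.sleep(100)  # simulate a slow operation
    {:reply, :done, state}
  end
end

{:ok, pid} = SlowWorker.start_link(nil)
t0 = System.monotonic_time(:millisecond)
:done = SlowWorker.work(pid)
elapsed = System.monotonic_time(:millisecond) - t0
true = elapsed >= 100  # the caller waited: backpressure by default
```

No Redis, no token bucket — the caller simply cannot outrun the worker.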

The Elixir community isn’t building toward those answers. It’s finding out the answers were already there.

What the Conference Circuit Is Actually Saying

This is what makes the 2026 conference circuit readable as signal rather than noise. The talks people propose to a conference reflect the problems they solved six to twelve months earlier. The talks conferences accept reflect what the committee thinks is directionally true for the industry. What’s getting accepted in 2026 — at Vancouver, and almost certainly at ElixirConf EU in Málaga next month — isn’t “how to build a Phoenix app.” It’s how to build systems that coordinate AI workloads at scale, survive failures, and stay observable when things go sideways.

That’s a different vocabulary than “Elixir for web.” It’s the vocabulary of infrastructure.

The move is happening gradually and then all at once. The inflection isn’t a framework release or a José Valim keynote. It’s three developers independently concluding that the right interface for AI agents is an OTP Supervisor — and building three tools that prove it.

Why Boring Elixir Is the Reason Anyone Trusts This

Here’s the thing nobody puts in the headline: none of this works as an argument if the boring Elixir track record doesn’t exist.

The Phoenix apps serving JSON with five-nines uptime. The Oban jobs grinding through millions of records without drama. The LiveView dashboards that just never go down. These aren’t the interesting stories. Nobody writes conference talks about them. They are, however, the reason AI infrastructure developers are looking at the BEAM in the first place.

Orchestration durability is only a selling point if the runtime it runs on is already trusted. The BEAM is trusted because it has been boring in production for a decade — reliable, concurrent, fault-tolerant in ways that don’t require heroics from the developers running it. That track record is the foundation. The AI infrastructure moment is built on top of it.

The instinct in 2026 is to chase the novel: the MCP server, the distributed agent session, the LiveView token stream. Those are real and interesting. But the deeper story — the one the conference circuit is trying to tell, if you listen to the whole thing — is that a runtime designed to keep telephone switches alive in the 1980s turns out to be exactly the right shape for keeping AI agents alive in 2026.

Not because it was designed for it. Because reliable is reliable.


ElixirConf EU is April 23-24 in Málaga. If the schedule tracks the same direction, we’ll have a cleaner picture of where this goes.