#4
MCP / AI-Native Backends

Your Phoenix App Just Became an AI-Operable API

Ectomancer turns your Ecto schemas into MCP tools. The schema is the contract — and that changes what “AI-native backend” actually means.

There’s a library that shipped this week called Ectomancer. It does one thing: it reads your Ecto schemas and automatically exposes them as Model Context Protocol (MCP) tools. Point Claude — or any MCP-aware agent — at a Phoenix application running Ectomancer, and the agent can query your data model conversationally without you writing a single tool definition. The schema is the contract. The schema has always been there. You just never needed it to talk to an AI before.

That’s the detail that makes this interesting. Not the automation — the implication.

What MCP Actually Is

MCP (Model Context Protocol) is an open standard that Anthropic introduced in late 2024. The short version: it’s a protocol for AI assistants to connect to external tools, data sources, and services in a standardized way. Instead of every application reinventing how to give an AI access to its data, MCP specifies the shape of the conversation. The AI asks: “what can you do?” The server responds with a list of tools and their schemas. The AI calls the tools. Results come back.

Think of it as function calling with a discovery layer — the AI doesn’t need to be pre-configured with your API’s endpoints. It asks, and you tell it.
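On the wire, that conversation is JSON-RPC 2.0. A minimal exchange looks like the following — the `tools/list` and `tools/call` methods come from the MCP specification, while the `list_users` tool and its input schema are a hypothetical example of what a server might advertise:

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}

{"jsonrpc": "2.0", "id": 1, "result": {"tools": [
  {"name": "list_users",
   "description": "List rows from the users table",
   "inputSchema": {"type": "object",
                   "properties": {"limit": {"type": "integer"}}}}]}}

{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "list_users", "arguments": {"limit": 10}}}
```

The first message is the discovery step; the second is the server describing its tools with JSON Schema; the third is the AI invoking one. Nothing about your application is hard-coded into the client.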

MCP published its 2026 roadmap a few weeks ago, with transport scalability, agent-to-agent communication, and enterprise auth on the docket. It’s not a niche experiment anymore. It’s becoming infrastructure.

The Elixir Angle Is Structural

The Elixir community has shipped three MCP server libraries in the past few months: anubis_mcp, vancouver, and NexusMCP — the last one built explicitly around per-session GenServer architecture to solve the session isolation problem the earlier two had.

That problem is worth naming: it’s tempting to build a naive MCP server around a single shared process. It’s simpler. It’s wrong. When you have ten simultaneous AI agent sessions, shared state between them means one agent’s query can observe or corrupt another agent’s context. This is a footgun Phoenix developers recognize from LiveView — shared state in a real-time system bites you hard.

The per-session GenServer pattern fixes it cleanly: each MCP client connection gets its own supervised process, isolated state, and independent lifecycle. If that session crashes, the supervisor restarts it without touching the others. This is OTP doing what it was built for, applied to a new domain.
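A minimal sketch of that pattern in plain OTP — module names here are illustrative, not the actual API of NexusMCP or any of the libraries above:

```elixir
defmodule MCPSession do
  # One process per MCP client connection: isolated state,
  # independent lifecycle, supervised restart on crash.
  use GenServer

  def start_link(session_id),
    do: GenServer.start_link(__MODULE__, session_id, name: via(session_id))

  defp via(session_id), do: {:via, Registry, {MCPSession.Registry, session_id}}

  @impl true
  def init(session_id), do: {:ok, %{id: session_id, context: %{}}}

  # Each call touches only this session's state; if this process
  # crashes, the supervisor restarts it without disturbing others.
  @impl true
  def handle_call({:tool_call, name, args}, _from, state) do
    {:reply, dispatch(name, args), state}
  end

  defp dispatch(_name, _args), do: {:error, :not_implemented}
end

# Supervision tree: a Registry for session lookup plus a
# DynamicSupervisor, so sessions start on demand as clients connect.
children = [
  {Registry, keys: :unique, name: MCPSession.Registry},
  {DynamicSupervisor, name: MCPSession.Supervisor, strategy: :one_for_one}
]

# Starting a session when a new client connects:
# DynamicSupervisor.start_child(MCPSession.Supervisor, {MCPSession, "session-abc"})
```

The `{:via, Registry, ...}` naming means any part of the system can route a message to the right session by ID, without holding a pid.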

Ectomancer sits on top of this and takes the abstraction one level higher. The thing you’ve always had — your Ecto schema — becomes the interface. No additional code required.
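That contract is just the schema you already write. A standard Ecto schema — nothing Ectomancer-specific here — declares its own table, fields, and types, which is exactly the metadata a tool definition needs:

```elixir
defmodule MyApp.Accounts.User do
  use Ecto.Schema

  # The schema is self-describing: table name, fields, types.
  schema "users" do
    field :email, :string
    field :confirmed_at, :utc_datetime
    timestamps()
  end
end

# Ecto exposes this metadata through its reflection API, which is
# how a library like Ectomancer can generate tool schemas at runtime:
MyApp.Accounts.User.__schema__(:fields)
# [:id, :email, :confirmed_at, :inserted_at, :updated_at]
```

The `__schema__/1` reflection functions are part of Ecto proper; any code can walk a schema module and recover its full shape.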

What “AI-Native Backend” Actually Means

The phrase “AI-native” has been deployed so aggressively in the past two years that it’s lost meaning. Most things described as AI-native are just REST APIs with a GPT call somewhere in the request path.

Ectomancer points at something more honest: an AI-native backend is one where the AI can discover and interact with your data model directly, without you mediating every query through a hand-written tool. The difference is surface area. A REST API exposes exactly what you decided to expose, in exactly the shape you decided to expose it. An MCP-enabled app exposes what it is — and lets the AI reason about how to use it.

That’s a different architectural posture. It trades explicit control for adaptive capability. You lose the guarantee that the AI will only touch what you’ve pre-approved, and you gain the ability to have the AI do things you didn’t specifically anticipate.

For internal tooling, data exploration, and developer-facing products, that tradeoff is usually worth taking. For consumer-facing apps handling sensitive data, you want to be careful about what schemas you expose — MCP access controls are still maturing, and the default posture is permissive.

The Practical Path

If you have a Phoenix application with Ecto schemas and you want to experiment with MCP access, the path right now is:

  1. Add ectomancer to your deps and run mix deps.get
  2. Mount the MCP endpoint in your router (the library handles the protocol)
  3. Point Claude Desktop or any MCP client at the endpoint URL
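As a sketch of steps 1 and 2, assuming a Hex package named `ectomancer` with a plug-style mount point — the version, module name, and options below are assumptions standing in for whatever the library actually exports, so check its README:

```elixir
# mix.exs — add the dependency (version constraint is illustrative)
defp deps do
  [
    {:ectomancer, "~> 0.1"}
  ]
end

# lib/my_app_web/router.ex — mount the MCP endpoint.
# `forward/4` is standard Phoenix; `Ectomancer.Router` and the
# `:schemas` option are hypothetical names for the library's plug.
scope "/mcp" do
  forward "/", Ectomancer.Router, schemas: [MyApp.Accounts.User]
end
```

Listing schemas explicitly at the mount point, if the library supports it, is also your first access-control lever: expose only the parts of the data model you want an agent to see.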

You’ll immediately be able to ask the AI questions like “what tables does this app have?” and “show me the last ten users with confirmed emails” — without writing a single query or tool definition. The AI infers what’s available from the schema, generates the queries, and calls back through the MCP tool interface.

There’s obvious room for this to go wrong in production. Schema-level access is broader than most applications want to expose. But as a development-time tool for exploring data, or as the foundation for internal AI-powered admin interfaces, it’s genuinely useful today.

The Take

Ectomancer is not the library that changes everything. It’s one useful tool built on top of the right structural foundation: MCP as protocol, GenServer as session primitive, Ecto schemas as the self-describing contract.

What it signals is more interesting than the library itself: the BEAM process model, which was already a good fit for real-time applications and distributed systems, turns out to be an excellent fit for AI agent infrastructure. Long-lived connections, per-client isolation, supervision, distribution — these are the properties a production MCP server needs. Elixir had them before anyone was asking for them in this context.

The ecosystem moving fast here isn’t surprising. It’s the right tool hitting the right moment.