Your Running Phoenix App Is Already an MCP Server
Tidewave adds three lines to your Phoenix endpoint and turns your live app into an MCP server — so AI coding agents can query your database, eval code, and read logs against the actual runtime, not a static snapshot.
The AI coding agent problem, stated plainly: an agent that only knows your source files doesn’t understand your running system. It can read your Ecto schemas and infer what the database probably contains, but it can’t see what it actually contains. It can guess at your application’s runtime behavior, but it can’t observe it. Static analysis hits a ceiling, and the ceiling shows up exactly when the questions get interesting.
Tidewave — built by José Valim and published on hex.pm — takes a different approach. Add three lines to your Phoenix endpoint in dev mode, and your running application becomes an MCP server. The coding agent isn’t reading files anymore. It’s querying your live system.
if Mix.env() == :dev do
  plug Tidewave
end
That’s the entire integration. From there, any MCP-compatible editor or agent gains access to a set of runtime tools: project_eval to execute arbitrary Elixir in your app’s context, execute_sql_query to query your database through your Ecto repos, get_logs to pull application telemetry, get_ecto_schemas to introspect your actual schema definitions. Not approximations. The live thing.
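On the wire, these tools are invoked through standard MCP JSON-RPC. A sketch of what an agent's `tools/call` request for `project_eval` might look like — the `MyApp` module names are hypothetical, and the exact argument key is an assumption about Tidewave's tool schema, not taken from its docs:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "project_eval",
    "arguments": {
      "code": "MyApp.Repo.aggregate(MyApp.Accounts.User, :count)"
    }
  }
}
```

The response carries the evaluated result back as tool output, which is how the agent gets a live answer instead of a guess.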
What makes this land so cleanly in Phoenix isn’t the MCP protocol — it’s that the BEAM was always designed to be interrogated while running. Erlang’s original runtime included remote shell access, observer tooling, and :sys.get_state/1 for peeking at any GenServer’s live state. IEx.pry, IEx.break!, the ability to attach a remote shell to a production node and evaluate code in place — these aren’t bolt-ons. They’re design assumptions that have been true since before most developers working today started writing code.
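That introspection is available to anyone with an IEx session, no tooling required. A minimal sketch using an Agent (which is itself a GenServer under the hood):

```elixir
# :sys.get_state/1 works on any OTP-compliant process.
# Start a process holding some state...
{:ok, pid} = Agent.start_link(fn -> %{requests: 0} end)

# ...and peek at its live state from the outside, without stopping it.
:sys.get_state(pid)
#=> %{requests: 0}
```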
Tidewave surfaces that introspectability through a protocol that AI coding agents already speak. The agent doesn’t need to simulate your application’s behavior; it can just run code against it and observe the result. That’s a fundamentally different capability than what LSP-based tooling provides.
The implementation of project_eval is worth a look. When an agent calls it, Tidewave spawns a monitored process to run the evaluation, captures both the result and any IO output, and enforces the timeout through plain OTP:
# `parent` is bound to self() earlier in the function
{pid, ref} = spawn_monitor(fn ->
  send(parent, {:result, eval_with_captured_io(code, arguments, json?, inspect_opts)})
end)

receive do
  {:result, result} ->
    {:ok, result}

  {:DOWN, ^ref, :process, ^pid, reason} ->
    {:error, "Process exited with reason: #{Exception.format_exit(reason)}"}
after
  timeout ->
    Process.demonitor(ref, [:flush])
    Process.exit(pid, :brutal_kill)
    {:error, "Evaluation timed out after #{timeout} milliseconds."}
end
No third-party concurrency primitives. No async task library. Just spawn_monitor and a receive block with an after clause. The BEAM handles the rest. If the evaluation crashes, the monitor delivers a :DOWN message to the parent carrying the exit reason. If it hangs, the after clause fires: Process.demonitor(ref, [:flush]) discards any late :DOWN message, and Process.exit(pid, :brutal_kill) kills the runaway process outright. The parent process is never at risk.
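The same pattern works anywhere you need to run untrusted or unbounded work with a hard deadline. A minimal, runnable sketch — the module and function names here are illustrative, not Tidewave's:

```elixir
defmodule SafeEval do
  # Run `fun` in its own monitored process; return {:ok, result},
  # {:error, reason} on crash, or {:error, :timeout} past the deadline.
  def run(fun, timeout \\ 1_000) when is_function(fun, 0) do
    parent = self()

    # The work runs in an isolated process; a crash there can't touch us.
    {pid, ref} = spawn_monitor(fn -> send(parent, {:result, fun.()}) end)

    receive do
      {:result, result} ->
        # Success: drop the monitor and flush any pending :DOWN message.
        Process.demonitor(ref, [:flush])
        {:ok, result}

      {:DOWN, ^ref, :process, ^pid, reason} ->
        # The evaluation crashed before it could send a result.
        {:error, reason}
    after
      timeout ->
        Process.demonitor(ref, [:flush])
        Process.exit(pid, :brutal_kill)
        {:error, :timeout}
    end
  end
end

SafeEval.run(fn -> 1 + 1 end)
#=> {:ok, 2}

SafeEval.run(fn -> Process.sleep(:infinity) end, 100)
#=> {:error, :timeout}
```

Nothing here is specific to AI tooling; it is plain OTP, which is the point.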
This is the pattern that makes the BEAM a uniquely good substrate for AI tools. Process isolation means an agent’s project_eval call can’t corrupt your application’s state — each evaluation runs in its own process, lives and dies independently, and can be killed without affecting anything else. The actor model that makes Elixir good for concurrent web applications is the same thing that makes it safe to hand an AI agent a Code.eval_string/2 with real production modules loaded.
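Code.eval_string/2 itself is a small surface. It takes a string of Elixir and optional bindings, evaluates it with every loaded module in scope, and returns the result alongside the final bindings — here with illustrative inputs:

```elixir
# Evaluate a string of Elixir against bindings; returns {result, bindings}.
{result, _bindings} = Code.eval_string("a + b", a: 1, b: 2)
result
#=> 3
```

Wrapped in an isolated, monitored process as above, that one function call is most of what a runtime eval tool needs.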
The Tidewave docs are honest about what the MCP interface doesn’t include. In-browser agents, point-and-click prompting, and Figma integration are all Tidewave Web features — the product behind the protocol. The MCP server is a subset, intentionally. You get runtime intelligence; the visual layer is a separate product. For a lot of teams, the MCP server is what they actually need, and it costs nothing beyond adding the plug.
The comparison to LSP is worth dwelling on. Language server tooling is static by design — it was built for IDEs navigating source files, and its mental model is file, line, and column. Tidewave’s mental model is module and function. If the agent needs docs for MyApp.Accounts.get_user!/1, it calls get_docs with the module and function name. If it needs to know what your users table actually contains right now, it calls execute_sql_query. Neither of those questions has a good LSP answer.
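That module-and-function addressing is native to the BEAM: compiled modules carry their own documentation, retrievable at runtime with Code.fetch_docs/1 — no file path involved. A sketch against the standard library:

```elixir
# Docs are read from the module's compiled .beam chunk, keyed by
# {kind, name, arity} — not by file, line, and column.
{:docs_v1, _anno, :elixir, _format, _moduledoc, _meta, docs} = Code.fetch_docs(Enum)

# Look up Enum.map/2 by name and arity alone.
{{:function, :map, 2}, _anno, _signature, %{"en" => doc}, _fn_meta} =
  Enum.find(docs, &match?({{:function, :map, 2}, _, _, _, _}, &1))

is_binary(doc)
#=> true
```

A get_docs-style tool can sit directly on top of this, which is why the runtime approach needs no index of the source tree at all.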
The BEAM has been runtime-first for forty years. Tidewave is making that legible to the agents.
Enjoy this issue?
Get ElixirLens every Monday — sharp takes on Elixir and the trends shaping it.