Ep. 03·March 1, 2024·Cody Feda

MCP

Structural

The USB-C port that lets AI plug into your tools

tl;dr

Model Context Protocol (MCP) is an open standard developed by Anthropic that defines a uniform interface for connecting AI models to external tools, data sources, and services - replacing ad-hoc function-calling integrations with a single protocol that any compliant host can speak and any server can implement.

The problem before MCP

Before MCP, every AI tool integration was bespoke. If you wanted your LLM to query a database, you wrote custom code to define the tool schema, handle authentication, parse the response, and inject the result back into the conversation. If you wanted it to search the web, you wrote different custom code. If you changed LLM providers, you rewrote everything.
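To make the duplication concrete, here is a hedged sketch of the pre-MCP status quo: two providers, two incompatible shapes for the same logical tool. The schemas below mimic the general style of OpenAI's and Anthropic's function-calling formats, but the field details are illustrative, not exact.

```python
# Same logical tool ("query_db"), defined twice because each
# provider expects its own schema shape. Illustrative only.
openai_style = {
    "type": "function",
    "function": {
        "name": "query_db",
        "parameters": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
        },
    },
}

anthropic_style = {
    "name": "query_db",
    "input_schema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
    },
}

# One tool, two integrations to write, test, and maintain.
assert openai_style["function"]["name"] == anthropic_style["name"]
```

Multiply this by every tool and every provider, and the maintenance burden compounds quickly.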

This was the state of most AI tool ecosystems circa 2023: a tangle of proprietary integrations, each requiring its own maintenance, each slightly broken in its own unique way.

MCP is an attempt to USB-C the problem. Define one protocol. Build to the protocol once. Plug in anything.

What MCP actually defines

MCP is a client-server protocol. The host is an AI application (like Claude Desktop, Cursor, or your custom app). The client is a component inside the host that speaks MCP. The server is any external service that exposes its capabilities over MCP.

Servers expose three kinds of primitives:

  • Tools: Functions the AI can call (run this SQL query, send this email, create this file)
  • Resources: Data the AI can read (the contents of this file, this database row, this API response)
  • Prompts: Reusable prompt templates the server offers to the host
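To illustrate, here is a hedged sketch of what a server might advertise for each primitive. The field names follow the shapes used by MCP's `tools/list`, `resources/list`, and `prompts/list` results, but the specific names (`run_query`, the log file URI, `summarize_table`) are made up for this example.

```python
import json

# A tool: a callable function, described with a JSON Schema
# so the host knows what arguments to supply.
tool = {
    "name": "run_query",  # hypothetical tool name
    "description": "Run a read-only SQL query",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

# A resource: readable data, addressed by URI.
resource = {
    "uri": "file:///logs/app.log",  # hypothetical resource
    "name": "Application log",
    "mimeType": "text/plain",
}

# A prompt: a reusable template the host can offer to users.
prompt = {
    "name": "summarize_table",  # hypothetical template
    "description": "Summarize the rows of a table",
    "arguments": [{"name": "table", "required": True}],
}

print(json.dumps({"tools": [tool]}, indent=2))
```

The host discovers these by listing each primitive type after connecting; it never needs server-specific code to learn what a server can do.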

The wire protocol uses JSON-RPC 2.0 over stdio (for local processes) or HTTP with Server-Sent Events (for remote servers). The host and server negotiate capabilities on connection, so a basic server doesn't need to implement features it doesn't support.
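The capability negotiation happens in the `initialize` exchange. Below is a sketch of the two messages involved, assuming the message shapes from the MCP specification; the protocol version string and capability contents vary by spec revision, and the client/server names are placeholders.

```python
import json

# The host's client opens the connection with an initialize
# request, declaring its protocol version and capabilities.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
    },
}

# A minimal server replies declaring only the "tools"
# capability - so the host knows not to request resources
# or prompts from it.
initialize_result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

# Over stdio, each message is serialized as a line of JSON.
wire = json.dumps(initialize_request)
assert json.loads(wire)["method"] == "initialize"
```

Because the server only declares what it supports, a bare-bones tools-only server stays bare-bones: it never has to stub out features it doesn't implement.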

Why it's gaining traction

The network effect is the interesting part. As more tool vendors build MCP servers, every MCP-compatible AI host gets access to those tools automatically. As more AI hosts adopt MCP, every MCP server author gets access to all those users automatically.

By early 2025, hundreds of MCP servers existed: filesystem access, GitHub, Slack, Postgres, browser automation, code execution sandboxes. The Claude Desktop client could be extended with any of them by adding a JSON config entry pointing to the server.
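A config entry of that kind looks roughly like the following. This is a hedged sketch of the Claude Desktop `mcpServers` config format; the filesystem server package name is real, but the server key and the local path shown are placeholders you would substitute for your own.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/directory"
      ]
    }
  }
}
```

The host launches the listed command as a local subprocess and speaks MCP to it over stdio - no code changes to the host itself.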

This is the same dynamic that made npm and pip successful. The protocol itself is relatively simple - the value is in the ecosystem it enables.

Limitations to understand

MCP is a young protocol. As of early 2025, authentication for remote servers is still being standardized. Multi-server orchestration (having an AI coordinate across several MCP servers simultaneously) is handled inconsistently between hosts. The protocol's security model - especially around what a malicious MCP server could do to a host - is an active area of concern.

It's also worth noting that MCP is a protocol for integration, not for intelligence. An MCP server can give an AI the ability to query your database, but it doesn't make the AI smarter about what to query or how to interpret the results.

Structural verdict

MCP is genuinely structural - it's doing real integration work that would otherwise require custom code. But it's not load-bearing yet at the ecosystem level: most production AI deployments still use bespoke function-calling integrations, and MCP's success depends on continued adoption by both tool vendors and AI platforms. Call it structural-with-upside.
