Why AI infrastructure companies are lining up behind Anthropic's MCP


The hype always outpaces the tooling, and we're seeing that happen once again with agentic AI; glorious visions of the future abound, while practical solutions for actually making those visions happen are in much shorter supply. That's just one reason why Salesforce doesn't expect to see meaningful revenue from Agentforce until next year, but a new protocol is gaining momentum as a vendor-neutral way to make AI agents much easier to implement.

Model Context Protocol (MCP) was introduced last November by Anthropic, which called it "an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools." After kicking the tires for a few months, vendors are jumping on board: Last week Microsoft announced that it would add support for MCP into Copilot Studio, and on Tuesday Cloudflare announced that it now supports remote MCP servers on its infrastructure.

The analogy isn't perfect, but MCP is a little like AI's version of the API, which made web-based computing the standard for just about everything by allowing software applications to talk to each other over the internet. The protocol takes natural-language input from a large language model and gives MCP clients (apps running on your laptop or phone) a standard way to find and retrieve data stored on servers running MCP, which could allow AI agents to take actions based on that data.
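To make that concrete: under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. The sketch below shows roughly what a client's request to invoke one of a server's tools looks like; the `tools/call` method name follows the MCP specification, while the tool name and arguments are hypothetical examples.

```python
import json

# A client asks an MCP server to run a tool it previously advertised.
# "query_database" and its arguments are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# Serialize to the wire format the transport (stdio or HTTP/SSE) carries.
wire_message = json.dumps(request)
print(wire_message)
```

The server's response comes back as a JSON-RPC result keyed to the same `id`, which is what lets an agent match tool output to the action it requested.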

"When connecting to an MCP server, actions and knowledge are automatically added to the agent and updated as functionality evolves. This simplifies the process of building agents and reduces time spent maintaining the agents," Microsoft said last week.

Until recently, most developers who wanted to use MCP in their apps needed to set up an MCP server locally, either on their own machine or on a local network. That is impractical for most users, said Rita Kozlov, vice president of product at Cloudflare, in an interview Tuesday.

For example, Cursor supports MCP to let users of its coding editor query databases or create pull requests on GitHub using natural-language commands, but the developer has to either run that server on their own machine or manage a remote server over SSE transport.
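The difference between those two setups shows up in the client's server configuration. A hedged sketch of what such a config might look like (the exact file name and schema vary by client and version; the server names, package, and URL here are hypothetical): a local server is a command the client spawns and talks to over stdio, while a remote one is just an endpoint URL.

```python
import json

# Local: the MCP client launches the server process itself.
local_server = {
    "command": "npx",
    "args": ["-y", "example-mcp-server"],  # hypothetical package name
}

# Remote: the client connects to a hosted SSE endpoint instead.
remote_server = {
    "url": "https://example.com/sse",  # hypothetical endpoint
}

config = {
    "mcpServers": {
        "github-local": local_server,
        "github-remote": remote_server,
    }
}
print(json.dumps(config, indent=2))
```

Cloudflare's pitch is that the second shape, plus managed auth, is all a developer should ever have to hand their users.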

Cloudflare's new service allows those developers to set up an MCP server on Cloudflare Workers that is managed by Cloudflare and allows MCP clients to log in from wherever they are: "People simply sign in and grant permissions to MCP clients using familiar authorization flows," Cloudflare said Tuesday.

"What we believe is that we'll actually see businesses pop up in a way that are actually agent or MCP-native that didn't exist before, where you can have a service that's designed around basically being entirely called on by LLMs," Kozlov said.

Like most AI agents themselves, MCP isn't quite ready for enterprise prime time.

Several pieces of the puzzle need to be solved before MCP becomes a foundational layer of the AI internet, as this deep dive from a16z's Yoko Li — written last week before Cloudflare's announcement — points out. Cloudflare's remote MCP servers check off two of the obstacles on that list: multitenancy and authorization, the latter handled through OAuth support.

Still, "MCP lacks a built-in permissions model, so access control is at the session level — meaning a tool is either accessible or completely restricted," Li wrote, and that's a big requirement for most enterprise users. And payment security remains an unsolved problem, according to Kozlov, which would make it harder to use MCP in applications that allow people to order something with natural-language commands or authorizes an agent to start payment on an open invoice.

Still, most companies have found it much too hard to build and deploy generative AI applications — let alone agentic AI applications — in part because they require so much custom work to connect LLMs, data sources, and end users. MCP could make that process much easier.

"If done right, MCP could become the default interface for AI-to-tool interactions and unlock a new generation of autonomous, multi-modal, and deeply integrated AI experiences," Li wrote.

(This post originally appeared in the Runtime newsletter on March 25. Sign up here to get more enterprise tech news three times a week.)
