Since Anthropic released the Model Context Protocol in November 2024, it's become one of the most discussed topics in AI development circles, with industry publications describing it as "suddenly on everyone's lips" and "the new HTTP for AI agents."
The industry response has been remarkable: OpenAI is adding MCP support across their products, Microsoft has integrated it into Copilot Studio, AWS is supporting it in Amazon Bedrock, and both PydanticAI and Databricks have announced full implementations.
This unprecedented collaboration between competitors signals something important: MCP isn't just another technical specification—it's solving fundamental problems that benefit the entire AI ecosystem.
In 2025, it's widely recognised that large language models (LLMs) are genuinely powerful tools. However, they become far more powerful when tightly connected to relevant information and context.
Without this connection, LLMs run into three familiar limitations:
- Hallucination – making confident assertions without checking available facts.
- Lack of proper tooling – having no way to query or act on the systems around them.
- The context management problem – having access to information isn't enough if you can't effectively retrieve the relevant parts at the right time.
Anthropic's Model Context Protocol (MCP) elegantly addresses all three limitations by creating a standardised way for LLMs to access information, tools, and manage context when needed.
At its core, MCP (Model Context Protocol) is a standardised language and framework that connects LLMs to the information they need when they need it.
Developed by Anthropic and released as an open-source project in November 2024, MCP provides "a simpler, more reliable way to give AI systems access to the data they need" while enabling persistent context management beyond the limitations of traditional context windows.
MCP uses a straightforward client-server architecture: a host application (such as Claude Desktop) runs one or more clients, each maintaining a dedicated connection to a server that exposes data sources or tools.
This decoupled architecture creates a flexible system where LLMs can request information or tool access precisely when needed, rather than trying to incorporate everything into the initial prompt. By separating context management from tool execution, MCP enables more efficient and maintainable AI systems that can evolve independently.
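To make the decoupling concrete, here is a minimal sketch of the server side of that split. The method names ("tools/list", "tools/call") come from the MCP specification, but the dispatch table and the search_web stub are hypothetical illustrations, not the official SDK:

```python
import json

def search_web(query: str) -> str:
    # Stand-in for a real tool implementation (e.g. a web search API call).
    return f"results for {query!r}"

# The server maps protocol methods to handlers; the client never needs to
# know how a tool is implemented, only how to ask for it.
HANDLERS = {
    "tools/list": lambda params: {"tools": [{"name": "search_web"}]},
    "tools/call": lambda params: {"content": search_web(**params["arguments"])},
}

def handle(raw: str) -> str:
    """Dispatch one JSON-RPC 2.0 request to the matching handler."""
    req = json.loads(raw)
    result = HANDLERS[req["method"]](req.get("params", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

reply = handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}')
print(reply)
```

Because the handler table lives entirely on the server, tools can be added, fixed, or swapped without touching the client or the model prompt.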
MCP addresses three fundamental challenges in AI system development: grounding responses in up-to-date facts, giving models the ability to use external tools, and managing context so the right information is available at the right time.
What makes MCP powerful is its elegant approach to these problems.
Communication language
The protocol layer uses four JSON-RPC 2.0 message types:
- requests, which expect a response
- results, the successful replies to requests
- errors, returned when a request fails
- notifications, one-way messages that expect no reply
This minimal set creates clear, predictable communication patterns between MCP clients and servers, similar to how well-designed APIs function in modern software engineering.
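The four message shapes are easiest to see side by side. The method names below ("resources/read", "notifications/resources/updated") follow the MCP specification; the payload contents are illustrative:

```python
import json

# A request carries an id and expects a reply.
request = {"jsonrpc": "2.0", "id": 1, "method": "resources/read",
           "params": {"uri": "file:///notes.txt"}}

# A result answers a request, matched by id.
result = {"jsonrpc": "2.0", "id": 1,
          "result": {"contents": [{"uri": "file:///notes.txt", "text": "..."}]}}

# An error also answers a request, using standard JSON-RPC error codes.
error = {"jsonrpc": "2.0", "id": 1,
         "error": {"code": -32601, "message": "Method not found"}}

# A notification has no id and gets no reply.
notification = {"jsonrpc": "2.0", "method": "notifications/resources/updated",
                "params": {"uri": "file:///notes.txt"}}

for msg in (request, result, error, notification):
    print(json.dumps(msg))
```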
Message transport
The transport layer offers two standardised methods:
- stdio, for servers running as local processes
- HTTP with Server-Sent Events (SSE), for remote servers
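For the stdio transport, each JSON-RPC message travels as a single line of JSON over the server process's standard input and output. A stdlib-only sketch of that framing (the StringIO object stands in for a real subprocess pipe):

```python
import io
import json

def write_message(stream, msg: dict) -> None:
    # Newline-delimited framing: one JSON document per line.
    stream.write(json.dumps(msg) + "\n")

def read_message(stream) -> dict:
    return json.loads(stream.readline())

pipe = io.StringIO()  # stands in for a subprocess's stdin/stdout pipe
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "ping"})
pipe.seek(0)
msg = read_message(pipe)
print(msg["method"])
```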
Context management
MCP's architecture enables strategic context management through:
- on-demand retrieval of resources, pulled in only when relevant
- tool results that enter the context at the moment they're needed
- persistent context maintained across a session, rather than front-loaded into a single prompt
In essence, MCP embodies the principle that it's not the size of the context window that matters; it's how you use it. Rather than simply racing to expand context windows (as many AI companies were doing), Anthropic developed a more elegant solution that enables models to strategically access and manage relevant information when needed.
This approach directly addresses the "U-shaped retrieval" challenge – where models struggle to access information in the middle of large context windows.
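A rough illustration of the "access on demand" idea. The resource store and URIs here are hypothetical, not part of the protocol; the point is that only the relevant resource enters the context window, instead of every document at once:

```python
# Hypothetical resource store a server might expose via resources/read.
RESOURCES = {
    "docs://pricing": "Plans start at $10/month.",
    "docs://security": "All data is encrypted at rest.",
    "docs://roadmap": "Q3 focus is multi-region support.",
}

def read_resource(uri: str) -> str:
    """MCP-style on-demand fetch: one resource, by URI, when needed."""
    return RESOURCES[uri]

def build_prompt(question: str, uri: str) -> str:
    # Only the resource relevant to this question is placed in context,
    # keeping the window small and the key facts near the question.
    return f"Context: {read_resource(uri)}\n\nQuestion: {question}"

prompt = build_prompt("How much does it cost?", "docs://pricing")
print(prompt)
```

Contrast this with stuffing all three documents into every prompt: the on-demand version keeps the context short, which is exactly what sidesteps the mid-window retrieval problem.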
Use MCP when you need:
- access to fresh or real-time information beyond the model's training data
- persistent context maintained across complex, multi-step interactions
- integration with external tools and systems

Consider alternatives when:
- the task is self-contained text generation that needs no external data or tools
- a single well-crafted prompt already fits the relevant context comfortably
MCP is primarily about extending your LLM's capabilities beyond just generating text—letting it access fresh information, maintain context across complex interactions, and interact with other systems in real time.
Let's dive in…
In the following video, we're going to get hands-on and actually set up an MCP server that will allow Claude to do real-time web searching the way we might find in ChatGPT.
MCP has evolved from an Anthropic initiative into an industry standard. Its architecture connects LLMs to tools and data precisely when needed.
OpenAI, Google, AWS, Microsoft, and PydanticAI have all embraced MCP, with specialised servers emerging daily. While simple implementations like web search show immediate value, the true power lies in MCP's modularity.
This rare industry convergence signals something profound: MCP solves fundamental problems that benefit the entire AI ecosystem.
Interested in bringing this modular approach to your AI strategy? We'd be happy to discuss the possibilities.
Tomoro works with the most ambitious business & engineering leaders to realise the AI-native future of their organisation. We deliver agent-based solutions which fit seamlessly into businesses’ workforce; from design to build to scaled deployment.
Founded by experts with global experience in delivering applied AI solutions for tier 1 financial services, telecommunications and professional services firms, Tomoro’s mission is to help pioneer the reinvention of business through deeply embedded AI agents.
Powered by our world-class applied AI R&D team, working in close alliance with OpenAI, we are a team of proven leaders in turning generative AI into market-leading competitive advantage for our clients.
We’re looking for a small number of the most ambitious clients to work with in this phase, if you think your organisation could be the right fit please get in touch.