MCP (Model Context Protocol)

MCP, short for Model Context Protocol, is an open standard from Anthropic that lets AI models talk to tools and data in a uniform way. One protocol replaces dozens of integrations that every chatbot or agent used to build from scratch.

What is MCP?

MCP stands for Model Context Protocol. It is an open protocol Anthropic published in November 2024 that has since picked up broad support from other model makers, tool vendors, and cloud platforms. The aim is simple: one standardised way for AI models to call tools and data, no matter which model or chatbot you use.

Without MCP, every integration between an AI assistant and a source system (CRM, database, SharePoint, ticketing tool) has to be built on its own. You quickly end up with N times M connections. MCP turns that into N plus M: one MCP server per source system, one MCP client per AI environment.
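The arithmetic behind that claim is easy to make concrete. The system and client counts below are made-up examples:

```python
# Illustrative only: made-up counts of source systems and AI environments.
systems = ["CRM", "database", "SharePoint", "ticketing"]   # N = 4
clients = ["chatbot", "IDE assistant", "desktop app"]      # M = 3

# Without MCP: one bespoke integration per (system, client) pair.
point_to_point = len(systems) * len(clients)

# With MCP: one server per source system, one client per AI environment.
with_mcp = len(systems) + len(clients)

print(point_to_point, with_mcp)  # 12 vs 7 — and the gap widens as N and M grow
```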

Think of MCP as a USB port for AI. A USB stick works on any laptop without someone writing drivers for every combination. MCP tries to do the same between language models and business tools.

Why is MCP needed?

Language models on their own can only generate text. To do anything useful with business data they need to read (pull documents from SharePoint, fetch customer records from a CRM), write (create a ticket, book a meeting), and act (run a SQL query, call an API).

Until recently that was done through function calling or tool use, always model-specific. OpenAI, Anthropic, Google, and Mistral each had their own notation and their own way of handling authorisation. Building a chatbot that could work with three systems meant writing integration code for every model-system combination.

MCP solves that with three parts: a server that exposes tools and data, a client that calls the server, and a protocol that lets both sides talk to each other in a standardised way.

How does MCP work?

Server
An MCP server is a small process that exposes one source system or toolset. A GitHub server lets you reach repos, issues, and pull requests. A Postgres server runs queries. A SharePoint server searches documents. The server describes which tools, resources, and prompts it offers.
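As a sketch, a Postgres-style server might advertise a query tool roughly like this. The `name`/`description`/`inputSchema` fields follow the MCP spec's tool definition; the tool itself (`run_query`) is a made-up example:

```python
import json

# Hypothetical tool definition a Postgres MCP server could return from
# a tools/list request. The field names follow the MCP tool schema;
# "run_query" and its parameters are illustrative, not a real server's API.
query_tool = {
    "name": "run_query",
    "description": "Run a read-only SQL query and return the rows.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "SQL to execute"},
        },
        "required": ["sql"],
    },
}

print(json.dumps(query_tool, indent=2))
```

The `inputSchema` is plain JSON Schema, which is what makes the definition portable between models.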

Client
The client lives inside an AI environment such as Claude Desktop, an IDE, or a chatbot. It discovers which servers are available, asks what they can do, hands those tool descriptions to the language model, and routes the model's tool calls to the right server and the results back.

Protocol
Communication happens over JSON-RPC 2.0, typically via stdio for local servers or HTTP for remote ones. Tools are defined as functions with parameters and descriptions, resources as readable content, prompts as reusable templates. Everything standardised.
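A request and response on the wire look roughly like this. The method name `tools/call` comes from the MCP spec; the tool `run_query` and the result content are simplified examples:

```python
import json

# A tools/call request as a client would send it (JSON-RPC 2.0 envelope).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "run_query", "arguments": {"sql": "SELECT 1"}},
}

# A matching response; the result/content shape is simplified here.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # must echo the request id so the client can pair them up
    "result": {"content": [{"type": "text", "text": "1"}]},
}

# Both sides exchange these as plain JSON over the transport.
wire = json.dumps(request)
print(wire)
```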

Authorisation
MCP has explicit concepts for consent. A server can mark certain actions as requiring confirmation, and a client has to show the user what it is about to do before it happens. That matters most for write operations.
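A minimal sketch of such a gate on the client side, assuming the client knows which tools write (this is illustrative logic, not the actual MCP API; the tool names are made up):

```python
# Hypothetical write-style tools that must be confirmed before they run.
WRITE_TOOLS = {"create_ticket", "book_meeting"}

def confirm(tool: str, args: dict, ask_user) -> bool:
    """Return True if the tool call may proceed.

    ask_user is a callback that shows the user what is about to happen
    and returns their yes/no answer.
    """
    if tool not in WRITE_TOOLS:
        return True  # read-only tools pass through without a prompt
    return ask_user(f"Allow '{tool}' with {args}?")

# Example: a user who declines blocks the write operation.
allowed = confirm("create_ticket", {"title": "Demo"}, ask_user=lambda q: False)
print(allowed)
```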

MCP versus function calling versus RAG

MCP versus function calling

Function calling is the model-specific way to let an LLM invoke tools. Each model defines its own schema. MCP adds a portable layer on top. Under the hood an MCP client still uses function calling to tell the model which tool it can invoke, but the tool definition itself is portable between models.
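That translation can be sketched in a few lines. The MCP side uses the spec's `name`/`description`/`inputSchema` fields; the target here is an OpenAI-style function-calling schema, and other models would need a similar, equally mechanical adapter:

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Adapt an MCP tool definition to an OpenAI-style tool schema.

    Both sides describe parameters as JSON Schema, so the adapter is
    mostly a renaming of fields.
    """
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool["inputSchema"],
        },
    }

# Illustrative MCP tool definition (the tool itself is made up).
mcp_tool = {
    "name": "run_query",
    "description": "Run a read-only SQL query.",
    "inputSchema": {"type": "object", "properties": {"sql": {"type": "string"}}},
}
openai_tool = mcp_tool_to_openai(mcp_tool)
```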

MCP versus RAG

RAG adds knowledge by pulling documents from an index. MCP adds actions the model can actively take to fetch or execute something. In practice they complement each other: an MCP server can expose a RAG search as one of its tools, and the model decides when to use it.
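A toy sketch of that combination, with a trivial keyword lookup standing in for a real retrieval index (the documents and tool name are invented):

```python
# Toy stand-in for a document index; a real server would query a vector store.
INDEX = {
    "onboarding.md": "New hires get a laptop on day one.",
    "expenses.md": "Submit expenses within 30 days.",
}

def search_docs(query: str) -> list[str]:
    """Naive retrieval: return names of documents containing the query term."""
    return [name for name, text in INDEX.items() if query.lower() in text.lower()]

# The server exposes retrieval as one tool among others;
# the model decides when a question warrants calling it.
TOOLS = {"search_docs": search_docs}

hits = TOOLS["search_docs"]("expenses")
print(hits)
```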

Where does MCP stand today?

Since its release in 2024, MCP has been adopted quickly. Anthropic open-sourced the protocol, Microsoft announced MCP support in Copilot Studio and Visual Studio, and a growing ecosystem of servers exists for GitHub, Slack, Google Drive, Postgres, Puppeteer, and dozens of other tools. Most AI agent frameworks now support MCP alongside their own tool definitions.

The expectation is that MCP will play the same role for AI integration that REST plays for web APIs or LSP plays for editors. Not everyone will use it, but anyone without MCP support will have to explain why.

Pitfalls and things to watch

Authorisation is still your problem
MCP standardises communication, not the security of the underlying source. An MCP server that exposes Postgres still needs its own connection string, permissions, and auditing. Think service principals and scoped tokens rather than shared accounts.

Tool choice is not free
The more tools you offer a model, the more context window they eat and the more often the model picks the wrong one. Limit the number of tools per session and group them per use case.
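One simple way to do that grouping, sketched with made-up group and tool names:

```python
# Hypothetical tool groups, one per use case; a session gets only one group.
TOOL_GROUPS = {
    "support": ["search_tickets", "create_ticket"],
    "analytics": ["run_query", "export_csv"],
    "docs": ["search_docs", "fetch_page"],
}

def tools_for_session(use_case: str, limit: int = 8) -> list[str]:
    """Expose only the tools relevant to this session's use case,
    capped so they don't crowd the context window."""
    return TOOL_GROUPS.get(use_case, [])[:limit]

print(tools_for_session("support"))
```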

The protocol is still moving
MCP is young and the spec is still evolving. Expect at least one major version upgrade a year and keep your servers and clients current.

Last Updated: April 23, 2026