MCP : Model Context Protocol
Introduction
If you’ve built anything around LLMs – chatbots, internal assistants, developer tools – you’ve probably hit the same wall: the model is smart, but it’s “trapped.” It can’t directly fetch the latest numbers from your database, read files from your system, or trigger real workflows unless you wire everything up manually.
That’s where MCP (Model Context Protocol) comes in.
Model Context Protocol (MCP) is an open protocol that standardizes how AI apps connect to external tools and data sources – so integrations don’t have to be reinvented for every model, every app, and every system. The official docs describe it as a universal way to connect LLM applications with the context they need, often compared to a “USB-C port for AI apps.”
In this blog, we’ll cover:
- What MCP is in plain terms
- How MCP works (clients, servers, messages, transports)
- Why MCP matters for real products and enterprise workflows
- How MCP connects models with external tools and data safely
- A simple mental model + a starter example you can build on
What MCP really solves (in simple words)
Before MCP, most teams connected models to tools using custom integrations:
- One connector for Google Drive
- Another for Slack
- Another for internal APIs
- Another for databases
…and every app rewrote similar glue code again and again.
MCP flips that model.
With MCP, tool/data providers expose capabilities through an MCP server, and AI applications act as MCP clients that can discover and call those capabilities in a consistent way.
Think of it like this:
- APIs standardized how web apps talk to services
- MCP standardizes how AI apps talk to tools + context providers
MCP architecture: Client ↔ Server
At a high level, MCP has two main roles:
1) MCP Client
This is the AI application layer. Examples: a chat app, an IDE assistant, an internal enterprise bot.
The client:
- Connects to one or more MCP servers
- Discovers what they offer
- Requests context or executes tool actions
2) MCP Server
This wraps a tool/data source and exposes it via MCP.
A server might provide:
- Access to files
- Database queries
- Ticket creation tools
- Company policy retrieval
- A “search” function
- A workflow runner
This client/server model is explicitly how MCP is described in the original announcement and official learning docs.
How MCP works under the hood
MCP uses JSON-RPC messaging
MCP is built on JSON-RPC 2.0, which means the client and server communicate using structured request/response messages (and can also send notifications).
This matters because it makes integrations predictable:
- Requests have an id, method, and params
- Responses return result or error
- Everything is machine-readable and tool-friendly
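The message shapes above can be sketched as plain Python dicts. This is a hedged illustration: the method name tools/call follows MCP's naming conventions, but the exact params and result structure depend on the server you talk to.

```python
import json

# A JSON-RPC 2.0 request: id, method, and params.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "greet", "arguments": {"name": "Ashish"}},
}

# A successful response echoes the request id and carries a result.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Hello, Ashish!"}]},
}

# A notification has no id: no response is expected.
notification = {"jsonrpc": "2.0", "method": "notifications/progress", "params": {}}

print(json.dumps(request))
```

Because every message is just structured JSON, clients and servers can validate, log, and route traffic with ordinary tooling.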
MCP supports multiple transports
MCP can operate over standard web transports such as HTTP and HTTPS, making it easy to deploy in cloud and enterprise environments.
Using HTTP-based communication allows MCP servers to run as regular web services, protected by familiar mechanisms like TLS, API gateways, and firewalls.
This makes integration with existing infrastructure straightforward, without requiring special networking setups.
Because the protocol is transport-agnostic, the same MCP logic can work over HTTP today and evolve with future transport options tomorrow.
Practical takeaway: MCP can work for local developer tools and remote enterprise services.
MCP “primitives”: Tools, Resources, Prompts
Most developers care about MCP because of what it exposes to the model. MCP formalizes the ways servers provide “capabilities” to clients. The official architecture guidance highlights this as a key layer.
Common primitives you’ll see:
Tools
Callable actions the model can invoke.
Examples:
- search_kb(query)
- create_jira_ticket(payload)
- run_sql(sql)
- summarize_document(doc_id)
Resources
Readable context the model can fetch.
Examples:
- A policy document
- A file path
- A database record
- A knowledge-base article
Prompts
Reusable prompt templates or guided workflows the server can provide to the client (useful for consistent behavior across teams).
End-to-end flow: What happens when a user asks something?
Here’s a simple “what actually happens” story:
User asks: “Can you raise an IT ticket for VPN access and attach my laptop details?”
- Client receives the user message
- Client decides it needs external actions (ticket system + device info)
- Client calls an MCP server to:
- fetch laptop/device details (resource)
- create a ticket (tool)
- MCP server responds with structured results
- Client feeds relevant results back into the model
- Model produces a final user-facing response (“Ticket created: INC-12345…”)
Illustration idea
A simple diagram: User → AI App (MCP Client) → MCP Server(s) → Tools/Data → back to AI App → User
Why MCP matters (beyond “yet another protocol”)
1) Less custom integration work
Instead of writing one-off connectors for every model/app/tool combination, MCP provides a shared contract.
2) Cleaner separation of concerns
- Tool providers focus on building servers
- AI app teams focus on product experience
- You can swap models without rebuilding the entire tool layer
3) Better “agent” workflows
Agents aren’t just chat: they take actions, pull context, and execute steps. MCP gives a standardized way to do that across systems.
4) Security becomes a first-class topic
Tool access is sensitive. MCP has explicit security and authorization guidance (including modern OAuth-style flows in its docs/tutorials).
Security and access control: what to do in real systems
If you’re exposing powerful tools (like “send email”, “run SQL”, “access HR files”), security can’t be an afterthought.
MCP security guidance covers risks and best practices for authorization and secure operation, including token-based authenticated requests.
Practical security checklist:
- Prefer scoped access over all-or-nothing tokens
- Log tool calls (who, what, when) for auditability
- Validate and sanitize inputs (prompt injection defense)
- Use HTTPS + secure token storage where applicable
- Limit tool permissions by role (HR tools ≠ Engineering tools)
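Two items from this checklist (audit logging and role-scoped permissions) can be enforced in a small dispatch wrapper. This is a hedged sketch, not part of MCP itself: the role names, the SCOPES map, and call_tool are hypothetical, and a real server would check scopes against a token issued by its auth layer.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-audit")

# Hypothetical role-to-tool scope map: which roles may call which tools.
SCOPES = {
    "hr": {"read_hr_file"},
    "engineering": {"run_sql", "create_jira_ticket"},
}

def call_tool(role: str, tool: str, handler, **kwargs):
    """Check the caller's scope and audit every tool call before dispatching."""
    if tool not in SCOPES.get(role, set()):
        log.warning("denied: role=%s tool=%s", role, tool)
        raise PermissionError(f"{role} may not call {tool}")
    log.info("allow: role=%s tool=%s at=%s", role, tool,
             datetime.now(timezone.utc).isoformat())
    return handler(**kwargs)

# An engineering caller may run SQL; an HR caller would be denied.
result = call_tool("engineering", "run_sql", lambda sql: f"ran: {sql}", sql="SELECT 1")
```

Putting the check in one choke point means every tool call is logged and scoped the same way, regardless of which tool is behind it.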
Code: Build a tiny MCP server in Python
This is a minimal MCP server using the FastMCP helper.
1) Install dependencies
```shell
# If you use uv (recommended in MCP docs)
uv venv
source .venv/bin/activate
uv add "mcp[cli]"
```
2) Create mini_mcp_server.py
This example exposes 2 tools:
- greet(name) – simple demo tool
- add(a,b) – basic “action” tool
```python
from mcp.server.fastmcp import FastMCP

# Name your server (clients will display this)
mcp = FastMCP("mini-tools")

@mcp.tool()
def greet(name: str = "World") -> str:
    """Return a friendly greeting."""
    return f"Hello, {name}!"

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def main():
    # Use stdio transport for local MCP hosts (Claude Desktop, some IDE clients)
    mcp.run(transport="stdio")

if __name__ == "__main__":
    main()
```
This matches the pattern used in the MCP tutorial: create a FastMCP instance, decorate tools with @mcp.tool(), then call mcp.run(transport="stdio").
3) Run the server
```shell
uv run mini_mcp_server.py
```
At this point, your server is listening via stdio for an MCP-compatible host/client.
Connect it to a client (example: Claude Desktop config)
Many people test MCP servers by connecting them to a desktop host that can discover tools. The MCP server tutorial shows how to add an MCP server entry in the Claude Desktop config JSON, including command and args.
A typical config has this shape:
```json
{
  "mcpServers": {
    "mini-tools": {
      "command": "uv",
      "args": ["--directory", "/ABSOLUTE/PATH/TO/FOLDER", "run", "mini_mcp_server.py"]
    }
  }
}
```
After you restart the host, it should discover the tools exposed by your server.
What the client actually does (in one short checklist)
When you ask the AI something like “Say hi to Ashish and add 2 + 5”, the client typically:
- Calls list tools on the MCP server
- Sends your message to the LLM along with the tool descriptions
- Lets the LLM decide whether it needs a tool call
- Executes the tool via MCP
- Returns the tool results to the LLM
- Shows the final response to the user
This is the key idea: the model isn’t “directly calling your database.” The client is orchestrating, and MCP standardizes that connection.
Conclusion
MCP is a practical shift in how we connect AI to reality.
Instead of bolting on integrations one by one, MCP gives you a shared protocol where:
- Servers expose tools and data in a consistent way
- Clients (AI apps) discover and use them reliably
- The model gets better answers because it can pull real context and take real actions
- Security and governance can be designed into the workflow from the start
