{"id":77185,"date":"2026-01-01T13:10:32","date_gmt":"2026-01-01T07:40:32","guid":{"rendered":"https:\/\/www.tothenew.com\/blog\/?p=77185"},"modified":"2026-01-02T18:11:42","modified_gmt":"2026-01-02T12:41:42","slug":"mcp-model-context-protocol","status":"publish","type":"post","link":"https:\/\/www.tothenew.com\/blog\/mcp-model-context-protocol\/","title":{"rendered":"MCP : Model Context Protocol"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>If you\u2019ve built anything around LLMs &#8211; chatbots, internal assistants, developer tools\u2014you\u2019ve probably hit the same wall: the model is smart, but it\u2019s \u201ctrapped.\u201d It can\u2019t directly fetch the latest numbers from your database, read files from your system, or trigger real workflows unless you wire everything up manually.<\/p>\n<p>That\u2019s where <strong>MCP (Model Context Protocol)<\/strong> comes in.<\/p>\n<p><strong>Model Context Protocol (MCP)<\/strong> is an <strong>open protocol<\/strong> that standardizes how AI apps connect to <strong>external tools and data sources <\/strong>&#8211; so integrations don\u2019t have to be reinvented for every model, every app, and every system. 
The official docs describe it as a universal way to connect LLM applications with the context they need, often compared to a \u201cUSB-C port for AI apps.\u201d<\/p>\n<p>In this blog, we\u2019ll cover:<\/p>\n<ul>\n<li>What MCP is in plain terms<\/li>\n<li>How MCP works (clients, servers, messages, transports)<\/li>\n<li>Why MCP matters for real products and enterprise workflows<\/li>\n<li>How MCP connects models with external tools and data safely<\/li>\n<li>A simple mental model + a starter example you can build on<\/li>\n<\/ul>\n<p><strong>What MCP really solves (in simple words)<\/strong><\/p>\n<p>Before MCP, most teams connected models to tools using <strong>custom integrations:<\/strong><\/p>\n<ul>\n<li>One connector for Google Drive<\/li>\n<li>Another for Slack<\/li>\n<li>Another for internal APIs<\/li>\n<li>Another for databases<br \/>\n\u2026and every app rewrote similar glue code again and again.<\/li>\n<\/ul>\n<p>MCP flips that model.<\/p>\n<p>With MCP, tool\/data providers expose capabilities through an MCP server, and AI applications act as MCP clients that can discover and call those capabilities in a consistent way.<\/p>\n<p>Think of it like this:<\/p>\n<ul>\n<li><strong>APIs<\/strong> standardized how web apps talk to services<\/li>\n<li><strong>MCP<\/strong> standardizes how AI apps talk to tools + context providers<\/li>\n<\/ul>\n<p><strong>MCP architecture: Client \u2194 Server<\/strong><\/p>\n<p>At a high level, MCP has two main roles:<\/p>\n<p><strong>1) MCP Client<\/strong><\/p>\n<p>This is the AI application layer &#8211; examples: a chat app, an IDE assistant, an internal enterprise bot.<\/p>\n<p>The client:<\/p>\n<ul>\n<li>Connects to one or more MCP servers<\/li>\n<li>Discovers what they offer<\/li>\n<li>Requests context or executes tool actions<\/li>\n<\/ul>\n<p><strong>2) MCP Server<\/strong><\/p>\n<p>This wraps a tool\/data source and exposes it via MCP.<\/p>\n<p>A server might provide:<\/p>\n<ul>\n<li>Access to 
files<\/li>\n<li>Database queries<\/li>\n<li>Ticket creation tools<\/li>\n<li>Company policy retrieval<\/li>\n<li>A \u201csearch\u201d function<\/li>\n<li>A workflow runner<\/li>\n<\/ul>\n<p>This client\/server model is explicitly how MCP is described in the original announcement and official learning docs.<\/p>\n<p><strong>How MCP works under the hood<\/strong><\/p>\n<p><strong>MCP uses JSON-RPC messaging<\/strong><br \/>\nMCP is built on<strong> JSON-RPC 2.0<\/strong>, which means the client and server communicate using structured request\/response messages (and can also send notifications).<\/p>\n<p>This matters because it makes integrations predictable:<\/p>\n<ul>\n<li>Requests have an id, method, and params<\/li>\n<li>Responses return result or error<\/li>\n<li>Everything is machine-readable and tool-friendly<\/li>\n<\/ul>\n<p><strong>MCP supports multiple transports<\/strong><\/p>\n<p>MCP can operate over standard web transports such as HTTP and HTTPS, making it easy to deploy in cloud and enterprise environments.<br \/>\nUsing HTTP-based communication allows MCP servers to run as regular web services, protected by familiar mechanisms like TLS, API gateways, and firewalls.<br \/>\nThis makes integration with existing infrastructure straightforward, without requiring special networking setups.<br \/>\nBecause the protocol is transport-agnostic, the same MCP logic can work over HTTP today and evolve with future transport options tomorrow.<\/p>\n<p><strong>Practical takeaway:<\/strong> MCP can work for local developer tools and remote enterprise services.<\/p>\n<p><strong>MCP \u201cprimitives\u201d: Tools, Resources, Prompts<\/strong><\/p>\n<p>Most developers care about MCP because of what it exposes to the model. MCP formalizes the ways servers provide \u201ccapabilities\u201d to clients. 
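<\/p>\n<p>To make the JSON-RPC framing concrete, here is an illustrative sketch in Python of what a tool-call exchange looks like on the wire. The tools\/call method name follows the MCP specification; the tool name (search_kb) and the payload fields are simplified assumptions for illustration, not copied from any real server.<\/p>

```python
import json

# Illustrative JSON-RPC 2.0 request: the client asks the server to
# invoke a tool. "tools/call" is MCP's tool-invocation method; the
# tool name and arguments here are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_kb",
        "arguments": {"query": "VPN access policy"},
    },
}

# Illustrative success response: it echoes the same id and carries a
# "result" (a failure would carry an "error" object instead).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 matching articles"}]},
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```

<p>Matching the response to the request by id is what lets a client keep several tool calls in flight at once.<\/p>\n<p>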
The official architecture guidance highlights this as a key layer.<\/p>\n<p>Common primitives you\u2019ll see:<\/p>\n<p><strong>Tools<\/strong><br \/>\nCallable actions the model can invoke.<br \/>\nExamples:<\/p>\n<ul>\n<li>search_kb(query)<\/li>\n<li>create_jira_ticket(payload)<\/li>\n<li>run_sql(sql)<\/li>\n<li>summarize_document(doc_id)<\/li>\n<\/ul>\n<p><strong>Resources<\/strong><\/p>\n<p>Readable context the model can fetch.<br \/>\nExamples:<\/p>\n<ul>\n<li>A policy document<\/li>\n<li>A file path<\/li>\n<li>A database record<\/li>\n<li>A knowledge-base article<\/li>\n<\/ul>\n<p><strong>Prompts<\/strong><br \/>\nReusable prompt templates or guided workflows the server can provide to the client (useful for consistent behavior across teams).<\/p>\n<p><strong>End-to-end flow: What happens when a user asks something?<\/strong><br \/>\nHere\u2019s a simple \u201cwhat actually happens\u201d story:<\/p>\n<p><strong>User asks:<\/strong> \u201cCan you raise an IT ticket for VPN access and attach my laptop details?\u201d<\/p>\n<ol>\n<li>Client receives the user message<\/li>\n<li>Client decides it needs external actions (ticket system + device info)<\/li>\n<li>Client calls an MCP server to:\n<ul>\n<li>fetch laptop\/device details (resource).<\/li>\n<li>create a ticket (tool)<\/li>\n<\/ul>\n<\/li>\n<li>MCP server responds with structured results<\/li>\n<li>Client feeds relevant results back into the model<\/li>\n<li>Model produces a final user-facing response (\u201cTicket created: INC-12345\u2026\u201d)<\/li>\n<\/ol>\n<p><strong>Illustration idea<\/strong><\/p>\n<p>A simple diagram: User \u2192 AI App (MCP Client) \u2192 MCP Server(s) \u2192 Tools\/Data \u2192 back to AI App \u2192 User<\/p>\n<p><strong>Why MCP matters (beyond \u201cyet another protocol\u201d)<\/strong><\/p>\n<p><strong>1) Less custom integration work<\/strong><br \/>\nInstead of writing one-off connectors for every model\/app\/tool combination, MCP provides a shared contract.<\/p>\n<p><strong>2) 
Cleaner separation of concerns<\/strong><\/p>\n<ul>\n<li>Tool providers focus on building servers<\/li>\n<li>AI app teams focus on product experience<\/li>\n<li>You can swap models without rebuilding the entire tool layer<\/li>\n<\/ul>\n<p><strong>3) Better \u201cagent\u201d workflows<\/strong><br \/>\nAgents aren\u2019t just chat &#8211; they take actions, pull context, and execute steps. MCP gives a standardized way to do that across systems.<\/p>\n<p><strong>4) Security becomes a first-class topic<\/strong><br \/>\nTool access is sensitive. MCP has explicit security and authorization guidance (including modern OAuth-style flows in its docs\/tutorials).<\/p>\n<p><strong>Security and access control: what to do in real systems<\/strong><br \/>\nIf you\u2019re exposing powerful tools (like \u201csend email\u201d, \u201crun SQL\u201d, \u201caccess HR files\u201d), security can\u2019t be an afterthought.<\/p>\n<p>MCP security guidance covers risks and best practices for authorization and secure operation, including token-based authenticated requests.<\/p>\n<p><strong>Practical security checklist:<\/strong><\/p>\n<ul>\n<li>Prefer <strong>scoped access<\/strong> over all-or-nothing tokens<\/li>\n<li>Log tool calls (who, what, when) for auditability<\/li>\n<li>Validate and sanitize inputs (prompt injection defense)<\/li>\n<li>Use HTTPS + secure token storage where applicable<\/li>\n<li>Limit tool permissions by role (HR tools \u2260 Engineering tools)<\/li>\n<\/ul>\n<p><strong>Code: Build a tiny MCP server in Python<\/strong><\/p>\n<p>This is a minimal MCP server using the FastMCP helper.<\/p>\n<p><strong>1) Install dependencies<\/strong><\/p>\n<pre># If you use uv (recommended in MCP docs)\r\nuv venv\r\nsource .venv\/bin\/activate\r\nuv add \"mcp[cli]\"<\/pre>\n<p><strong>2) Create mini_mcp_server.py<\/strong><\/p>\n<p>This example exposes two tools:<\/p>\n<ul>\n<li>greet(name) \u2013 simple demo tool<\/li>\n<li>add(a,b) \u2013 basic \u201caction\u201d 
tool<\/li>\n<\/ul>\n<pre>from mcp.server.fastmcp import FastMCP\r\n\r\n# Name your server (clients will display this)\r\nmcp = FastMCP(\"mini-tools\")\r\n\r\n@mcp.tool()\r\ndef greet(name: str = \"World\") -&gt; str:\r\n    \"\"\"Return a friendly greeting.\"\"\"\r\n    return f\"Hello, {name}!\"\r\n\r\n@mcp.tool()\r\ndef add(a: int, b: int) -&gt; int:\r\n    \"\"\"Add two integers.\"\"\"\r\n    return a + b\r\n\r\ndef main():\r\n    # Use stdio transport for local MCP hosts (Claude Desktop, some IDE clients)\r\n    mcp.run(transport=\"stdio\")\r\n\r\nif __name__ == \"__main__\":\r\n    main()<\/pre>\n<p>This matches the pattern used in the MCP tutorial: create a FastMCP instance, decorate tools with @mcp.tool(), then call mcp.run(transport=\"stdio\").<\/p>\n<p><strong>3) Run the server<\/strong><\/p>\n<pre>uv run mini_mcp_server.py<\/pre>\n<p>At this point, your server is listening via stdio for an MCP-compatible host\/client.<\/p>\n<p><strong>Connect it to a client (example: Claude Desktop config)<\/strong><br \/>\nMany people test MCP servers by connecting them to a desktop host that can discover tools. 
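<\/p>\n<p>Before wiring the server into a host, it can help to sanity-check the tool logic on its own. The tool bodies are ordinary Python functions, so their behavior can be verified without starting a server at all &#8211; the sketch below simply duplicates the two demo tools inline (without the @mcp.tool() decorator) to keep it self-contained.<\/p>

```python
# Stand-alone copies of the two demo tools from mini_mcp_server.py,
# duplicated here without the @mcp.tool() decorator so the logic can
# be checked directly, with no MCP server or host involved.
def greet(name: str = "World") -> str:
    """Return a friendly greeting."""
    return f"Hello, {name}!"

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(greet("Ashish"))  # Hello, Ashish!
print(add(2, 5))        # 7
```

<p>Once the logic behaves as expected, the remaining work is purely wiring: registering the server with a host, as shown next.<\/p>\n<p>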
The MCP server tutorial shows how to add an MCP server entry in the Claude Desktop config JSON, including command and args.<\/p>\n<p>A typical config looks like this:<\/p>\n<pre>{\r\n  \"mcpServers\": {\r\n    \"mini-tools\": {\r\n      \"command\": \"uv\",\r\n      \"args\": [\"--directory\", \"\/ABSOLUTE\/PATH\/TO\/FOLDER\", \"run\", \"mini_mcp_server.py\"]\r\n    }\r\n  }\r\n}<\/pre>\n<p>After you restart the host, it should discover the tools exposed by your server.<\/p>\n<p><strong>What the client actually does (in one short checklist)<\/strong><\/p>\n<p>When you ask the AI something like \u201cSay hi to Ashish and add 2 + 5\u201d, the client typically:<\/p>\n<ol>\n<li>Calls <strong>list tools<\/strong> on the MCP server<\/li>\n<li>Sends your message to the LLM <strong>along with tool descriptions<\/strong><\/li>\n<li>Lets the LLM decide whether it needs a tool call<\/li>\n<li>Executes the tool via MCP<\/li>\n<li>Returns the tool results to the LLM<\/li>\n<li>Shows the final response to the user<\/li>\n<\/ol>\n<p>This is the key idea: the model isn\u2019t \u201cdirectly calling your database.\u201d The client is orchestrating, and MCP standardizes that connection.<\/p>\n<p><strong>Conclusion<\/strong><\/p>\n<p>MCP is a practical shift in how we connect AI to reality.<\/p>\n<p>Instead of bolting on integrations one by one, MCP gives you a shared protocol where:<\/p>\n<ul>\n<li>Servers expose tools and data in a consistent way<\/li>\n<li>Clients (AI apps) discover and use them reliably<\/li>\n<li>The model gets better answers because it can pull real context and take real actions<\/li>\n<li>Security and governance can be designed into the workflow from the start<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Introduction If you\u2019ve built anything around LLMs &#8211; chatbots, internal assistants, developer tools\u2014you\u2019ve probably hit the same wall: the model is smart, but it\u2019s \u201ctrapped.\u201d It can\u2019t directly fetch the latest 
numbers from your database, read files from your system, or trigger real workflows unless you wire everything up manually. That\u2019s where MCP (Model Context [&hellip;]<\/p>\n","protected":false},"author":2066,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":47},"categories":[5871],"tags":[7637],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77185"}],"collection":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/users\/2066"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/comments?post=77185"}],"version-history":[{"count":4,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77185\/revisions"}],"predecessor-version":[{"id":77290,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77185\/revisions\/77290"}],"wp:attachment":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/media?parent=77185"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/categories?post=77185"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/tags?post=77185"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}