What Is an MCP Server?
Quick Definition#
An MCP server is a program that implements the server side of the Model Context Protocol (MCP), exposing tools, resources, and prompts that AI agents can call through a standardized interface. Rather than building custom integrations for each AI application, an MCP server acts as a universal adapter — any MCP-compatible client can connect to any MCP server and use its capabilities immediately.
If you are new to MCP, start with Model Context Protocol (MCP) for the foundational concepts, then return here for the server-specific details. For how agents use these tools, see Tool Calling. Browse all AI agent terms in the AI Agent Glossary.
Why MCP Servers Matter#
Before MCP, integrating an AI agent with a data source or tool required custom code for each combination:
- A Claude integration with Slack required Claude-specific Slack code
- A GPT integration with the same Slack required different GPT-specific Slack code
- Each new AI model or tool pair required a new integration
MCP inverts this. An MCP server for Slack is written once. Any MCP-compatible AI client — Claude Desktop, Cursor, VS Code Copilot, or a custom agent — can connect to it without modification.
This is why the MCP ecosystem grew so rapidly after Anthropic's November 2024 release: developers could build one server and have it work across all MCP-compatible clients.
MCP Architecture: Host, Client, Server#
The MCP specification defines three roles:
MCP Host: The application running the AI agent — Claude Desktop, Cursor, or a custom application built with an agent SDK.
MCP Client: A component within the host that maintains a 1:1 connection with an MCP server and handles the protocol communication.
MCP Server: A standalone program (or process) that exposes capabilities and responds to client requests.
```
MCP Host (Claude Desktop)
├── MCP Client ─── stdio ──→ MCP Server (filesystem)
├── MCP Client ─── stdio ──→ MCP Server (GitHub)
└── MCP Client ─── HTTP ───→ MCP Server (remote database)
```
One host can connect to many MCP servers simultaneously. The agent uses tools from any connected server through the same interface.
What MCP Servers Expose#
Tools#
Tools are functions the AI can call with arguments. They enable agents to take actions in the world — the most common MCP capability.
Examples:
- `read_file(path)` — read a local file
- `query_database(sql)` — execute a database query
- `send_slack_message(channel, text)` — post to Slack
- `browser_navigate(url)` — navigate a browser
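Conceptually, a tool is just a named function plus a schema for its arguments; the server's job is to dispatch each call by name. A minimal, stdlib-only sketch of that dispatch (the registry and the `echo`/`read_file` handlers are hypothetical illustrations, not the MCP SDK API):

```python
from pathlib import Path

# Hypothetical tool handlers (for illustration only)
def read_file(path: str) -> str:
    """Read a local file and return its contents."""
    return Path(path).read_text()

def echo(text: str) -> str:
    """Return the input text unchanged (toy example)."""
    return text

# Registry mapping tool names to handler functions
TOOLS = {"read_file": read_file, "echo": echo}

def call_tool(name: str, arguments: dict) -> str:
    """Dispatch a tool invocation to the matching handler."""
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return TOOLS[name](**arguments)
```

Real servers add argument validation against the declared input schema before invoking the handler; the dispatch-by-name pattern is the same.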
Resources#
Resources are data sources the AI can read, similar to files or pages. Unlike tools, resources are identified by URIs and can be subscribed to for change notifications.
Examples:
- `file:///Users/alice/project/README.md` — a local file
- `database://mydb/schema` — a database schema
- `github://repo/issues/123` — a GitHub issue
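To make the URI model concrete, here is a stdlib-only sketch of a resource registry keyed by URI, with the scheme acting as a namespace (the URIs and contents are made up for illustration, not part of the MCP SDK):

```python
from urllib.parse import urlparse

# Hypothetical resource registry keyed by URI (illustrative contents)
RESOURCES = {
    "file:///tmp/notes.txt": "meeting notes",
    "database://mydb/schema": "CREATE TABLE users (id INTEGER);",
}

def read_resource(uri: str) -> str:
    """Return the contents behind a resource URI, or raise if unknown."""
    if uri not in RESOURCES:
        raise KeyError(f"Unknown resource: {uri}")
    return RESOURCES[uri]

# The URI scheme identifies the backing store
scheme = urlparse("database://mydb/schema").scheme  # "database"
```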
Prompts#
Prompts are reusable templates that can be injected into the conversation. MCP servers can expose pre-built prompt workflows for common tasks.
Examples:
- `explain_code_review` — a template for code review conversations
- `summarize_database` — a template for database analysis tasks
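Under the hood, a prompt is just parameterized text that the server renders with caller-supplied arguments. A stdlib sketch of what a hypothetical `explain_code_review` prompt might look like (the template wording here is assumed):

```python
from string import Template

# Hypothetical template behind an "explain_code_review" prompt
CODE_REVIEW = Template(
    "You are reviewing $language code. Check the following for bugs, "
    "style issues, and missing tests:\n\n$code"
)

def get_prompt(language: str, code: str) -> str:
    """Render the template with caller-supplied arguments."""
    return CODE_REVIEW.substitute(language=language, code=code)
```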
How an MCP Server Works#
When an MCP client connects to a server:
- Initialization: The client sends an `initialize` request; the server responds with its protocol version and capabilities
- Discovery: The client calls `tools/list`, `resources/list`, or `prompts/list` to discover available capabilities
- Invocation: When the AI decides to use a tool, the client sends a `tools/call` request with arguments
- Response: The server executes the tool and returns structured results
- Continuation: Results are injected into the AI's context for the next reasoning step
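On the wire, each step is a JSON-RPC 2.0 message. As a sketch, an invocation of a `get_weather` tool might look like this (field values abbreviated; the exact shapes are defined by the MCP specification):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Paris" }
  }
}
```

The server's reply carries the tool output in a `content` array, which the client hands back to the model as context.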
Building an MCP Server#
Python Example#
The official `mcp` Python package makes server creation straightforward:

```python
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp import types
import httpx

server = Server("weather-server")

@server.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_weather",
            description="Get current weather for a city",
            inputSchema={
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "get_weather":
        city = arguments["city"]
        async with httpx.AsyncClient() as client:
            response = await client.get(
                "https://api.weatherapi.com/v1/current.json",
                params={"key": "YOUR_API_KEY", "q": city}
            )
            data = response.json()
        return [types.TextContent(
            type="text",
            text=f"Weather in {city}: {data['current']['temp_c']}°C, "
                 f"{data['current']['condition']['text']}"
        )]
    raise ValueError(f"Unknown tool: {name}")

# Run on stdio (standard transport for local MCP servers)
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream,
                         server.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
TypeScript Example#
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "notes-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "create_note",
    description: "Create a new note",
    inputSchema: {
      type: "object",
      properties: {
        title: { type: "string" },
        content: { type: "string" }
      },
      required: ["title", "content"]
    }
  }]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "create_note") {
    const { title, content } = request.params.arguments as { title: string; content: string };
    // Save the note...
    return { content: [{ type: "text", text: `Note "${title}" created.` }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```
MCP Server Transports#
stdio (Local)#
The server runs as a subprocess spawned by the host application. Communication uses stdin/stdout pipes. This is the standard approach for local MCP servers — simple, no network setup, works on any OS.
Claude Desktop registers stdio servers in `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/path/to/my_mcp_server.py"]
    }
  }
}
```
HTTP + SSE (Remote)#
The server runs as a persistent HTTP service, accepting connections from multiple clients and using Server-Sent Events (SSE) to stream responses. This was the original transport for remote MCP servers shared across teams or deployed to cloud infrastructure; the 2025 specification update deprecated it in favor of Streamable HTTP.
Streamable HTTP (Production)#
Added in the 2025 MCP specification update, Streamable HTTP provides more efficient connection management for production deployments with many concurrent clients.
Popular MCP Servers#
| Server | Category | Use Case |
|---|---|---|
| Playwright MCP (Microsoft) | Browser | Web automation, testing |
| filesystem (Anthropic) | Files | Local file read/write |
| GitHub (Anthropic) | DevTools | Repo management, PR operations |
| PostgreSQL | Database | SQL queries and schema inspection |
| SQLite | Database | Embedded database access |
| Slack | Productivity | Message sending, channel reading |
| Google Maps | Location | Places, directions, geocoding |
| Brave Search | Search | Web search capabilities |
| Memory (Anthropic) | State | Persistent key-value storage |
| Fetch (Anthropic) | Web | URL fetching and content extraction |
MCP Server vs. Direct API Integration#
| Dimension | MCP Server | Direct API Integration |
|---|---|---|
| Development effort | Once — works with all MCP clients | Per client — custom code for each AI app |
| Client compatibility | Any MCP-compatible client | Only clients with matching code |
| Maintenance | Single codebase | Multiple codebases |
| Schema description | Automatic — MCP handles discovery | Manual — each client needs documentation |
| Setup complexity | Requires MCP runtime | Simpler for single-client use |
| Best for | Reusable tools across multiple agents | One-off integrations |
Common Misconceptions#
Misconception: MCP servers require complex infrastructure. Local MCP servers running on stdio are just Python or Node.js scripts. Many production-grade MCP servers are under 200 lines of code. The complexity is in what the server does (calling APIs, querying databases), not the MCP protocol itself.
Misconception: MCP servers only work with Claude. MCP is an open protocol. Cursor, VS Code Copilot, Continue.dev, Zed, and many custom agents support MCP. Any application that implements the MCP client specification can use any MCP server.
Misconception: MCP replaces direct tool calling. MCP standardizes how tools are exposed and discovered. The tool-calling mechanism (the AI deciding to use a tool and the host executing it) still works the same way. MCP is an interoperability layer, not a replacement for tool-calling semantics.
Related Terms#
- Model Context Protocol (MCP) — The protocol an MCP server implements
- Tool Calling — How AI agents invoke MCP server tools
- Agent SDK — Frameworks that support MCP client connections
- Agentic Workflow — Multi-step workflows using MCP servers
- AI Agents — The agents that connect to MCP servers
- Understanding AI Agent Architecture — Architecture tutorial covering tool integration and MCP
- CrewAI vs LangChain — Comparing frameworks with MCP server support
Frequently Asked Questions#
What is an MCP server?#
An MCP server is a program that implements the server side of the Model Context Protocol, exposing tools, resources, and prompts to AI agents through a standardized interface. Any MCP-compatible AI client can connect to any MCP server without custom integration code.
What is the difference between an MCP server and an API?#
A traditional API requires custom client code per AI application. An MCP server implements a standard protocol so any MCP-compatible client connects to it without modification. MCP standardizes discovery (clients can automatically learn what tools exist) and invocation, similar to how HTTP standardized web communication.
How do I build an MCP server?#
Use the official mcp Python package or @modelcontextprotocol/sdk TypeScript package. Define tools with name, description, and input schema, implement the tool execution logic, and run the server on stdio or HTTP. Most simple MCP servers can be built in under 100 lines of code.
What are the most popular MCP servers?#
The most-used MCP servers include Microsoft's Playwright MCP (browser automation), Anthropic's filesystem server (local files), the GitHub MCP server (repository operations), PostgreSQL and SQLite servers (database access), and the Slack server (messaging). The ecosystem has 10,000+ servers as of 2025.
Do MCP servers work with AI models other than Claude?#
Yes. MCP is an open protocol supported by Cursor, VS Code Copilot, Continue.dev, Zed, and many custom agent frameworks. Any application implementing the MCP client specification can use any MCP server, regardless of which AI model powers it.