🤖AI Agents Guide


What Is an MCP Server?

An MCP server is a lightweight program that exposes tools, resources, and prompts to AI agents via the Model Context Protocol (MCP) — a standardized interface that lets any MCP-compatible AI client connect to external data sources and capabilities without custom integration code.

By AI Agents Guide Team • February 28, 2026

Term Snapshot

Also known as: Model Context Protocol Server, MCP Tool Server, AI Context Server

Related terms: What Is an MCP Client?, What Is MCP Transport?, What Is the MCP SDK?, What Is MCP Authentication?

Table of Contents

  1. Quick Definition
  2. Why MCP Servers Matter
  3. MCP Architecture: Host, Client, Server
  4. What MCP Servers Expose
     • Tools
     • Resources
     • Prompts
  5. How an MCP Server Works
  6. Building an MCP Server
     • Python Example
     • TypeScript Example
  7. MCP Server Transports
     • stdio (Local)
     • HTTP + SSE (Remote)
     • Streamable HTTP (Production)
  8. Popular MCP Servers
  9. MCP Server vs. Direct API Integration
  10. Common Misconceptions
  11. Related Terms
  12. Frequently Asked Questions
     • What is an MCP server?
     • What is the difference between an MCP server and an API?
     • How do I build an MCP server?
     • What are the most popular MCP servers?
     • Do MCP servers work with AI models other than Claude?


Quick Definition#

An MCP server is a program that implements the server side of the Model Context Protocol (MCP), exposing tools, resources, and prompts that AI agents can call through a standardized interface. Rather than building custom integrations for each AI application, an MCP server acts as a universal adapter — any MCP-compatible client can connect to any MCP server and use its capabilities immediately.

If you are new to MCP, start with Model Context Protocol (MCP) for the foundational concepts, then return here for the server-specific details. For how agents use these tools, see Tool Calling. Browse all AI agent terms in the AI Agent Glossary.

Why MCP Servers Matter#

Before MCP, integrating an AI agent with a data source or tool required custom code for each combination:

  • A Claude integration with Slack required Claude-specific Slack code
  • A GPT integration with the same Slack required different GPT-specific Slack code
  • Each new AI model or tool pair required a new integration

MCP inverts this. An MCP server for Slack is written once. Any MCP-compatible AI client — Claude Desktop, Cursor, VS Code Copilot, or a custom agent — can connect to it without modification.

This is why the MCP ecosystem grew so rapidly after Anthropic's November 2024 release: developers could build one server and have it work across all MCP-compatible clients.

MCP Architecture: Host, Client, Server#

The MCP specification defines three roles:

MCP Host: The application running the AI agent — Claude Desktop, Cursor, or a custom application built with an agent SDK.

MCP Client: A component within the host that maintains a 1:1 connection with an MCP server and handles the protocol communication.

MCP Server: A standalone program (or process) that exposes capabilities and responds to client requests.

MCP Host (Claude Desktop)
├── MCP Client ─── stdio ──→ MCP Server (filesystem)
├── MCP Client ─── stdio ──→ MCP Server (GitHub)
└── MCP Client ─── HTTP ──→ MCP Server (remote database)

One host can connect to many MCP servers simultaneously. The agent uses tools from any connected server through the same interface.
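In Claude Desktop, for example, that fan-out is just multiple entries in its configuration file. The sketch below wires up two of Anthropic's reference servers; the local path is illustrative:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/alice/project"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```

The host spawns one subprocess per entry and maintains one MCP client connection to each server.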

What MCP Servers Expose#

Tools#

Tools are functions the AI can call with arguments. They enable agents to take actions in the world — the most common MCP capability.

Examples:

  • read_file(path) — read a local file
  • query_database(sql) — execute a database query
  • send_slack_message(channel, text) — post to Slack
  • browser_navigate(url) — navigate a browser

Resources#

Resources are data sources the AI can read, similar to files or pages. Unlike tools, resources are identified by URIs and can be subscribed to for change notifications.

Examples:

  • file:///Users/alice/project/README.md — a local file
  • database://mydb/schema — a database schema
  • github://repo/issues/123 — a GitHub issue
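To make the resource side concrete, here is a stdlib-only sketch (not the real SDK) of what a server's resource listing and read handlers boil down to. The first URI mirrors the example above; the contents are invented:

```python
from urllib.parse import urlparse

# Toy resource table, keyed by URI the way an MCP server exposes resources.
RESOURCES = {
    "file:///Users/alice/project/README.md": "# Project readme",
    "database://mydb/schema": "CREATE TABLE users (id INTEGER PRIMARY KEY);",
}

def list_resources() -> list[dict]:
    """Shape of a resources/list result: one descriptor per URI."""
    return [{"uri": uri, "mimeType": "text/plain"} for uri in RESOURCES]

def read_resource(uri: str) -> str:
    """Handle resources/read. A real server would dispatch on the URI
    scheme (file, database, github, ...) instead of a dict lookup."""
    if uri not in RESOURCES:
        scheme = urlparse(uri).scheme
        raise ValueError(f"unknown {scheme} resource: {uri}")
    return RESOURCES[uri]
```

Because resources are addressed by URI rather than by function call, a client can enumerate them up front and fetch only the ones the conversation needs.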

Prompts#

Prompts are reusable templates that can be injected into the conversation. MCP servers can expose pre-built prompt workflows for common tasks.

Examples:

  • explain_code_review — a template for code review conversations
  • summarize_database — a template for database analysis tasks
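A prompts/get handler is little more than template filling. A stdlib-only sketch for the two names above (the template wording is invented for illustration):

```python
# Toy prompt templates, keyed by the prompt name a client would request.
PROMPTS = {
    "explain_code_review": "Review the following code for bugs and style:\n\n{code}",
    "summarize_database": "Summarize the schema and row counts of {database}.",
}

def get_prompt(name: str, arguments: dict) -> list[dict]:
    """Fill the template and return it as MCP-style prompt messages."""
    text = PROMPTS[name].format(**arguments)
    return [{"role": "user", "content": {"type": "text", "text": text}}]
```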

How an MCP Server Works#

When an MCP client connects to a server:

  1. Initialization: The client sends an initialize request; the server responds with its protocol version and capabilities
  2. Discovery: The client calls tools/list, resources/list, or prompts/list to discover available capabilities
  3. Invocation: When the AI decides to use a tool, the client sends a tools/call request with arguments
  4. Response: The server executes the tool and returns structured results
  5. Continuation: Results are injected into the AI's context for the next reasoning step
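Under the hood, these steps are JSON-RPC 2.0 messages. Here is a stdlib-only sketch of steps 2 through 4, with a toy dispatcher standing in for a real server (the tool name and its canned response are invented):

```python
import json

# One toy tool definition, in the shape a tools/list response uses.
TOOLS = [{
    "name": "get_weather",
    "description": "Get current weather for a city",
    "inputSchema": {"type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]},
}]

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC request the way an MCP server would."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call" and params.get("name") == "get_weather":
        city = params["arguments"]["city"]
        result = {"content": [{"type": "text",
                               "text": f"Weather in {city}: 18°C"}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"Unknown method {method}"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Step 2: discovery
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
# Steps 3-4: invocation and structured result
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "get_weather",
                          "arguments": {"city": "Oslo"}}})
```

The real protocol adds an initialization handshake and notifications on top, but every exchange follows this request-id/result pattern.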

Building an MCP Server#

Python Example#

The official mcp Python package makes server creation straightforward:

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp import types
import httpx

server = Server("weather-server")

@server.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="get_weather",
            description="Get current weather for a city",
            inputSchema={
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name != "get_weather":
        raise ValueError(f"Unknown tool: {name}")
    city = arguments["city"]
    async with httpx.AsyncClient() as client:
        response = await client.get(
            "https://api.weatherapi.com/v1/current.json",
            params={"key": "YOUR_API_KEY", "q": city}
        )
    data = response.json()
    return [types.TextContent(
        type="text",
        text=f"Weather in {city}: {data['current']['temp_c']}°C, "
             f"{data['current']['condition']['text']}"
    )]

# Run on stdio (standard transport for local MCP servers)
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream,
                        server.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

TypeScript Example#

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CallToolRequestSchema, ListToolsRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server({ name: "notes-server", version: "1.0.0" },
  { capabilities: { tools: {} } });

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "create_note",
    description: "Create a new note",
    inputSchema: {
      type: "object",
      properties: {
        title: { type: "string" },
        content: { type: "string" }
      },
      required: ["title", "content"]
    }
  }]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "create_note") {
    const { title, content } = request.params.arguments as { title: string; content: string };
    // Save the note...
    return { content: [{ type: "text", text: `Note "${title}" created.` }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);

MCP Server Transports#

stdio (Local)#

The server runs as a subprocess spawned by the host application. Communication uses stdin/stdout pipes. This is the standard approach for local MCP servers — simple, no network setup, works on any OS.

// Claude Desktop config (claude_desktop_config.json)
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/path/to/my_mcp_server.py"]
    }
  }
}
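The stdio framing itself is simple: the host writes one JSON-RPC message per line to the server's stdin, and the server answers with one JSON message per line on stdout. A stdlib-only sketch of that loop (the handler is a placeholder):

```python
import json

def handle_line(handle, line: str) -> str:
    """One stdio round-trip: parse a newline-delimited JSON-RPC message,
    dispatch it, and serialize the response as a single output line."""
    response = handle(json.loads(line))
    return json.dumps(response) + "\n"

def serve(handle, stdin, stdout):
    """The stdio loop a local MCP server runs until its host exits.
    Call as serve(my_handler, sys.stdin, sys.stdout)."""
    for line in stdin:
        if line.strip():
            stdout.write(handle_line(handle, line))
            stdout.flush()
```

This is why stdio servers need no network setup: the host owns the subprocess and the pipes, so there is nothing to bind, expose, or authenticate locally.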

HTTP + SSE (Remote)#

The server runs as a persistent HTTP service and accepts connections from multiple clients, using Server-Sent Events (SSE) to stream responses. This transport is required for remote MCP servers shared across teams or deployed to cloud infrastructure.
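On the wire, each streamed JSON-RPC response is delivered as one SSE event. A minimal stdlib formatter for a single frame (the payload here is illustrative):

```python
import json

def sse_event(payload: dict, event: str = "message") -> str:
    """Format one SSE frame: 'event:' and 'data:' lines terminated by a
    blank line, which is how SSE delimits messages on the wire."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"
```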

Streamable HTTP (Production)#

Added in the 2025 MCP specification update, Streamable HTTP provides more efficient connection management for production deployments with many concurrent clients.

Popular MCP Servers#

| Server | Category | Use Case |
| --- | --- | --- |
| Playwright MCP (Microsoft) | Browser | Web automation, testing |
| filesystem (Anthropic) | Files | Local file read/write |
| GitHub (Anthropic) | DevTools | Repo management, PR operations |
| PostgreSQL | Database | SQL queries and schema inspection |
| SQLite | Database | Embedded database access |
| Slack | Productivity | Message sending, channel reading |
| Google Maps | Location | Places, directions, geocoding |
| Brave Search | Search | Web search capabilities |
| Memory (Anthropic) | State | Persistent key-value storage |
| Fetch (Anthropic) | Web | URL fetching and content extraction |

MCP Server vs. Direct API Integration#

| Dimension | MCP Server | Direct API Integration |
| --- | --- | --- |
| Development effort | Once — works with all MCP clients | Per client — custom code for each AI app |
| Client compatibility | Any MCP-compatible client | Only clients with matching code |
| Maintenance | Single codebase | Multiple codebases |
| Schema description | Automatic — MCP handles discovery | Manual — each client needs documentation |
| Setup complexity | Requires MCP runtime | Simpler for single-client use |
| Best for | Reusable tools across multiple agents | One-off integrations |

Common Misconceptions#

Misconception: MCP servers require complex infrastructure.

Local MCP servers running on stdio are just Python or Node.js scripts. Many production-grade MCP servers are under 200 lines of code. The complexity is in what the server does (calling APIs, querying databases), not the MCP protocol itself.

Misconception: MCP servers only work with Claude.

MCP is an open protocol. Cursor, VS Code Copilot, Continue.dev, Zed, and many custom agents support MCP. Any application that implements the MCP client specification can use any MCP server.

Misconception: MCP replaces direct tool calling.

MCP standardizes how tools are exposed and discovered. The tool-calling mechanism (the AI deciding to use a tool and the host executing it) still works the same way. MCP is an interoperability layer, not a replacement for tool-calling semantics.

Related Terms#

  • Model Context Protocol (MCP) — The protocol an MCP server implements
  • Tool Calling — How AI agents invoke MCP server tools
  • Agent SDK — Frameworks that support MCP client connections
  • Agentic Workflow — Multi-step workflows using MCP servers
  • AI Agents — The agents that connect to MCP servers
  • Understanding AI Agent Architecture — Architecture tutorial covering tool integration and MCP
  • CrewAI vs LangChain — Comparing frameworks with MCP server support

Frequently Asked Questions#

What is an MCP server?#

An MCP server is a program that implements the server side of the Model Context Protocol, exposing tools, resources, and prompts to AI agents through a standardized interface. Any MCP-compatible AI client can connect to any MCP server without custom integration code.

What is the difference between an MCP server and an API?#

A traditional API requires custom client code per AI application. An MCP server implements a standard protocol so any MCP-compatible client connects to it without modification. MCP standardizes discovery (clients can automatically learn what tools exist) and invocation, similar to how HTTP standardized web communication.

How do I build an MCP server?#

Use the official mcp Python package or @modelcontextprotocol/sdk TypeScript package. Define tools with name, description, and input schema, implement the tool execution logic, and run the server on stdio or HTTP. Most simple MCP servers can be built in under 100 lines of code.

What are the most popular MCP servers?#

The most-used MCP servers include Microsoft's Playwright MCP (browser automation), Anthropic's filesystem server (local files), the GitHub MCP server (repository operations), PostgreSQL and SQLite servers (database access), and the Slack server (messaging). The ecosystem has 10,000+ servers as of 2025.

Do MCP servers work with AI models other than Claude?#

Yes. MCP is an open protocol supported by Cursor, VS Code Copilot, Continue.dev, Zed, and many custom agent frameworks. Any application implementing the MCP client specification can use any MCP server, regardless of which AI model powers it.

Tags:
mcp, architecture, fundamentals

Related Glossary Terms

What Is an MCP Client?

An MCP client is the host application that connects to one or more MCP servers to gain access to tools, resources, and prompts. Examples include Claude Desktop, VS Code extensions, Cursor, and custom AI agents built with the MCP SDK.

What Is Few-Shot Prompting?

Few-shot prompting is a technique where a small number of input-output examples are included in a prompt to guide an LLM to produce responses in a specific format, style, or reasoning pattern — enabling rapid adaptation to new tasks without fine-tuning or retraining.

What Is MCP Transport?

MCP transport is the communication layer that carries messages between an MCP client and an MCP server. The main transports are stdio (local subprocess), HTTP with Server-Sent Events, and Streamable HTTP — each suited for different deployment scenarios.

What Is a Multimodal AI Agent?

A multimodal AI agent is an AI system that perceives and processes multiple input modalities — text, images, audio, video, and structured data — enabling tasks that require cross-modal reasoning, understanding, and action beyond what text-only agents can handle.

← Back to Glossary