# Build a Custom MCP Server
The Model Context Protocol (MCP) is an open standard created by Anthropic that defines a universal way for AI assistants to connect to external tools, data sources, and services. Think of it as USB-C for AI: any MCP-compatible assistant (Claude, OpenAI models, Cursor, etc.) can connect to any MCP server without custom integration code.
Instead of writing LangChain tools, OpenAI function definitions, and Anthropic tool descriptions all separately, you write one MCP server — and every MCP-compatible client can use it automatically.
In this tutorial you will build a fully functional MCP server that exposes a task management system as tools and resources. You will implement it in both Python and TypeScript and integrate it with Claude Desktop.
## What You'll Learn
- How MCP works: the protocol primitives (tools, resources, prompts)
- How to build an MCP server in Python using the mcp package
- How to build an MCP server in TypeScript using @modelcontextprotocol/sdk
- How to define tools with proper schemas for AI invocation
- How to expose resources (read-only data) alongside tools
- How to connect your server to Claude Desktop for testing
## Prerequisites
- Python 3.10+ for the Python implementation
- Node.js 18+ for the TypeScript implementation
- Claude Desktop installed (for integration testing)
- Basic familiarity with how AI assistants call external tools and the surrounding agent framework ecosystem
- Familiarity with JSON schema
## Step 1: Understanding MCP Primitives
MCP servers expose three types of capabilities:
| Primitive | Description | Example |
|---|---|---|
| Tools | Functions the AI can call with arguments | create_task(title, priority) |
| Resources | Read-only data the AI can access | tasks://all — list of all tasks |
| Prompts | Reusable prompt templates with parameters | task_summary_prompt(date_range) |
Your server can expose any combination of these. The AI client discovers them automatically through the MCP handshake.
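Under the hood, that handshake and everything after it are plain JSON-RPC 2.0 messages. The sketch below shows the shape of a tool-discovery exchange and a tool invocation; the field values are illustrative, and details such as the protocol version string vary by SDK release:

```python
import json

# What a client sends to discover tools (the MCP "tools/list" method)
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# What a server like the one below answers with (abbreviated)
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_task",
                "description": "Create a new task",
                "inputSchema": {
                    "type": "object",
                    "properties": {"title": {"type": "string"}},
                    "required": ["title"],
                },
            }
        ]
    },
}

# Invoking a tool is a "tools/call" request carrying the arguments
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "create_task", "arguments": {"title": "Ship v1"}},
}

# Over the stdio transport, each message travels as one line of JSON
print(json.dumps(call_request))
```

The SDKs in Steps 2 and 3 generate and parse these messages for you; you only implement the handlers.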
## Step 2: Python MCP Server
Install the Python SDK:
```shell
mkdir mcp-task-server && cd mcp-task-server
python -m venv .venv && source .venv/bin/activate
pip install mcp
```
Now build the complete server in server.py:
```python
# server.py
import asyncio
import json
from datetime import datetime

import mcp.server.stdio
import mcp.types as types
from mcp.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions

# In-memory task store (replace with a real database in production)
tasks: dict[str, dict] = {}
task_counter = 0


def generate_task_id() -> str:
    global task_counter
    task_counter += 1
    return f"task_{task_counter:04d}"


# Initialize the MCP server
server = Server("task-manager")


@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    """Return the list of tools this server provides."""
    return [
        types.Tool(
            name="create_task",
            description="Create a new task with a title, description, priority, and optional due date.",
            inputSchema={
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "Short, descriptive task title.",
                    },
                    "description": {
                        "type": "string",
                        "description": "Detailed description of what the task involves.",
                    },
                    "priority": {
                        "type": "string",
                        "enum": ["low", "medium", "high", "urgent"],
                        "description": "Task priority level.",
                        "default": "medium",
                    },
                    "due_date": {
                        "type": "string",
                        "format": "date",
                        "description": "Due date in YYYY-MM-DD format (optional).",
                    },
                },
                "required": ["title"],
            },
        ),
        types.Tool(
            name="list_tasks",
            description="List all tasks, optionally filtered by status or priority.",
            inputSchema={
                "type": "object",
                "properties": {
                    "status": {
                        "type": "string",
                        "enum": ["pending", "in_progress", "completed", "all"],
                        "default": "all",
                        "description": "Filter tasks by status.",
                    },
                    "priority": {
                        "type": "string",
                        "enum": ["low", "medium", "high", "urgent", "all"],
                        "default": "all",
                        "description": "Filter tasks by priority.",
                    },
                },
            },
        ),
        types.Tool(
            name="update_task",
            description="Update an existing task's status, description, or priority.",
            inputSchema={
                "type": "object",
                "properties": {
                    "task_id": {
                        "type": "string",
                        "description": "The task ID to update (e.g., 'task_0001').",
                    },
                    "status": {
                        "type": "string",
                        "enum": ["pending", "in_progress", "completed"],
                        "description": "New status for the task.",
                    },
                    "priority": {
                        "type": "string",
                        "enum": ["low", "medium", "high", "urgent"],
                        "description": "New priority for the task.",
                    },
                    "description": {
                        "type": "string",
                        "description": "Updated task description.",
                    },
                },
                "required": ["task_id"],
            },
        ),
        types.Tool(
            name="delete_task",
            description="Permanently delete a task by its ID.",
            inputSchema={
                "type": "object",
                "properties": {
                    "task_id": {
                        "type": "string",
                        "description": "The task ID to delete.",
                    },
                },
                "required": ["task_id"],
            },
        ),
        types.Tool(
            name="get_task_stats",
            description="Get a summary of task statistics: counts by status and priority.",
            inputSchema={"type": "object", "properties": {}},
        ),
    ]


@server.call_tool()
async def handle_call_tool(
    name: str, arguments: dict | None
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
    """Execute a tool call and return results."""
    args = arguments or {}

    if name == "create_task":
        task_id = generate_task_id()
        task = {
            "id": task_id,
            "title": args["title"],
            "description": args.get("description", ""),
            "priority": args.get("priority", "medium"),
            "status": "pending",
            "due_date": args.get("due_date"),
            "created_at": datetime.now().isoformat(),
        }
        tasks[task_id] = task
        return [types.TextContent(
            type="text",
            text=f"Created task {task_id}: '{task['title']}' (priority: {task['priority']})",
        )]

    elif name == "list_tasks":
        status_filter = args.get("status", "all")
        priority_filter = args.get("priority", "all")
        filtered = [
            t for t in tasks.values()
            if (status_filter == "all" or t["status"] == status_filter)
            and (priority_filter == "all" or t["priority"] == priority_filter)
        ]
        if not filtered:
            return [types.TextContent(type="text", text="No tasks found matching the criteria.")]
        task_list = "\n".join(
            f"[{t['id']}] {t['title']} — {t['status']} ({t['priority']} priority)"
            + (f" — due {t['due_date']}" if t.get("due_date") else "")
            for t in sorted(filtered, key=lambda x: x["created_at"])
        )
        return [types.TextContent(type="text", text=f"Found {len(filtered)} tasks:\n\n{task_list}")]

    elif name == "update_task":
        task_id = args["task_id"]
        if task_id not in tasks:
            return [types.TextContent(type="text", text=f"Error: Task '{task_id}' not found.")]
        task = tasks[task_id]
        updates = []
        for field in ("status", "priority", "description"):
            if field in args:
                old_val = task.get(field, "")
                task[field] = args[field]
                updates.append(f"{field}: '{old_val}' → '{args[field]}'")
        if not updates:
            return [types.TextContent(type="text", text=f"No updatable fields provided for {task_id}.")]
        task["updated_at"] = datetime.now().isoformat()
        return [types.TextContent(
            type="text",
            text=f"Updated task {task_id}:\n" + "\n".join(updates),
        )]

    elif name == "delete_task":
        task_id = args["task_id"]
        if task_id not in tasks:
            return [types.TextContent(type="text", text=f"Error: Task '{task_id}' not found.")]
        deleted = tasks.pop(task_id)
        return [types.TextContent(
            type="text",
            text=f"Deleted task {task_id}: '{deleted['title']}'",
        )]

    elif name == "get_task_stats":
        total = len(tasks)
        if total == 0:
            return [types.TextContent(type="text", text="No tasks in the system.")]
        by_status: dict[str, int] = {}
        by_priority: dict[str, int] = {}
        for task in tasks.values():
            by_status[task["status"]] = by_status.get(task["status"], 0) + 1
            by_priority[task["priority"]] = by_priority.get(task["priority"], 0) + 1
        stats_text = f"Task Statistics (Total: {total})\n\nBy Status:\n"
        stats_text += "\n".join(f"  {s}: {c}" for s, c in by_status.items())
        stats_text += "\n\nBy Priority:\n"
        stats_text += "\n".join(f"  {p}: {c}" for p, c in by_priority.items())
        return [types.TextContent(type="text", text=stats_text)]

    else:
        raise ValueError(f"Unknown tool: {name}")


@server.list_resources()
async def handle_list_resources() -> list[types.Resource]:
    """Expose tasks as readable resources."""
    return [
        types.Resource(
            uri="tasks://all",
            name="All Tasks",
            description="JSON dump of all tasks in the system",
            mimeType="application/json",
        ),
        types.Resource(
            uri="tasks://pending",
            name="Pending Tasks",
            description="Tasks that have not been started yet",
            mimeType="application/json",
        ),
    ]


@server.read_resource()
async def handle_read_resource(uri) -> str:
    """Return resource content when requested."""
    uri = str(uri)  # the SDK may pass a URL object rather than a plain string
    if uri == "tasks://all":
        return json.dumps(list(tasks.values()), indent=2)
    elif uri == "tasks://pending":
        pending = [t for t in tasks.values() if t["status"] == "pending"]
        return json.dumps(pending, indent=2)
    else:
        raise ValueError(f"Unknown resource: {uri}")


async def main():
    """Run the MCP server using stdio transport."""
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="task-manager",
                server_version="1.0.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )


if __name__ == "__main__":
    asyncio.run(main())
```
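Because each tool handler above is a thin wrapper around plain dictionary operations, you can factor the core logic into pure functions and exercise it without any MCP plumbing. A minimal sketch of that refactoring (these helper names are illustrative, not part of the SDK):

```python
from datetime import datetime

def create_task(tasks: dict, counter: int, title: str, priority: str = "medium"):
    """Pure version of the create_task tool handler: returns (task, new_counter)."""
    counter += 1
    task_id = f"task_{counter:04d}"
    task = {
        "id": task_id,
        "title": title,
        "priority": priority,
        "status": "pending",
        "created_at": datetime.now().isoformat(),
    }
    tasks[task_id] = task
    return task, counter

def filter_tasks(tasks: dict, status: str = "all", priority: str = "all"):
    """Pure version of the list_tasks filter."""
    return [
        t for t in tasks.values()
        if (status == "all" or t["status"] == status)
        and (priority == "all" or t["priority"] == priority)
    ]

# Exercise the logic with no MCP client or server involved
tasks: dict = {}
task, counter = create_task(tasks, 0, "Review Q4 report", priority="high")
assert task["id"] == "task_0001"
assert filter_tasks(tasks, priority="high")[0]["title"] == "Review Q4 report"
```

Keeping handlers this thin makes the server easy to unit-test; the MCP layer then only translates JSON-RPC calls into function calls.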
## Step 3: TypeScript MCP Server
For Node.js environments, the TypeScript SDK is the official option:
```shell
mkdir mcp-ts-server && cd mcp-ts-server
npm init -y
npm install @modelcontextprotocol/sdk
npm install -D typescript @types/node ts-node
npx tsc --init --module commonjs --target es2020 --strict
```
Create src/index.ts:
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
  Tool,
} from "@modelcontextprotocol/sdk/types.js";

interface Task {
  id: string;
  title: string;
  status: "pending" | "in_progress" | "completed";
  priority: "low" | "medium" | "high";
  createdAt: string;
}

const tasks = new Map<string, Task>();
let taskCounter = 0;

const server = new Server(
  { name: "task-manager-ts", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "create_task",
      description: "Create a new task",
      inputSchema: {
        type: "object" as const,
        properties: {
          title: { type: "string", description: "Task title" },
          priority: {
            type: "string",
            enum: ["low", "medium", "high"],
            default: "medium",
          },
        },
        required: ["title"],
      },
    } as Tool,
    {
      name: "list_tasks",
      description: "List all tasks",
      inputSchema: {
        type: "object" as const,
        properties: {
          status: {
            type: "string",
            enum: ["pending", "in_progress", "completed", "all"],
            default: "all",
          },
        },
      },
    } as Tool,
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === "create_task") {
    const id = `task_${String(++taskCounter).padStart(4, "0")}`;
    const task: Task = {
      id,
      title: args?.title as string,
      status: "pending",
      priority: (args?.priority as Task["priority"]) || "medium",
      createdAt: new Date().toISOString(),
    };
    tasks.set(id, task);
    return {
      content: [
        { type: "text", text: `Created task ${id}: "${task.title}" (${task.priority})` },
      ],
    };
  }

  if (name === "list_tasks") {
    const statusFilter = (args?.status as string) || "all";
    const filtered = Array.from(tasks.values()).filter(
      (t) => statusFilter === "all" || t.status === statusFilter
    );
    if (filtered.length === 0) {
      return { content: [{ type: "text", text: "No tasks found." }] };
    }
    const list = filtered
      .map((t) => `[${t.id}] ${t.title} — ${t.status} (${t.priority})`)
      .join("\n");
    return { content: [{ type: "text", text: `${filtered.length} tasks:\n${list}` }] };
  }

  throw new Error(`Unknown tool: ${name}`);
});

// Top-level await is not valid under the CommonJS config above,
// so start the transport from an async main() instead.
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch((err) => {
  console.error("Server error:", err);
  process.exit(1);
});
```
## Step 4: Connect to Claude Desktop
Add your server to Claude Desktop's configuration file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
```json
{
  "mcpServers": {
    "task-manager": {
      "command": "/path/to/your/.venv/bin/python",
      "args": ["/path/to/mcp-task-server/server.py"],
      "env": {}
    }
  }
}
```
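To register the TypeScript server instead, a similar entry works. This sketch assumes you compiled src/index.ts to dist/index.js with tsc; the paths are illustrative:

```json
{
  "mcpServers": {
    "task-manager-ts": {
      "command": "node",
      "args": ["/path/to/mcp-ts-server/dist/index.js"],
      "env": {}
    }
  }
}
```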
Restart Claude Desktop. You should see the tools icon in the Claude interface. Ask Claude: "Create a high priority task to review the Q4 financial report by tomorrow" — Claude will call your server's create_task tool automatically.
## What's Next
You have built a complete MCP server that can connect to any MCP-compatible AI client. Next steps:
- Connect from code: Learn how to connect AI agents to MCP servers programmatically from Python
- Explore the ecosystem: Read the MCP server glossary entry to understand the full ecosystem
- OpenAI Agents SDK tools: Compare MCP tools with OpenAI Agents SDK tool calling
- LangChain tools: See how LangChain handles tools for comparison
- Tool use theory: Review tool use patterns to understand how agents decide which tools to call
## Frequently Asked Questions
What is the difference between stdio and HTTP transport in MCP?
Stdio transport (used in this tutorial) runs the server as a child process that communicates via standard input/output. It is simple and secure — the server has no network exposure. HTTP transport runs the server as a web service with SSE (Server-Sent Events) for streaming. Use stdio for local desktop integrations; use HTTP for remote servers or multi-client deployments.
Can multiple Claude Desktop sessions share one MCP server?
With stdio transport, each Claude Desktop session spawns a separate server process — they do not share state. For shared state (like a real task database), use HTTP transport with a persistent server process, or use a database backend that all server instances connect to.
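One possible shape for that shared backend, sketched with Python's built-in sqlite3 module (the table schema and file name are illustrative):

```python
import sqlite3

# Every server process opens the same database file, so task state is
# shared across Claude Desktop sessions even with stdio transport.
conn = sqlite3.connect("tasks.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tasks (
        id TEXT PRIMARY KEY,
        title TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'pending',
        priority TEXT NOT NULL DEFAULT 'medium'
    )
""")

def create_task(conn, task_id: str, title: str, priority: str = "medium") -> None:
    """Insert (or overwrite) a task row; commit so other processes see it."""
    conn.execute(
        "INSERT OR REPLACE INTO tasks (id, title, priority) VALUES (?, ?, ?)",
        (task_id, title, priority),
    )
    conn.commit()

def list_tasks(conn, status: str = "all") -> list[tuple]:
    """Read tasks back, optionally filtered by status."""
    if status == "all":
        return conn.execute("SELECT id, title, status, priority FROM tasks").fetchall()
    return conn.execute(
        "SELECT id, title, status, priority FROM tasks WHERE status = ?", (status,)
    ).fetchall()

create_task(conn, "task_0001", "Review Q4 report", "high")
rows = list_tasks(conn, "pending")
```

The tool handlers from Step 2 would then call these functions instead of mutating the in-memory dict.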
How do I debug an MCP server that is not working in Claude Desktop?
Check the Claude Desktop logs at ~/Library/Logs/Claude/mcp-server-*.log (macOS) or %APPDATA%\Claude\logs\ (Windows). The logs show server startup errors, tool call details, and any Python/Node exceptions. You can also test your server directly using the mcp CLI (installed via the mcp[cli] extra): mcp dev server.py.
Should I use Python or TypeScript for my MCP server?
Both are fully supported. Use Python if your backend logic is in Python (database queries, ML pipelines, data processing). Use TypeScript/Node.js if your server needs to integrate with JavaScript ecosystems or you want to deploy on Vercel Edge Functions. The protocol is identical — clients cannot tell which language the server uses.
Can I publish my MCP server for others to use?
Yes. The MCP ecosystem has a growing registry at mcp.so and on GitHub. Package your Python server with uv or a Docker image. For TypeScript servers, publish to npm with npx support so users can run it without installation: "command": "npx", "args": ["your-mcp-server"].