# How to Connect an AI Agent to MCP Servers
Model Context Protocol (MCP) is the open standard that lets AI agents connect to external tools and data sources through a uniform interface. Building an MCP server is only half the picture — you also need to know how to write the client-side agent code that connects to those servers, discovers their tools, and incorporates them into a reasoning loop.
This tutorial shows you exactly how to do that in Python. You will connect a Claude-powered agent to local and remote MCP servers, discover available tools programmatically, and build a complete agentic loop that calls MCP tools as needed to answer user questions.
## What You'll Build
By the end of this tutorial you will have:
- A Python MCP client that connects to a local filesystem MCP server
- An agent loop that uses the `anthropic` package to reason about which tools to call
- A multi-server aggregator that exposes tools from several MCP servers as one unified list
- A remote HTTP MCP client using SSE transport
## Prerequisites
- Python 3.11 or later installed on your machine
- `pip install anthropic mcp` — the two core packages needed
- An Anthropic API key set as the `ANTHROPIC_API_KEY` environment variable
- Basic familiarity with tool use in LLMs and async Python
## Step 1: Understanding MCP Client Architecture
Before writing code, it helps to understand what the MCP client layer actually does.
When your agent code connects to an MCP server, it performs a handshake to establish the session and negotiate capabilities. It then calls `list_tools` to discover what the server offers. At runtime, when the LLM returns a `tool_use` block in its response, your client code calls the matching MCP tool by name and returns the result to the LLM as a `tool_result` message. This repeats until the LLM produces a final text response with no tool calls.
The MCP server you build never needs to know which LLM is calling it. The protocol is symmetric and model-agnostic.
```text
Agent Code (MCP Client)
│
├─ connect() ──────────────────► MCP Server (filesystem, DB, web)
├─ list_tools() ───────────────► [list of Tool objects with JSON schemas]
│
├─ LLM call with tool schemas
│    └─ response contains tool_use block
│
├─ call_tool(name, args) ──────► MCP Server executes tool
│    └─ returns TextContent
│
└─ LLM call with tool_result
     └─ final text response
```
## Step 2: Install the MCP Python SDK
Create a virtual environment and install the required packages:
```bash
mkdir mcp-agent-client && cd mcp-agent-client
python -m venv .venv && source .venv/bin/activate
pip install anthropic mcp
```
You will also want the reference filesystem MCP server for local testing. The official implementation ships as an npm package and runs under `npx` (no separate install step is needed; `npx -y` fetches it on first use):

```bash
npx -y @modelcontextprotocol/server-filesystem /tmp
```
## Step 3: Connect to a Local MCP Server
The MCP Python SDK provides `ClientSession` and `StdioServerParameters` for connecting to a local server over stdio transport. The server runs as a subprocess; your client communicates with it over stdin/stdout.
```python
# client.py
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def connect_filesystem_server():
    """Connect to the reference MCP filesystem server."""
    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],  # Expose /tmp
        env=None,
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Perform the MCP handshake
            await session.initialize()

            # Discover what this server offers
            tools_response = await session.list_tools()
            print("Available tools:")
            for tool in tools_response.tools:
                print(f"  - {tool.name}: {tool.description}")

            # Call a tool directly
            result = await session.call_tool(
                "list_directory",
                arguments={"path": "/tmp"},
            )
            print("\nDirectory listing:")
            for content in result.content:
                if hasattr(content, "text"):
                    print(content.text)


asyncio.run(connect_filesystem_server())
```
Run this and you will see the filesystem server's tools printed out, followed by the listing of /tmp. This confirms the client-server handshake is working.
## Step 4: Discover Available Tools
The `list_tools` response returns the full JSON Schema for each tool. You need to convert these schemas into the format Anthropic's API expects before passing them to the model. Here is a helper function:
```python
from mcp.types import Tool


def mcp_tools_to_anthropic(tools: list[Tool]) -> list[dict]:
    """Convert MCP tool definitions to Anthropic tool format."""
    return [
        {
            "name": tool.name,
            "description": tool.description or "",
            "input_schema": tool.inputSchema,
        }
        for tool in tools
    ]
```
The `inputSchema` field from MCP is already valid JSON Schema, so no transformation is needed — you can pass it directly as `input_schema` in the Anthropic `tools` array.
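To see that pass-through concretely, here is a self-contained check that mirrors the helper above. A `SimpleNamespace` stands in for `mcp.types.Tool` so the snippet runs without a live server; the tool name and schema are made up for illustration:

```python
from types import SimpleNamespace

# Stand-in with the same attribute names as mcp.types.Tool
# (illustration only; not a real server's tool)
fake_tool = SimpleNamespace(
    name="read_file",
    description="Read a file from disk",
    inputSchema={
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
)

# Same conversion as mcp_tools_to_anthropic, inlined for the demo
converted = [
    {
        "name": t.name,
        "description": t.description or "",
        "input_schema": t.inputSchema,
    }
    for t in [fake_tool]
]

# The schema object reaches the Anthropic payload untouched
assert converted[0]["input_schema"] is fake_tool.inputSchema
print(converted[0]["name"])  # read_file
```

The identity assertion is the point: the conversion only renames `inputSchema` to `input_schema`; the schema itself is never copied or rewritten.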
## Step 5: Build the Agentic Loop
Now combine the MCP client and the Anthropic API into a complete agentic loop. The loop continues until Claude produces a response with no tool calls:
```python
# agent.py
import asyncio
import json

import anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def run_agent(user_message: str, server_params: StdioServerParameters) -> str:
    """Run an agent loop that uses MCP tools to answer a question."""
    client = anthropic.AsyncAnthropic()  # async client, so API calls don't block the loop

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Get tools from the MCP server and convert to Anthropic format
            tools_response = await session.list_tools()
            tools = [
                {
                    "name": t.name,
                    "description": t.description or "",
                    "input_schema": t.inputSchema,
                }
                for t in tools_response.tools
            ]

            messages = [{"role": "user", "content": user_message}]

            # Agentic loop: continues until stop_reason == "end_turn"
            while True:
                response = await client.messages.create(
                    model="claude-3-5-sonnet-20241022",
                    max_tokens=4096,
                    tools=tools,
                    messages=messages,
                )
                # Add the assistant response to the message history
                messages.append({"role": "assistant", "content": response.content})

                # Done — no more tool calls
                if response.stop_reason == "end_turn":
                    for block in response.content:
                        if hasattr(block, "text"):
                            return block.text
                    return ""

                # Process tool calls
                if response.stop_reason == "tool_use":
                    tool_results = []
                    for block in response.content:
                        if block.type == "tool_use":
                            print(f"Calling tool: {block.name}({json.dumps(block.input)})")
                            # Execute the tool via MCP
                            result = await session.call_tool(
                                block.name,
                                arguments=block.input,
                            )
                            # Collect result text from all content blocks
                            result_text = "\n".join(
                                c.text for c in result.content if hasattr(c, "text")
                            )
                            tool_results.append({
                                "type": "tool_result",
                                "tool_use_id": block.id,
                                "content": result_text,
                            })
                    # Feed tool results back to the model
                    messages.append({"role": "user", "content": tool_results})
                else:
                    break  # Unexpected stop reason (e.g. "max_tokens")
            return ""


if __name__ == "__main__":
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    )
    answer = asyncio.run(
        run_agent(
            "List the files in /tmp and tell me which ones are Python scripts.",
            params,
        )
    )
    print("\nAgent answer:", answer)
```
## Step 6: Connect to Multiple MCP Servers
Production agents typically need more than one server — for example, a filesystem server, a web search server, and a database server simultaneously. The aggregator pattern maintains a session per server and merges their tool lists:
```python
# multi_server_agent.py
import asyncio
from contextlib import AsyncExitStack

import anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


class MultiServerAgent:
    """Agent that connects to multiple MCP servers and routes tool calls."""

    def __init__(self):
        self.sessions: dict[str, ClientSession] = {}
        self.tool_to_session: dict[str, str] = {}
        self.all_tools: list[dict] = []
        self.exit_stack = AsyncExitStack()
        self.client = anthropic.AsyncAnthropic()

    async def connect(self, name: str, params: StdioServerParameters):
        """Connect to one MCP server and register its tools."""
        read, write = await self.exit_stack.enter_async_context(
            stdio_client(params)
        )
        session = await self.exit_stack.enter_async_context(
            ClientSession(read, write)
        )
        await session.initialize()
        self.sessions[name] = session

        tools_response = await session.list_tools()
        for tool in tools_response.tools:
            self.tool_to_session[tool.name] = name
            self.all_tools.append({
                "name": tool.name,
                "description": tool.description or "",
                "input_schema": tool.inputSchema,
            })
        print(f"Connected to '{name}': {len(tools_response.tools)} tools registered")

    async def call_tool(self, tool_name: str, tool_input: dict) -> str:
        """Route a tool call to the correct MCP server."""
        server_name = self.tool_to_session.get(tool_name)
        if not server_name:
            return f"Error: tool '{tool_name}' not found in any connected server"
        session = self.sessions[server_name]
        result = await session.call_tool(tool_name, arguments=tool_input)
        return "\n".join(
            c.text for c in result.content if hasattr(c, "text")
        )

    async def run(self, user_message: str) -> str:
        """Run the agentic loop across all connected servers."""
        messages = [{"role": "user", "content": user_message}]
        while True:
            response = await self.client.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=4096,
                tools=self.all_tools,
                messages=messages,
            )
            messages.append({"role": "assistant", "content": response.content})

            if response.stop_reason == "end_turn":
                for block in response.content:
                    if hasattr(block, "text"):
                        return block.text
                return ""

            if response.stop_reason == "tool_use":
                tool_results = []
                for block in response.content:
                    if block.type == "tool_use":
                        result_text = await self.call_tool(block.name, block.input)
                        tool_results.append({
                            "type": "tool_result",
                            "tool_use_id": block.id,
                            "content": result_text,
                        })
                messages.append({"role": "user", "content": tool_results})
            else:
                return ""  # Unexpected stop reason (e.g. "max_tokens")

    async def close(self):
        await self.exit_stack.aclose()


async def main():
    agent = MultiServerAgent()
    # Connect to each server; sessions stay open on the exit stack
    await agent.connect("filesystem", StdioServerParameters(
        command="npx", args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    ))
    # Add more servers here: database, web search, etc.
    answer = await agent.run("How many files are in /tmp?")
    print("Answer:", answer)
    await agent.close()


asyncio.run(main())
```
## Step 7: Connect to Remote MCP Servers
For MCP servers hosted over the network, use HTTP with SSE transport. The `sse_client` function handles the connection to a remote server URL:
```python
# remote_client.py
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def connect_remote_server(server_url: str, api_key: str | None = None):
    """Connect to a remote MCP server via HTTP/SSE transport."""
    headers = {}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"

    async with sse_client(url=server_url, headers=headers) as (read, write):
        async with ClientSession(read, write) as session:
            # The handshake result carries the server's metadata
            init_result = await session.initialize()
            info = init_result.serverInfo
            print(f"Connected to: {info.name} v{info.version}")

            tools_response = await session.list_tools()
            print(f"Remote tools: {[t.name for t in tools_response.tools]}")
            return tools_response.tools


# Connect to a hosted MCP service
asyncio.run(connect_remote_server(
    server_url="https://mcp.example.com/sse",
    api_key="your-service-api-key",
))
```
Remote MCP servers are increasingly available as hosted SaaS offerings. You connect to them the same way you connect to local servers — only the transport module changes from `stdio_client` to `sse_client`.
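One pattern that follows from this is keeping every server, local or remote, in a single config and deciding the transport from each entry's shape at connect time. A minimal sketch under an assumed config format (the `url`/`command` keys and the `classify_transport` helper are inventions of this example, not part of the MCP SDK):

```python
def classify_transport(server_config: dict) -> str:
    """Pick the MCP transport for one server entry.

    Assumed config shape: remote servers carry a "url" key,
    local servers carry a "command" key (plus "args").
    """
    if "url" in server_config:
        return "sse"    # connect with mcp.client.sse.sse_client
    if "command" in server_config:
        return "stdio"  # spawn with mcp.client.stdio.stdio_client
    raise ValueError(f"unrecognized server config: {server_config!r}")


servers = {
    "filesystem": {"command": "npx",
                   "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]},
    "search": {"url": "https://mcp.example.com/sse"},
}
for name, cfg in servers.items():
    print(name, "->", classify_transport(cfg))
```

With this in place, a connect routine can loop over the config and call `stdio_client` or `sse_client` per entry, so adding a server is a config change rather than a code change.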
## Testing and Debugging
**Log every tool call.** In your agentic loop, print the tool name and arguments before calling `session.call_tool`. This makes it straightforward to trace what the agent decided to do and in what order.

**Test the server independently.** Use the `mcp` CLI to exercise a server without an agent: `mcp dev your_server.py`. This launches the MCP Inspector, where you can call tools manually and inspect their schemas.

**Inspect raw protocol messages.** Run the server under the MCP Inspector (`npx @modelcontextprotocol/inspector`) to watch the JSON-RPC messages exchanged during the handshake and each tool call. Useful when a tool call returns unexpected results or the handshake fails.

**Add timeouts.** Wrap `session.call_tool` in `asyncio.wait_for(session.call_tool(...), timeout=30)` to prevent a slow MCP server from hanging your agent indefinitely.

**Handle tool errors gracefully.** If a tool call raises an exception, return a descriptive error string as the `tool_result` content instead of propagating the exception. The LLM can then decide to retry with different arguments or skip that step.
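The last two tips combine naturally into one wrapper around `session.call_tool`. A minimal sketch (the `safe_call_tool` name is introduced here; `session` is any initialized `ClientSession`):

```python
import asyncio


async def safe_call_tool(session, name: str, arguments: dict,
                         timeout: float = 30.0) -> str:
    """Call an MCP tool, returning an error string instead of raising."""
    try:
        result = await asyncio.wait_for(
            session.call_tool(name, arguments=arguments), timeout=timeout
        )
        # Collect text from all returned content blocks
        return "\n".join(c.text for c in result.content if hasattr(c, "text"))
    except asyncio.TimeoutError:
        return f"Error: tool '{name}' timed out after {timeout}s"
    except Exception as exc:
        return f"Error: tool '{name}' failed: {exc}"
```

Drop this into the agentic loop in place of the bare `session.call_tool` call: the model receives the error string as a normal `tool_result` and can recover on its own.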
## What's Next
You now have a working MCP client architecture that can connect to both local and remote MCP servers, discover their tools, and drive a complete agentic loop. From here, explore these related resources:
- Build a Custom MCP Server to create server-side tools that any MCP client can consume
- Model Context Protocol glossary entry for a deeper look at the protocol specification and ecosystem
- LangChain agent tutorial to compare how LangChain handles tool calling without MCP
- Tool use patterns to understand how LLMs decide which tools to invoke and when to stop
- LangChain in the agent directory for ecosystem context and framework comparisons