Chapter 10: Building MCP Clients
The Other Side of the Wire
Most MCP tutorials focus on servers, because that’s where most developers start. But at some point, you’ll need to build a client—maybe you’re creating a custom AI application, embedding MCP support in an existing tool, or building something entirely new.
An MCP client is the component that connects to MCP servers, manages the protocol lifecycle, and makes server capabilities available to your application. If you’re building an AI-powered app that needs to use MCP tools, you’re building a client (and a host).
Client vs. Host: A Quick Refresher
Remember the architecture:
- Host = your application (UI, LLM integration, business logic)
- Client = the protocol connector (one per server)
- Server = the capability provider
When people say “build a client,” they usually mean “build a host that contains clients.” The SDK provides the client; you build the host around it.
TypeScript Client
Connecting to a stdio Server
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
// Create a transport that spawns the server
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"],
});
// Create the client
const client = new Client({
  name: "my-app",
  version: "1.0.0",
});
// Connect (this performs the initialization handshake)
await client.connect(transport);
// Now you can use the client
const tools = await client.listTools();
console.log("Available tools:", tools.tools.map((t) => t.name));
// Call a tool
const result = await client.callTool({
  name: "read_file",
  arguments: { path: "/home/user/README.md" },
});
console.log("Result:", result.content);
// Clean up
await client.close();
Connecting to a Remote Server
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
const transport = new StreamableHTTPClientTransport(
  new URL("https://my-server.example.com/mcp")
);
const client = new Client({
  name: "my-app",
  version: "1.0.0",
});
await client.connect(transport);
// Use exactly the same API as stdio
const tools = await client.listTools();
The beauty of MCP’s transport abstraction: your application code is identical regardless of whether the server is local or remote. Swap the transport, keep everything else.
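For instance, a small factory can hide the choice entirely. This is a sketch with a config shape of our own invention (the ServerConfig union and createTransport helper below are not part of the SDK):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
// Hypothetical config shape: only the transport choice differs per server
type ServerConfig =
  | { type: "stdio"; command: string; args: string[] }
  | { type: "http"; url: string };
function createTransport(config: ServerConfig) {
  return config.type === "stdio"
    ? new StdioClientTransport({ command: config.command, args: config.args })
    : new StreamableHTTPClientTransport(new URL(config.url));
}
async function connectTo(config: ServerConfig): Promise<Client> {
  const client = new Client({ name: "my-app", version: "1.0.0" });
  await client.connect(createTransport(config));
  return client; // same listTools/callTool API either way
}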
Listing and Calling Tools
// Discover tools
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(`${tool.name}: ${tool.description}`);
  console.log(`  Input: ${JSON.stringify(tool.inputSchema)}`);
  if (tool.annotations) {
    console.log(`  Read-only: ${tool.annotations.readOnlyHint}`);
    console.log(`  Destructive: ${tool.annotations.destructiveHint}`);
  }
}
// Call a tool
const result = await client.callTool({
  name: "get_weather",
  arguments: {
    city: "London",
    units: "celsius",
  },
});
// Handle the result
if (result.isError) {
  console.error("Tool error:", result.content);
} else {
  for (const item of result.content) {
    if (item.type === "text") {
      console.log(item.text);
    } else if (item.type === "image") {
      // Handle image content (item.data is base64-encoded)
      console.log(`Image: ${item.mimeType}, ${item.data.length} base64 chars`);
    }
  }
}
Reading Resources
// List available resources
const { resources } = await client.listResources();
for (const resource of resources) {
  console.log(`${resource.name} (${resource.uri})`);
}
// Read a resource
const { contents } = await client.readResource({
  uri: "file:///project/README.md",
});
for (const content of contents) {
  if ("text" in content) {
    console.log(content.text);
  } else if ("blob" in content) {
    // content.blob is base64-encoded binary data
    console.log(`Binary data: ${content.blob.length} base64 chars`);
  }
}
Using Prompts
// List prompts
const { prompts } = await client.listPrompts();
// Get a prompt with arguments
const { messages } = await client.getPrompt({
  name: "code_review",
  arguments: {
    code: "function add(a, b) { return a + b; }",
    language: "javascript",
  },
});
// messages is an array of {role, content} ready for the LLM
console.log(messages);
Subscribing to Changes
import {
  ToolListChangedNotificationSchema,
  ResourceUpdatedNotificationSchema,
} from "@modelcontextprotocol/sdk/types.js";
// Listen for tool list changes (handlers take a Zod schema from the SDK's types module)
client.setNotificationHandler(ToolListChangedNotificationSchema, async () => {
  console.log("Tools changed! Re-fetching...");
  const { tools } = await client.listTools();
  // Update your application's tool list
});
// Subscribe to a resource
await client.subscribeResource({ uri: "file:///var/log/app.log" });
client.setNotificationHandler(
  ResourceUpdatedNotificationSchema,
  async (notification) => {
    const uri = notification.params.uri;
    console.log(`Resource updated: ${uri}`);
    const { contents } = await client.readResource({ uri });
    // Process updated content
  }
);
Python Client
Connecting to a stdio Server
import asyncio
from mcp.client.session import ClientSession
from mcp.client.stdio import stdio_client, StdioServerParameters
async def main():
    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/home/user"],
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()
            # List tools
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")
            # Call a tool
            result = await session.call_tool(
                "read_file",
                arguments={"path": "/home/user/README.md"},
            )
            print(result.content[0].text)
asyncio.run(main())
Connecting to a Remote Server
from mcp.client.session import ClientSession
from mcp.client.streamable_http import streamablehttp_client
async def main():
    # streamablehttp_client also yields a get_session_id callback we don't need here
    async with streamablehttp_client("https://my-server.example.com/mcp") as (
        read,
        write,
        _,
    ):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # ... same API as stdio
Full Client Example
import asyncio
from mcp.client.session import ClientSession
from mcp.client.stdio import stdio_client, StdioServerParameters
async def run_client():
    server_params = StdioServerParameters(
        command="uvx",
        args=["my-weather-server"],
        env={"WEATHER_API_KEY": "sk-..."},
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover capabilities
            tools = await session.list_tools()
            print(f"Found {len(tools.tools)} tools:")
            for tool in tools.tools:
                print(f" - {tool.name}: {tool.description}")
            resources = await session.list_resources()
            print(f"\nFound {len(resources.resources)} resources:")
            for resource in resources.resources:
                print(f" - {resource.name} ({resource.uri})")
            prompts = await session.list_prompts()
            print(f"\nFound {len(prompts.prompts)} prompts:")
            for prompt in prompts.prompts:
                print(f" - {prompt.name}: {prompt.description}")
            # Use a tool
            result = await session.call_tool(
                "get_weather",
                arguments={"city": "Tokyo", "units": "celsius"},
            )
            if result.isError:
                print(f"Error: {result.content[0].text}")
            else:
                print(f"\n{result.content[0].text}")
asyncio.run(run_client())
Building a Host: The Full Picture
A real host does more than just call tools. It orchestrates the LLM, manages multiple MCP clients, handles user interaction, and enforces security policies. Here’s a simplified but complete example:
import Anthropic from "@anthropic-ai/sdk";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
class SimpleHost {
  private anthropic = new Anthropic();
  private clients: Map<string, Client> = new Map();
  private allTools: Array<{ serverName: string; tool: any }> = [];

  async addServer(name: string, command: string, args: string[]) {
    const transport = new StdioClientTransport({ command, args });
    const client = new Client({ name: "simple-host", version: "1.0.0" });
    await client.connect(transport);
    this.clients.set(name, client);

    // Collect tools from this server
    const { tools } = await client.listTools();
    for (const tool of tools) {
      this.allTools.push({ serverName: name, tool });
    }
    console.log(`Connected to ${name}: ${tools.length} tools`);
  }

  async chat(userMessage: string): Promise<string> {
    // Convert MCP tools to Anthropic tool format
    const anthropicTools = this.allTools.map(({ serverName, tool }) => ({
      name: `${serverName}__${tool.name}`,
      description: tool.description || "",
      input_schema: tool.inputSchema,
    }));

    // Send to Claude
    let messages: Anthropic.MessageParam[] = [
      { role: "user", content: userMessage },
    ];

    while (true) {
      const response = await this.anthropic.messages.create({
        model: "claude-sonnet-4-5-20250929",
        max_tokens: 4096,
        tools: anthropicTools,
        messages,
      });

      // Check if Claude wants to use a tool
      if (response.stop_reason === "tool_use") {
        const toolUseBlocks = response.content.filter(
          (block): block is Anthropic.ToolUseBlock => block.type === "tool_use"
        );
        const toolResults: Anthropic.ToolResultBlockParam[] = [];

        for (const toolUse of toolUseBlocks) {
          // Parse the server name from the tool name
          const [serverName, ...toolNameParts] = toolUse.name.split("__");
          const toolName = toolNameParts.join("__");
          const client = this.clients.get(serverName);
          if (!client) {
            toolResults.push({
              type: "tool_result",
              tool_use_id: toolUse.id,
              content: `Error: Unknown server '${serverName}'`,
              is_error: true,
            });
            continue;
          }

          // Execute the tool via MCP
          const result = await client.callTool({
            name: toolName,
            arguments: toolUse.input as Record<string, unknown>,
          });

          // Convert MCP result to Anthropic format
          const textContent = result.content
            .filter((c): c is { type: "text"; text: string } => c.type === "text")
            .map((c) => c.text)
            .join("\n");
          toolResults.push({
            type: "tool_result",
            tool_use_id: toolUse.id,
            content: textContent,
            is_error: result.isError || false,
          });
        }

        // Add assistant message and tool results to conversation
        messages.push({ role: "assistant", content: response.content });
        messages.push({ role: "user", content: toolResults });
        // Continue the loop to get Claude's response to the tool results
        continue;
      }

      // No more tool use, return the text response
      const textBlocks = response.content.filter(
        (block): block is Anthropic.TextBlock => block.type === "text"
      );
      return textBlocks.map((b) => b.text).join("\n");
    }
  }

  async close() {
    for (const [name, client] of this.clients) {
      await client.close();
      console.log(`Disconnected from ${name}`);
    }
  }
}
// Usage
const host = new SimpleHost();
await host.addServer("files", "npx", [
  "-y",
  "@modelcontextprotocol/server-filesystem",
  "/home/user/project",
]);
await host.addServer("github", "npx", [
  "-y",
  "@modelcontextprotocol/server-github",
]);
const response = await host.chat("What files are in the project directory?");
console.log(response);
await host.close();
This example shows the full flow:
- Connect to multiple MCP servers
- Collect tools from all servers
- Convert MCP tools to the LLM’s tool format
- Send user messages to the LLM with available tools
- When the LLM wants to use a tool, route the call to the right MCP server
- Feed tool results back to the LLM
- Repeat until the LLM produces a final text response
Managing Multiple Servers
In production, you’ll likely manage multiple MCP connections:
// ServerConfig, createTransport, and refreshTools are application-defined;
// ToolListChangedNotificationSchema and Tool come from "@modelcontextprotocol/sdk/types.js".
class McpManager {
  private clients: Map<string, Client> = new Map();

  async connect(name: string, config: ServerConfig) {
    const transport = this.createTransport(config);
    const client = new Client({
      name: "my-app",
      version: "1.0.0",
    });
    // Set up notification handlers before connecting
    client.setNotificationHandler(ToolListChangedNotificationSchema, async () => {
      console.log(`Tools changed on ${name}`);
      await this.refreshTools(name);
    });
    await client.connect(transport);
    this.clients.set(name, client);
  }

  async getAllTools(): Promise<Map<string, Tool[]>> {
    const result = new Map();
    for (const [name, client] of this.clients) {
      const { tools } = await client.listTools();
      result.set(name, tools);
    }
    return result;
  }

  async callTool(serverName: string, toolName: string, args: any) {
    const client = this.clients.get(serverName);
    if (!client) throw new Error(`No server: ${serverName}`);
    return client.callTool({ name: toolName, arguments: args });
  }

  async disconnectAll() {
    for (const client of this.clients.values()) {
      await client.close();
    }
    this.clients.clear();
  }
}
Handling Server-Initiated Requests
Servers can request things from clients: sampling (LLM completions), elicitation (user input), and roots (workspace info). Your client needs to handle these:
import {
  CreateMessageRequestSchema,
  ListRootsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
// Capabilities are declared in the second constructor argument
const client = new Client(
  {
    name: "my-app",
    version: "1.0.0",
  },
  {
    capabilities: {
      sampling: {}, // We support sampling
      roots: { listChanged: true }, // We support roots
    },
  }
);
// Handle sampling requests
client.setRequestHandler(CreateMessageRequestSchema, async (request) => {
  // The server is asking us to generate an LLM completion
  const response = await anthropic.messages.create({
    model:
      request.params.modelPreferences?.hints?.[0]?.name ||
      "claude-sonnet-4-5-20250929",
    max_tokens: request.params.maxTokens,
    // Simplified: map MCP sampling messages to Anthropic's format (text only)
    messages: request.params.messages.map((m) => ({
      role: m.role,
      content: m.content.type === "text" ? m.content.text : "",
    })),
  });
  const firstBlock = response.content[0];
  return {
    role: "assistant",
    content: {
      type: "text",
      text: firstBlock.type === "text" ? firstBlock.text : "",
    },
    model: response.model,
  };
});
// Handle roots requests
client.setRequestHandler(ListRootsRequestSchema, async () => ({
  roots: [
    {
      uri: "file:///home/user/project",
      name: "Current Project",
    },
  ],
}));
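The snippets above cover sampling and roots; elicitation works the same way. What follows is a hedged sketch: recent TypeScript SDKs expose an ElicitRequestSchema for the elicitation/create request, and promptUserForInput stands in for whatever UI your host provides (check your SDK version for the exact names and result shape).

// Sketch: handle elicitation (the server asks the user for structured input).
// promptUserForInput is a hypothetical UI hook; also declare `elicitation: {}`
// in the client capabilities if you support this.
client.setRequestHandler(ElicitRequestSchema, async (request) => {
  const answer = await promptUserForInput(
    request.params.message,
    request.params.requestedSchema
  );
  if (answer === null) {
    return { action: "decline" };
  }
  return { action: "accept", content: answer };
});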
Best Practices for Client Development
1. Handle Connection Failures Gracefully
try {
  await client.connect(transport);
} catch (error) {
  console.error(`Failed to connect to ${serverName}:`, error);
  // Don't crash the app—degrade gracefully
  // The user can still work without this server
}
2. Implement Timeouts
const result = await Promise.race([
  client.callTool({ name: "slow_tool", arguments: {} }),
  new Promise((_, reject) =>
    setTimeout(() => reject(new Error("Tool call timed out")), 30000)
  ),
]);
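Recent TypeScript SDK versions also accept a per-request options object with a timeout, which is usually cleaner than racing promises yourself; treat the exact signature as version-dependent and check your SDK.

// Sketch, assuming your SDK version supports RequestOptions on callTool
const result = await client.callTool(
  { name: "slow_tool", arguments: {} },
  undefined, // use the default result schema
  { timeout: 30_000 } // milliseconds
);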
3. Cache Tool Lists
Don’t re-fetch the tool list before every LLM call. Cache it and only refresh when you get a list_changed notification.
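A minimal sketch of that pattern, reusing the notification schema shown earlier:

// Cache per-server tool lists; invalidate on list_changed notifications
const toolCache = new Map<string, Tool[]>();
async function getTools(name: string, client: Client): Promise<Tool[]> {
  let tools = toolCache.get(name);
  if (!tools) {
    tools = (await client.listTools()).tools;
    toolCache.set(name, tools);
  }
  return tools;
}
function watchToolList(name: string, client: Client) {
  client.setNotificationHandler(ToolListChangedNotificationSchema, async () => {
    toolCache.delete(name); // the next getTools() call re-fetches
  });
}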
4. Show Tool Calls to the User
Transparency builds trust. Show the user what tools are being called, with what arguments, and what they returned. This is both a security practice and a UX practice.
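Even a bare-bones version helps: log every call before it runs and summarize what came back. A sketch (the wrapper name is ours):

// Surface every tool call to the user before and after execution
async function callToolVisibly(
  client: Client,
  serverName: string,
  name: string,
  args: Record<string, unknown>
) {
  console.log(`-> [${serverName}] ${name}(${JSON.stringify(args)})`);
  const result = await client.callTool({ name, arguments: args });
  console.log(`<- [${serverName}] ${name} ${result.isError ? "failed" : "ok"}`);
  return result;
}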
5. Validate Before Executing
Before calling a destructive tool, show the user what’s about to happen and get confirmation. The host is the trust boundary—use it.
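The annotations you listed earlier (readOnlyHint, destructiveHint) are the natural input for that gate, remembering that they are untrusted hints from the server. A sketch, with confirmWithUser standing in for your own UI:

// Require explicit confirmation for anything not marked read-only.
// confirmWithUser is a hypothetical UI hook; annotations are hints, not guarantees.
async function guardedCallTool(
  client: Client,
  tool: Tool,
  args: Record<string, unknown>
) {
  if (tool.annotations?.readOnlyHint !== true) {
    const ok = await confirmWithUser(
      `Run ${tool.name} with ${JSON.stringify(args)}?`
    );
    if (!ok) {
      return { content: [{ type: "text", text: "Cancelled by user." }], isError: true };
    }
  }
  return client.callTool({ name: tool.name, arguments: args });
}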
Summary
Building an MCP client is straightforward with the official SDKs. Connect a transport, call connect(), and you have access to tools, resources, and prompts. The real work is in building the host—the application that orchestrates the LLM, manages multiple server connections, handles user interaction, and enforces security.
The key insight: MCP clients are thin. The protocol does the heavy lifting. Your job is to build a great host around those clients.
Next: a tour of every language that speaks MCP.