Chapter 15: Testing and Debugging

When Things Go Wrong (And They Will)

MCP is simple in theory. In practice, you’ll encounter servers that silently crash, tools that return garbled output, transports that refuse to connect, and mysterious errors that only happen on Tuesdays. This chapter is your survival guide.

The MCP Inspector

The MCP Inspector is the official debugging tool for MCP servers. Think of it as the browser DevTools for MCP—it connects to your server, shows available tools/resources/prompts, and lets you interact with them in a nice web UI.

Running the Inspector

npx @modelcontextprotocol/inspector

This opens a web interface (usually at http://localhost:6274) where you can:

  1. Connect to any MCP server (stdio or HTTP)
  2. See the initialization handshake
  3. Browse tools, resources, and prompts
  4. Call tools with custom arguments
  5. Read resources
  6. Execute prompts
  7. View all JSON-RPC messages in real-time

Connecting to a stdio Server

In the Inspector UI, enter:

  • Command: npx (or uvx, node, python, etc.)
  • Arguments: -y my-mcp-server
  • Environment: Any environment variables

Click “Connect” and the Inspector spawns the server and performs the initialization handshake. You’ll see the full JSON-RPC exchange in the message log.
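
You can also pass the server command directly when launching the Inspector, which pre-fills the connection settings for you:

npx @modelcontextprotocol/inspector npx -y my-mcp-server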

Connecting to an HTTP Server

Enter the server URL (e.g., http://localhost:3000/mcp) and click “Connect.”

What to Look For

  • Initialization — Does the server respond correctly? Does it declare the right capabilities?
  • Tool schemas — Are parameter types correct? Are required fields marked?
  • Tool execution — Do tools return the expected format? Do errors use isError: true?
  • Response times — Are tool calls completing in reasonable time?
  • Message format — Is the JSON-RPC well-formed?

Manual Testing with the CLI

You can test stdio servers directly from the command line. This is useful for quick smoke tests and CI pipelines.

The Echo Test

Send an initialize request and check the response:

echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-11-25","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}' | node dist/index.js

You should get back a JSON response with the server’s capabilities.
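
A successful response looks roughly like this (exact fields vary by server and SDK):

{"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2025-11-25","capabilities":{"tools":{}},"serverInfo":{"name":"my-server","version":"1.0.0"}}}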

A Full Session

# Create a test script
cat << 'EOF' > test_session.jsonl
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-11-25","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}
{"jsonrpc":"2.0","method":"notifications/initialized"}
{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}
{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"greet","arguments":{"name":"World"}}}
EOF

# Send it to the server
cat test_session.jsonl | node dist/index.js

Each line is a separate JSON-RPC message. The server processes them in order and writes responses to stdout.

curl for HTTP Servers

# Initialize (the Streamable HTTP transport expects an Accept header
# offering both JSON and SSE)
curl -s -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-11-25","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}'

# List tools (with session header if returned)
curl -s -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Mcp-Session-Id: abc123" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'

Common Problems and Solutions

Problem: Server Doesn’t Start

Symptoms: Host reports connection failure. No output from server.

Diagnosis:

# Try running the server directly
npx -y my-mcp-server

# Check if the command exists
which npx
which uvx
which node
which python

# Check for missing dependencies
npm install  # or pip install -r requirements.txt

Common causes:

  • Command not found (wrong path, not installed)
  • Missing dependencies
  • Node.js version too old
  • Python version incompatible

Problem: Server Starts But Doesn’t Respond

Symptoms: Server process is running, but the client times out waiting for responses.

Diagnosis:

# Send a minimal message and watch stdout/stderr
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-11-25","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}' | node dist/index.js 2>/tmp/server-stderr.log

# Check stderr
cat /tmp/server-stderr.log

Common causes:

  • Server is writing logs to stdout instead of stderr (the #1 cause; see the sketch after this list)
  • Server is waiting for input that isn’t coming
  • Server crashed during initialization but didn’t exit
  • Buffering issues (stdout not flushed)
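
If stdout pollution is the culprit, route every log line to stderr so stdout carries only JSON-RPC. A minimal Python sketch using the standard logging module:

import logging
import sys

# stdout must carry only JSON-RPC messages; send all logs to stderr
logging.basicConfig(level=logging.INFO, stream=sys.stderr)

log = logging.getLogger("my-server")
log.info("startup complete")  # goes to stderr; stdout stays clean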

Problem: “Method Not Found” Errors

Symptoms: Client gets -32601 errors for valid methods.

Diagnosis: Check that the server declares the right capabilities. If the server doesn’t declare tools in its capabilities, the client shouldn’t send tools/list—but if it does, the server has no handler and returns method not found.

Fix: Ensure your server declares all capabilities it implements:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";

const server = new Server(
  { name: "my-server", version: "1.0.0" },
  {
    capabilities: {
      tools: {},       // ← Don't forget this!
      resources: {},   // ← Or this!
    },
  }
);

Problem: Tool Calls Return Empty Results

Symptoms: Tool executes but the result is empty or undefined.

Diagnosis: Check your tool handler’s return value. The most common mistake is forgetting to return the result in the right format.

// Assumes an McpServer instance `server` and `z` imported from zod
// WRONG - returns undefined
server.tool("greet", "Greet", { name: z.string() }, async ({ name }) => {
  const greeting = `Hello, ${name}!`;
  // Forgot to return!
});

// RIGHT
server.tool("greet", "Greet", { name: z.string() }, async ({ name }) => {
  return {
    content: [{ type: "text", text: `Hello, ${name}!` }],
  };
});

Problem: Connection Drops Randomly

Symptoms: Server works for a while, then the connection dies.

Common causes:

  • Unhandled exception in the server crashes the process
  • Memory leak causes OOM kill
  • Timeout on the client side
  • For HTTP: keep-alive timeout mismatch

Fix: Catch and log exceptions inside your tool handlers:

import sys
import traceback

@mcp.tool()
async def risky_tool(data: str) -> str:
    try:
        # process() stands in for whatever work the tool actually does
        return await process(data)
    except Exception as e:
        # Log the full traceback to stderr
        traceback.print_exc(file=sys.stderr)
        return f"Error: {str(e)}"

Problem: Server Works in Inspector But Not in Claude Desktop

Symptoms: Everything works in the MCP Inspector, but Claude Desktop can’t use it.

Diagnosis:

  1. Check the Claude Desktop logs: ~/Library/Logs/Claude/mcp*.log
  2. Verify the config file path and JSON syntax
  3. Make sure the command path is absolute or findable in PATH
  4. Check environment variables

Common causes:

  • Claude Desktop doesn’t have the same PATH as your terminal (prefer absolute paths; see the config sketch after this list)
  • Config file JSON has a syntax error (trailing comma is NOT valid JSON)
  • Server binary was built for a different architecture
  • Environment variables aren’t being passed
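
A known-good config entry looks roughly like this (a sketch: the server name, paths, and key are placeholders, and the command path should be absolute):

{
  "mcpServers": {
    "my-server": {
      "command": "/usr/local/bin/node",
      "args": ["/absolute/path/to/dist/index.js"],
      "env": {
        "WEATHER_API_KEY": "your-key-here"
      }
    }
  }
}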

Problem: Schema Validation Failures

Symptoms: LLM generates arguments that the server rejects.

Diagnosis: Check your input schema. Common issues:

  • Missing description fields (LLM doesn’t know what to put)
  • Too-loose types (using string when you need an enum)
  • Missing required array
  • Nested objects without proper schema

Fix: Make schemas as specific and descriptive as possible:

{
  "type": "object",
  "properties": {
    "action": {
      "type": "string",
      "enum": ["start", "stop", "restart"],
      "description": "The action to perform on the service"
    },
    "service_name": {
      "type": "string",
      "description": "Name of the service (e.g., 'nginx', 'postgres', 'redis')"
    }
  },
  "required": ["action", "service_name"]
}

Testing Strategies

Unit Testing Tools

Test your tool functions directly, without the MCP protocol layer:

import pytest

# Test the function directly, not the MCP wrapper
# ("my_server" is a placeholder for your module name)
from my_server import get_weather
@pytest.mark.asyncio
async def test_get_weather_success(mock_api):
    result = await get_weather("London", "celsius")
    assert "London" in result
    assert "°C" in result

@pytest.mark.asyncio
async def test_get_weather_invalid_city(mock_api):
    result = await get_weather("NotARealCity", "celsius")
    assert "not found" in result.lower() or "error" in result.lower()

@pytest.mark.asyncio
async def test_get_weather_missing_api_key(monkeypatch):
    monkeypatch.delenv("WEATHER_API_KEY", raising=False)
    result = await get_weather("London", "celsius")
    assert "API_KEY" in result

Integration Testing

Test the full MCP protocol flow:

import pytest

from mcp.client.session import ClientSession
from mcp.client.stdio import stdio_client, StdioServerParameters

@pytest.mark.asyncio
async def test_server_integration():
    params = StdioServerParameters(
        command="python",
        args=["server.py"],
        env={"WEATHER_API_KEY": "test-key"},
    )

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            init_result = await session.initialize()

            # Verify initialization (initialize() returns the InitializeResult)
            assert init_result.serverInfo.name == "weather-server"

            # Verify tools are listed
            tools = await session.list_tools()
            tool_names = [t.name for t in tools.tools]
            assert "get_weather" in tool_names

            # Verify tool execution
            result = await session.call_tool(
                "get_weather",
                {"city": "London", "units": "celsius"},
            )
            assert not result.isError
            assert len(result.content) > 0

Property-Based Testing

For tools that accept complex inputs, property-based testing can catch edge cases:

from hypothesis import given, strategies as st

@given(
    city=st.text(min_size=1, max_size=100),
    units=st.sampled_from(["celsius", "fahrenheit"]),
)
@pytest.mark.asyncio
async def test_weather_doesnt_crash(city, units):
    """Weather tool should never crash, regardless of input."""
    result = await get_weather(city, units)
    assert isinstance(result, str)
    # It might error, but it shouldn't crash

CI Pipeline Testing

# .github/workflows/test-mcp.yml
name: Test MCP Server
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -e ".[test]"
      - run: pytest tests/ -v
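
The echo test from earlier also makes a cheap protocol-level smoke check in CI (a sketch; swap in your own server command):

      - name: Protocol smoke test
        run: |
          echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-11-25","capabilities":{},"clientInfo":{"name":"ci","version":"1.0"}}}' \
            | timeout 10 python server.py | grep -q '"result"'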

Debugging Tips

1. Enable Verbose Logging

Most SDKs support verbose logging. Enable it during development:

# Python: route debug logs to stderr
import logging
import sys

logging.basicConfig(level=logging.DEBUG, stream=sys.stderr)

// TypeScript: set the DEBUG environment variable before starting the server
process.env.DEBUG = "mcp:*";

2. Log Every Message

Wrap your transport to log all JSON-RPC messages:

# In development, log all messages to stderr
import json
import sys

def log_message(direction: str, message: dict):
    print(f"{direction}: {json.dumps(message, indent=2)}", file=sys.stderr)

# Call it wherever your code reads or writes a message, e.g.:
#   log_message("-> server", request)
#   log_message("<- server", response)

3. Use the Simplest Possible Test

When debugging, strip everything down to the simplest case. Don’t debug a 10-tool server—create a 1-tool server that reproduces the issue.

4. Check Both Ends

MCP problems can be in the server or the client. Check both:

  • Server stderr logs
  • Client/host logs
  • The actual JSON-RPC messages exchanged

5. Version Mismatch

If a server works with one client but not another, check protocol version compatibility. Different clients may support different spec versions.

Summary

Testing and debugging MCP servers follows familiar patterns with some protocol-specific nuances:

  • The MCP Inspector is your best friend for interactive debugging
  • Manual CLI testing works for quick smoke tests
  • Most problems are stdout/stderr confusion, missing capabilities, or config errors
  • Test at every level: unit tests for logic, integration tests for protocol, property tests for robustness
  • Log everything during development, especially the JSON-RPC messages

The key debugging mindset: MCP is just JSON over a transport. When in doubt, look at the actual JSON being exchanged. The protocol is transparent by design—if you can see the messages, you can diagnose the problem.

Next: taking it to production.