Chapter 6: Prompts

The Template Primitive

Prompts are MCP’s third primitive, and they’re the one most people forget exists. Which is a shame, because they’re genuinely useful.

MCP prompts are reusable templates for LLM interactions. Think of them as pre-built conversation starters—parameterized sequences of messages that encode a particular workflow, task, or interaction pattern.

If tools are “things the model can do” and resources are “things the model can read,” prompts are “ways to talk to the model.” They’re user-controlled: the human (or application) explicitly selects a prompt to use, rather than the model discovering and invoking it.

Why Prompts Exist

Consider these scenarios:

Without prompts: A developer copies the same “Analyze this code for security vulnerabilities. Look for SQL injection, XSS, CSRF…” prompt into every conversation. They tweak it slightly each time, forget important parts occasionally, and have no way to share their refined prompt with teammates.

With prompts: The security-analysis MCP server exposes a security_audit prompt that accepts a code file as a parameter. The developer selects it, fills in the file path, and gets a consistent, thorough analysis every time. The prompt evolves on the server side, and everyone using it automatically gets improvements.

Prompts encode domain expertise into reusable templates. A database administrator builds prompts for query optimization. A DevOps engineer builds prompts for incident response. A data scientist builds prompts for exploratory data analysis. Each prompt captures best practices and can be shared through the MCP server.

Anatomy of a Prompt

A complete prompt definition looks like this:

{
  "name": "code_review",
  "title": "Code Review",
  "description": "Performs a thorough code review with focus on correctness, performance, and maintainability",
  "arguments": [
    {
      "name": "code",
      "description": "The code to review",
      "required": true
    },
    {
      "name": "language",
      "description": "Programming language (for language-specific checks)",
      "required": false
    },
    {
      "name": "focus",
      "description": "Specific area to focus on: security, performance, readability, or all",
      "required": false
    }
  ]
}

name (required)

Unique identifier within the server. Same naming rules as tools.

title (optional)

Human-friendly display name.

description (optional)

Explains what the prompt does and when to use it.

arguments (optional)

Parameters that customize the prompt. Each argument has:

  • name — Identifier
  • description — What this argument does
  • required — Whether the argument must be provided
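
On the server side, it's worth rejecting calls that omit a required argument before rendering anything. Here's a minimal sketch of such a check—the function name and error strings are illustrative, not part of the spec:

```python
def validate_arguments(prompt_def: dict, supplied: dict) -> list[str]:
    """Return human-readable problems; an empty list means the call is valid."""
    declared = {a["name"] for a in prompt_def.get("arguments", [])}
    problems = [
        f"missing required argument: {a['name']}"
        for a in prompt_def.get("arguments", [])
        if a.get("required") and a["name"] not in supplied
    ]
    # Flag arguments the prompt never declared, too.
    problems += [f"unknown argument: {n}" for n in set(supplied) - declared]
    return problems
```

A server would run this inside its prompts/get handler and return a JSON-RPC error when the list is non-empty.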

Discovering Prompts

Clients enumerate available prompts by calling prompts/list:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "prompts/list",
  "params": {}
}

Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "prompts": [
      {
        "name": "code_review",
        "title": "Code Review",
        "description": "Thorough code review with configurable focus areas",
        "arguments": [
          {
            "name": "code",
            "description": "The code to review",
            "required": true
          }
        ]
      },
      {
        "name": "explain_error",
        "title": "Explain Error",
        "description": "Explains an error message and suggests fixes",
        "arguments": [
          {
            "name": "error_message",
            "description": "The error message to explain",
            "required": true
          },
          {
            "name": "context",
            "description": "Additional context about what you were doing",
            "required": false
          }
        ]
      }
    ]
  }
}

Getting a Prompt

When the user selects a prompt, the client fetches it with arguments filled in:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "prompts/get",
  "params": {
    "name": "code_review",
    "arguments": {
      "code": "function add(a, b) { return a + b; }",
      "language": "javascript",
      "focus": "all"
    }
  }
}

The server returns a sequence of messages ready to be sent to the LLM:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "description": "Code review for JavaScript code",
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Please perform a thorough code review of the following JavaScript code. Analyze it for correctness, performance, readability, and security.\n\n```javascript\nfunction add(a, b) { return a + b; }\n```\n\nFor each issue found, provide:\n1. The severity (critical, warning, suggestion)\n2. The line/section affected\n3. A description of the issue\n4. A suggested fix\n\nAlso note any positive aspects of the code."
        }
      }
    ]
  }
}

The returned messages are ready to be inserted into the conversation. The host typically sends them directly to the LLM.
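
Server-side, a prompts/get handler is mostly string templating: substitute the arguments into a message template and return the message list. Here's a sketch of how the code_review response above might be produced—the helper name and default values are illustrative:

```python
def render_code_review(arguments: dict) -> dict:
    """Build a prompts/get result from the supplied arguments."""
    language = arguments.get("language", "source")
    focus = arguments.get("focus", "all")
    areas = ("correctness, performance, readability, and security"
             if focus == "all" else focus)
    text = (
        f"Please perform a thorough code review of the following "
        f"{language} code. Analyze it for {areas}.\n\n"
        f"{arguments['code']}\n\n"
        "For each issue found, provide the severity, the section affected, "
        "a description of the issue, and a suggested fix."
    )
    return {
        "description": f"Code review for {language} code",
        "messages": [
            {"role": "user", "content": {"type": "text", "text": text}}
        ],
    }
```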

Multi-Message Prompts

Prompts can return multiple messages, including assistant messages. This is useful for few-shot prompting or setting up a conversation pattern:

{
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "You are a SQL query optimizer. I'll give you a query and you'll suggest improvements."
      }
    },
    {
      "role": "assistant",
      "content": {
        "type": "text",
        "text": "I'll analyze your SQL queries for performance issues. I'll look at:\n1. Missing indexes\n2. Unnecessary full table scans\n3. N+1 query patterns\n4. Opportunities for query simplification\n\nPlease share your query."
      }
    },
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Here's my query:\n\nSELECT * FROM users u\nJOIN orders o ON u.id = o.user_id\nWHERE o.created_at > '2024-01-01'\nORDER BY o.total DESC;"
      }
    }
  ]
}

This establishes a conversation pattern: the system context (first user message), the expected behavior (assistant message), and the actual query to analyze (second user message).
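
Sequences like this are easy to assemble programmatically. A small sketch—the helper name is hypothetical:

```python
def build_conversation(turns: list[tuple[str, str]]) -> list[dict]:
    """Turn (role, text) pairs into an MCP prompt message list."""
    return [
        {"role": role, "content": {"type": "text", "text": text}}
        for role, text in turns
    ]

messages = build_conversation([
    ("user", "You are a SQL query optimizer. I'll give you a query "
             "and you'll suggest improvements."),
    ("assistant", "I'll analyze your SQL queries for performance issues. "
                  "Please share your query."),
    ("user", "SELECT * FROM users u JOIN orders o ON u.id = o.user_id;"),
])
```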

Prompts with Embedded Resources

Prompts can include embedded resource references, pulling in MCP resources as context:

{
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "resource",
        "resource": {
          "uri": "file:///project/schema.sql",
          "mimeType": "text/sql",
          "text": "CREATE TABLE users (\n  id SERIAL PRIMARY KEY,\n  name VARCHAR(100),\n  email VARCHAR(255) UNIQUE\n);\n\nCREATE TABLE orders (\n  id SERIAL PRIMARY KEY,\n  user_id INT REFERENCES users(id),\n  total DECIMAL(10,2),\n  created_at TIMESTAMP DEFAULT NOW()\n);"
        }
      }
    },
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Given the database schema above, optimize this query:\n\nSELECT * FROM users u JOIN orders o ON u.id = o.user_id WHERE o.created_at > '2024-01-01';"
      }
    }
  ]
}

This is powerful because the prompt can dynamically pull in relevant resources—the latest schema, the current configuration, the most recent error log—and include them as context for the LLM.
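
Producing that structure on the server is straightforward. Here's a sketch that reads a local file and wraps it as an embedded-resource message—the helper name is illustrative, and a real server might fetch the content through its own resource layer instead of reading the filesystem directly:

```python
from pathlib import Path

def embed_file(path: str, mime_type: str) -> dict:
    """Wrap a local file's contents as an embedded-resource prompt message."""
    return {
        "role": "user",
        "content": {
            "type": "resource",
            "resource": {
                "uri": f"file://{path}",
                "mimeType": mime_type,
                # Reading at prompt-get time means the LLM always sees
                # the current version of the file.
                "text": Path(path).read_text(),
            },
        },
    }
```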

Dynamic Prompts

Like tools and resources, prompts can change at runtime. When they do, the server sends:

{
  "jsonrpc": "2.0",
  "method": "notifications/prompts/list_changed"
}

This enables scenarios like:

  • Prompts that appear based on the current project type (Python prompts for Python projects)
  • Prompts that adapt to the user’s role or permissions
  • Prompts loaded from a remote repository that gets updated
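
On the client side, handling this notification amounts to refreshing a cached list. A minimal sketch, with illustrative class and callback names:

```python
class PromptCache:
    """Keeps a client's prompt list current across list_changed events."""

    def __init__(self, fetch_prompts):
        self._fetch = fetch_prompts      # callable that performs prompts/list
        self.prompts = self._fetch()

    def on_notification(self, method: str) -> None:
        # Re-fetch only when the server signals a prompt-list change.
        if method == "notifications/prompts/list_changed":
            self.prompts = self._fetch()
```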

How Hosts Present Prompts

The MCP spec doesn’t dictate how prompts should be presented in the UI, but common patterns include:

Slash Commands

Many hosts expose prompts as slash commands. A prompt named code_review might be invoked as /code_review in the chat interface. This is the most common pattern—it’s what Claude Desktop and VS Code do.

Command Palettes

Some hosts list prompts in a searchable command palette, like VS Code’s Ctrl+Shift+P / Cmd+Shift+P.

Context Menus

Right-clicking on code or a file might show relevant prompts in a context menu.

Quick Actions

Some hosts show frequently-used prompts as buttons or cards in the UI.

Prompts vs. System Prompts vs. Tool Descriptions

These three concepts serve different purposes, and it's worth understanding the distinctions:

System prompts are set by the host. They define the model’s overall behavior, personality, and constraints. The user usually doesn’t see them. MCP servers don’t control them.

Tool descriptions are read by the model to decide when and how to use tools. They’re part of the tool definition. They influence model behavior during tool selection.

MCP prompts are selected by the user and injected into the conversation. They’re templates for specific tasks. They influence the conversation by providing structured context and instructions.

Think of it this way: system prompts set the stage, tool descriptions are in the program notes, and MCP prompts are audience requests.

Practical Prompt Patterns

The Expert Template

Establish the model as a domain expert:

You are a [domain] expert with deep experience in [specifics].
Given [context], perform [task] following these guidelines:
1. [Guideline]
2. [Guideline]
3. [Guideline]
Format your response as [format].
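
Templates like this map naturally onto standard string substitution. A sketch using Python's string.Template, with field values invented for illustration:

```python
from string import Template

EXPERT = Template(
    "You are a $domain expert with deep experience in $specifics.\n"
    "Given $context, perform $task following these guidelines:\n"
    "$guidelines\n"
    "Format your response as $format."
)

prompt = EXPERT.substitute(
    domain="database",
    specifics="PostgreSQL query planning",
    context="the schema and query below",
    task="a query optimization review",
    guidelines="1. Check index usage\n2. Flag full table scans",
    format="a numbered list of findings",
)
```

substitute() raises KeyError if a placeholder is left unfilled, which doubles as a crude required-argument check.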

The Analyzer

Systematic analysis of provided data:

Analyze the following [type] for:
- [Dimension 1]
- [Dimension 2]
- [Dimension 3]

[Data to analyze]

For each finding, provide severity, description, and recommendation.

The Converter

Transform data from one format to another:

Convert the following [input format] to [output format].
Preserve all information. Handle edge cases like [cases].

[Input data]

The Scaffolder

Generate boilerplate for a new component:

Generate a [thing] with the following specifications:
- Name: [name]
- Properties: [properties]
- Behavior: [behavior]

Follow the project's existing patterns (see the attached examples).

Best Practices

1. Make Prompts Discoverable

Use clear names and descriptions. Users browse prompts to find what they need—make it easy.

2. Parameterize Generously

The more parameters a prompt accepts, the more flexible it is. But don’t go overboard—too many parameters and the prompt becomes harder to use than typing from scratch.

3. Include Examples in Descriptions

Show users what the prompt does with a concrete example in the description.

4. Use Embedded Resources

If your prompt needs context from a file, database, or API, embed it as a resource rather than asking the user to paste it in.

5. Test Your Prompts

A prompt is only as good as the results it produces. Test with different arguments, different models, and different edge cases.

6. Version Your Prompts

When you improve a prompt, consider the impact on users who are used to the old behavior. Major changes deserve new prompt names; minor improvements can update the existing prompt.

Summary

Prompts are MCP’s template primitive—reusable, parameterized conversation starters that encode domain expertise. They’re user-controlled, support multiple messages, can embed resources, and change at runtime.

While tools get the headlines and resources do the quiet work, prompts are the glue that makes complex workflows accessible. They turn “I need to remember my 15-step security audit prompt” into “I select the security audit prompt and fill in the file path.”

Now that we’ve covered all three primitives, let’s look at how they travel between client and server.