Module 5: Developer's Guide to Model Context Protocol (MCP)

As LLMs became more powerful, developers wanted to connect them to external tools and data. Early attempts relied on custom, one-off solutions, which were difficult to maintain, inconsistent, and often insecure.

This module introduces developers to the Model Context Protocol (MCP), a standardized protocol developed by Anthropic that enables Large Language Models (LLMs) to interact with external tools, APIs, and data sources in a structured and secure way.

Hands-On Lab:
Launch the companion lab notebook to practice building and testing MCP client-server integrations. In the lab, you'll transform direct tool calls into protocol-based interactions, experience the three-layer MCP architecture, and see how modular, secure AI integrations work in practice.

What You'll Learn

  • Architecture: The roles and responsibilities of the host, client, and server in an MCP system, ensuring modularity, security, and clear separation of concerns.
  • Core Message Types: Standardized JSON-RPC message types—requests, responses, notifications, and errors—that enable structured, reliable, and extensible communication between MCP components.
  • Features: The core capabilities MCP enables—resources, tools, prompts, and sampling—allowing clients and servers to declare, negotiate, and use powerful, composable functions.
  • Connection Lifecycle: How MCP sessions are initialized, maintained, and terminated, including capability negotiation and supported transport protocols for robust, stateful connections.
  • Transport Protocols: Supported communication protocols (stdio, HTTP), session management, and authorization.
  • Security Principles: Best practices and requirements for user consent, access control, and safe tool use, ensuring secure and trustworthy MCP integrations.

You will learn how these elements together make MCP a robust, extensible, and secure foundation for advanced AI integrations.

Architecture

MCP System Overview

Quick Summary: MCP uses a modular, client-server architecture with strict security boundaries and clear separation of concerns.

High-Level Architecture: Host, Client, and Server

MCP Component Architecture Diagram
Figure: The Host manages multiple clients, each connecting to a specific server. Each client-server pair is isolated, ensuring security and modularity.
Role | Main Responsibility | Example
Host | Manages user input, LLM interactions, security, user consent, and connections | IDE application: receives a user's code question, uses the LLM to interpret it, decides to call the code search tool, and displays the answer in the UI.
Client | Protocol handler; connects to one server and enforces boundaries | Code Search Connector: receives a search request from the host, sends it to the code search server, and returns the results.
Server | Operates independently—cannot access the full conversation or other servers; processes requests only from its assigned client and provides access to resources, tools, and prompts | Code Search Server: indexes project files and responds to search queries from the client.

MCP Design Principles

🛠️ Servers should be extremely easy to build: Host applications handle orchestration, so servers focus on simple, well-defined capabilities.
🧩 Servers should be highly composable: Each server works in isolation but can be seamlessly combined with others via the shared protocol.
🔒 Servers should not be able to read the whole conversation or "see into" other servers: Servers only receive necessary context; each connection is isolated and controlled by the host.
⬆️ Features can be added to servers and clients progressively: The protocol allows new capabilities to be negotiated and added as needed, supporting independent evolution.

How It Works (At a Glance): Practical Example

Scenario: A user interacts with an AI-powered productivity assistant (the host) that integrates both a calendar tool and a weather tool.

  1. User: Asks, "Do I have any meetings this afternoon, and what's the weather forecast for that time?"
  2. Host: Uses the LLM to interpret the request and determines it needs to:
    • Check the user's calendar for meetings this afternoon (calendar tool)
    • Get the weather forecast for the meeting time (weather tool)
  3. Host: Uses Client 1 to connect to Server 1 (Calendar Tool), which returns: "You have a meeting at 3:00 PM."
  4. Host: Uses Client 2 to connect to Server 2 (Weather Tool), which returns: "The forecast at 3:00 PM is sunny, 75°F."
  5. Security Boundary: Each client only communicates with its assigned server. The calendar server never sees weather data, and vice versa.
  6. Host: Aggregates the results and presents: "You have a meeting at 3:00 PM. The weather at that time is expected to be sunny, 75°F."

Key Point: This example shows how MCP enables secure, modular, and orchestrated multi-tool workflows.
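The orchestration above can be sketched in plain Python. Everything here is a toy stand-in with hypothetical names, not the real MCP SDK; the point is the shape of the boundaries, not the wire protocol.

```python
# Toy sketch of MCP-style orchestration. All classes are hypothetical
# stand-ins, not the official SDK. Each client talks to exactly one server.

class CalendarServer:
    def handle(self, request: dict) -> str:
        # Sees only its own request, never weather data or the full chat.
        return "You have a meeting at 3:00 PM."

class WeatherServer:
    def handle(self, request: dict) -> str:
        return "The forecast at 3:00 PM is sunny, 75°F."

class Client:
    """Protocol handler bound to a single server (the 1:1 boundary)."""
    def __init__(self, server):
        self.server = server

    def request(self, payload: dict) -> str:
        return self.server.handle(payload)

class Host:
    """Owns all clients, orchestrates calls, and aggregates results."""
    def __init__(self):
        self.calendar = Client(CalendarServer())
        self.weather = Client(WeatherServer())

    def answer(self) -> str:
        meetings = self.calendar.request({"method": "calendar/today"})
        forecast = self.weather.request({"method": "weather/forecast"})
        return f"{meetings} {forecast}"

print(Host().answer())
```

Note how neither server class ever receives the other's data; only the host sees both results, which mirrors the security boundary in step 5.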

Note: MCP provides official SDKs for multiple languages, including Python and TypeScript, to simplify development and integration. See official documentation for more details.

Core Message Types

MCP uses JSON-RPC 2.0 as its wire format — an industry-standard protocol for structured remote procedure calls. You don't need to memorize the spec, but understanding the four message types tells you how any MCP interaction is structured under the hood.

Type | Direction | Has ID? | What it does | Example
Request | Client → Server or Server → Client | Yes | Ask another component to do something; expects a Response or Error back | Call a tool, read a resource, request a completion
Response | Opposite of Request | Yes (matches Request) | Return the result of a Request; contains either result or error, never both | Tool result, resource content
Error | Opposite of Request | Yes (matches Request) | Return a failure for a Request; includes an error code and human-readable message | Method not found, invalid params, internal server error
Notification | Either direction | No (intentionally omitted) | One-way event — fire and forget, no response expected | Resource updated, request cancelled, progress update

What a Request/Response Looks Like

JSON-RPC — Request and Response
// Request: client asks server to read a resource (resources are addressed by URI)
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "resources/read",
  "params": { "uri": "file:///docs/policy.pdf" }
}

// Response: server returns the result
{
  "jsonrpc": "2.0",
  "id": 1,                          // matches the request id
  "result": { "contents": [{ "uri": "file:///docs/policy.pdf", "text": "..." }] }
}

// Notification: server tells client a resource changed (no id, no response expected)
{
  "jsonrpc": "2.0",
  "method": "notifications/resources/updated",
  "params": { "uri": "file:///docs/policy.pdf" }
}
Why JSON-RPC 2.0? It's a lightweight, well-understood standard that handles request/response correlation (via IDs), error codes, and one-way notifications in a consistent way across languages. Using an existing standard means MCP doesn't have to invent its own messaging semantics — and any JSON-RPC library works as a starting point.
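Since any JSON-RPC 2.0 library can serve as a starting point, the correlation mechanics are easy to sketch with nothing but the standard library. The method and parameter names below are illustrative, not a complete client.

```python
import json
from itertools import count

# Minimal JSON-RPC 2.0 message builders. A sketch of the correlation
# mechanics only, not the MCP SDK. IDs come from an increasing counter.
_ids = count(1)

def make_request(method: str, params: dict) -> dict:
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}

def make_notification(method: str, params: dict) -> dict:
    # Notifications intentionally carry no id: no response is expected.
    return {"jsonrpc": "2.0", "method": method, "params": params}

def correlate(request: dict, message: dict) -> bool:
    """A response or error belongs to a request iff the ids match."""
    return message.get("id") == request["id"]

req = make_request("resources/read", {"uri": "file:///docs/policy.pdf"})
resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"text": "..."}}
assert correlate(req, resp)
assert "id" not in make_notification("notifications/resources/updated", {})
print(json.dumps(req))
```

The id counter is the entire correlation story: a client can have many requests in flight and still route each response or error back to its caller.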

Features

MCP defines a set of features—such as resources, tools, prompts, sampling, and roots—that enable applications to interact with external data, perform actions, and extend AI capabilities in a standardized, secure manner.

Server Features

📦 Resources: Structured data or context a server provides.
How it works: Exposes data (files, DB tables, API results) as resources for the client/LLM.
Example: List of files in a project; customer records from a database.

🛠️ Tools: Functions or actions the AI assistant can invoke.
How it works: Client/LLM calls tools via MCP to perform operations.
Example: Search database, format code.

📝 Prompts: Templated messages or workflows to guide LLM/user.
How it works: Standardizes tasks and interactions.
Example: Summarize a document; onboarding workflow.

Tip: Use a resource for static or subscribable data; use a tool for dynamic, parameterized queries or actions.
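The resource/tool split in the tip above can be made concrete with a toy registry. This is plain Python with hypothetical names, not the MCP SDK: resources are data you read by identifier, tools are functions you call with parameters.

```python
# Toy illustration of the resource-vs-tool distinction (hypothetical
# registry, not the MCP SDK).

RESOURCES = {
    # Static or subscribable data, addressed by URI-like keys.
    "config://app/settings": {"theme": "dark", "lang": "en"},
}

TOOLS = {}

def tool(fn):
    """Register a callable as a tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def search_customers(name_prefix: str) -> list[str]:
    """A dynamic, parameterized query: a tool, not a resource."""
    customers = ["Ada Lovelace", "Alan Turing", "Grace Hopper"]
    return [c for c in customers if c.startswith(name_prefix)]

def read_resource(uri: str):
    return RESOURCES[uri]          # no parameters beyond the identifier

def call_tool(name: str, **params):
    return TOOLS[name](**params)   # arbitrary parameters, runs code

print(read_resource("config://app/settings"))
print(call_tool("search_customers", name_prefix="A"))  # → the two "A" names
```

Reading a resource takes nothing but an identifier; calling a tool takes parameters and executes code. That is exactly the line the tip draws.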

Client Features

🎲 Sampling: Lets the server request the client to generate a completion or response from the LLM.
How it works: Server asks the client to use its LLM for tasks like summarization or drafting.
Example: Server requests a summary of a document.
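On the wire, sampling is just a server-to-client JSON-RPC request; per the MCP spec the method is sampling/createMessage. A minimal sketch of such a message as a Python dict follows, with all field values illustrative.

```python
# Sketch of a sampling request the *server* sends to the *client*
# (method name per the MCP spec; all field values are illustrative).
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user",
             "content": {"type": "text",
                         "text": "Summarize the attached document."}}
        ],
        "maxTokens": 200,
    },
}

# The client is expected to get user approval, run its own LLM, and
# reply with a normal JSON-RPC response whose id matches (7 here).
print(sampling_request["method"])
```

Because the client sits between the server and the model, it can show the prompt to the user for approval and redact the result, which is the control the Security section describes.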

📁 Roots: Defines boundaries of accessible directories/files for the server.
How it works: Client exposes only specific directories to the server.
Example: Only the "/projects/my-app" folder is accessible to a code analysis server.
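A root boundary check is essentially path containment, which can be sketched with pathlib. The directory names are hypothetical and this is not the SDK's implementation, just the idea.

```python
from pathlib import Path

# Sketch of a roots-style boundary check (hypothetical paths, not the
# SDK): the server may only touch files under the declared roots.
ROOTS = [Path("/projects/my-app").resolve()]

def is_within_roots(path: str) -> bool:
    """True iff `path` resolves to somewhere under an allowed root."""
    candidate = Path(path).resolve()
    return any(candidate.is_relative_to(root) for root in ROOTS)

print(is_within_roots("/projects/my-app/src/main.py"))   # inside a root
print(is_within_roots("/etc/passwd"))                    # outside all roots
print(is_within_roots("/projects/my-app/../secrets"))    # traversal caught
```

Resolving before comparing is the important part: it defeats `..` traversal tricks that a plain string-prefix check would miss.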

MCP Connection Lifecycle

The connection lifecycle in the Model Context Protocol (MCP) defines how a session is established, maintained, and terminated between a client and a server, with the host orchestrating the process. This lifecycle ensures robust, secure, and feature-negotiated communication for all MCP-compliant integrations.

Lifecycle Phases

1. Initialization 🤝

The client opens the session by sending an initialize request carrying its protocol version and capabilities; the server responds with its own version and capabilities. The client then confirms with an initialized notification, completing capability negotiation.

2. Active Session 🔄

Once initialized, the session enters an active state where both sides know which features are available.

3. Termination ⏹️

Either side can end the session cleanly by closing the transport (the stdio stream or the HTTP session); no further messages are exchanged afterward.

Lifecycle Diagram

MCP Connection Lifecycle

The diagram above illustrates the key phases and message flows in a typical MCP session, including initialization, active session management, requests, notifications, and termination.
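The initialization handshake at the start of the lifecycle can be written out as the three messages the spec defines. Method names follow the MCP spec; the version and capability values here are illustrative.

```python
# The three messages of the MCP initialization handshake, as Python dicts
# (method names per the spec; version/capability values are illustrative).

initialize_request = {                     # 1. client -> server
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"sampling": {}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

initialize_response = {                    # 2. server -> client
    "jsonrpc": "2.0", "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}, "resources": {"subscribe": True}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

initialized_notification = {               # 3. client -> server (no id)
    "jsonrpc": "2.0", "method": "notifications/initialized",
}

# After step 3 the session is active; both sides know the negotiated
# capabilities (here: sampling on the client, tools + resources on the server).
assert initialize_response["id"] == initialize_request["id"]
```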

Key Points

  • Capabilities are negotiated once, during initialization, and are fixed for the life of the session.
  • Sessions are stateful: requests, responses, and notifications can flow in both directions while the session is active.
  • Either side may terminate the session; clients should handle termination gracefully and re-initialize if needed.

Transport Protocols

Transport Protocols in MCP

MCP supports two standard transports: stdio (for local servers launched as subprocesses) and HTTP (for remote servers, with optional streaming responses). The HTTP transport relies on a few key headers:

Header | Used In | Purpose
Content-Type | POST | Specifies message format (JSON)
Accept | POST/GET | Indicates accepted response types (JSON, streaming)
Mcp-Session-Id | Both | Identifies the session
This approach enables both single-response and real-time, streaming communication, making MCP suitable for a wide range of use cases.
See the MCP Transports documentation for more details.
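Putting the headers above together, an HTTP POST carrying one JSON-RPC message might be assembled like this. The endpoint and session id are hypothetical, and this is a sketch only: no network call is made.

```python
import json

# Sketch of an MCP-over-HTTP POST (hypothetical endpoint and session id;
# no network call is made, we just assemble the pieces).
endpoint = "https://example.com/mcp"

headers = {
    "Content-Type": "application/json",               # message format
    "Accept": "application/json, text/event-stream",  # plain or streamed reply
    "Mcp-Session-Id": "session-1234",                 # ties request to a session
}

body = json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/list", "params": {},
})

print(endpoint, headers["Mcp-Session-Id"], len(body) > 0)
```

The Accept header is what lets one endpoint serve both styles: a server can answer with a single JSON body or stream events, whichever fits the request.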

Authorization in MCP (2025-03-26 Spec)

For HTTP-based transports, the 2025-03-26 specification defines an authorization flow based on OAuth 2.1: clients obtain access tokens from an authorization server and present them on each request. For stdio transports, credentials are instead retrieved from the environment. Authorization is optional, but servers that require it must reject unauthenticated requests.

Session Management in MCP

When using the HTTP transport, a server may assign a session ID during initialization, returned in the Mcp-Session-Id header; the client must then include that header on every subsequent request so the server can associate messages with the same stateful session.

For more, see Session Management in the MCP spec.

Security and Trust & Safety

Reference: MCP Specification: Security and Trust & Safety
This section closely follows the official MCP specification for accuracy and authority.

The Model Context Protocol (MCP) enables powerful integrations between LLMs and external tools or data sources. With this power comes significant responsibility: implementers must ensure robust security, user trust, and safety at every stage of development and deployment.

Key Principles

  1. User Consent and Control
    Users must explicitly consent to all data access and operations. Users should always understand and control what data is shared and what actions are taken on their behalf. Applications should provide clear UIs for reviewing and authorizing activities.
  2. Data Privacy
    Hosts must obtain explicit user consent before exposing user data to servers. Resource data must not be transmitted elsewhere without user consent, and all user data should be protected with appropriate access controls.
  3. Tool Safety
    Tools represent arbitrary code execution and must be treated with caution. Descriptions of tool behavior should be considered untrusted unless obtained from a trusted server. Hosts must obtain explicit user consent before invoking any tool, and users should understand what each tool does before authorizing its use.
  4. LLM Sampling Controls
    Users must explicitly approve any LLM sampling requests. Users should control whether sampling occurs, the actual prompt sent, and what results the server can see. The protocol intentionally limits server visibility into prompts.

Implementation Guidelines

While MCP itself cannot enforce these principles at the protocol level, it is the responsibility of every implementer to uphold them in practice.

The MCP Ecosystem

MCP's value multiplies with the ecosystem around it. Rather than building every integration from scratch, you can pick from a growing library of pre-built servers, connect through managed gateways, or discover community tools via registries.

Pre-Built MCP Servers

Anthropic and the community maintain official reference servers for the most common integration targets. Drop one of these in and your agent immediately gains access to that capability:

Server | What it gives your agent
Filesystem | Read, write, and search files on the local file system — with configurable directory boundaries
GitHub | Search repos, read files, create/update issues and PRs, manage branches
PostgreSQL | Run read-only SQL queries against a Postgres database; schema inspection
Slack | Read channel history, post messages, look up users
Brave Search | Web search and local search via the Brave Search API
Google Maps | Geocoding, place search, directions
AWS Knowledge Base | Query Amazon Bedrock Knowledge Bases (RAG) directly via MCP

Full list and source code: github.com/modelcontextprotocol/servers

Registries — Discovering Community Servers

The community has built hundreds of additional MCP servers, and public registries make them discoverable and searchable.

Building MCP Servers with FastMCP

When you need a custom MCP server, FastMCP is the fastest way to build one. It's now part of the official MCP Python SDK (from mcp.server.fastmcp import FastMCP) and turns a full server into a handful of decorated functions — no manual JSON schema, no boilerplate lifecycle code.

Approach | Code required | Best for
Raw MCP SDK | ~40 lines for a single tool | Full protocol control, non-Python environments
FastMCP | ~5 lines for a single tool | Most custom servers — Python, fast iteration
FastMCP — a weather tool server in 8 lines
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-server")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"72°F, sunny in {city}"   # replace with real API call

if __name__ == "__main__":
    mcp.run()   # stdio by default; mcp.run(transport="sse") for HTTP

FastMCP infers tool schemas from Python type hints and docstrings. It supports tools, resources, and prompts, and runs over stdio (local) or HTTP (remote/cloud).

Deploying FastMCP on AWS: Run locally over stdio for development. For production, package your server as a container and deploy to AgentCore Runtime (managed hosting). Optionally register it with AgentCore Gateway to expose a single unified MCP endpoint with IAM auth and CloudTrail logging — no changes to your server code required.

Amazon Bedrock AgentCore Gateway

For enterprise deployments on AWS, Bedrock AgentCore Gateway is a managed MCP gateway that sits between your agent (the MCP client/host) and your MCP servers. Instead of each agent directly managing connections to every server, the gateway handles routing, authentication, and access control centrally.

For Amazon SDEs building agents that need to call internal tools or APIs, AgentCore Gateway is the AWS-native path to doing this securely at scale.

Standards & Governance

MCP started as an Anthropic project but is now governed as an open standard under the Linux Foundation through the Agent Architecture Initiative Forum (AAIF). AWS, Google, Microsoft, and other major cloud providers are contributors. This matters for adoption decisions: MCP is not a single-vendor protocol — it's on a governance path similar to OpenTelemetry and Kubernetes.

MCP Module Quiz

1. Which of the following are the three main roles in the MCP architecture?

  • A) Host, Client, Server
  • B) User, Model, Database
  • C) Agent, Tool, Resource
  • D) Application, API, Service
Answer: A) Host, Client, Server

2. True or False: In MCP, only the client can send requests to the server.

  • True
  • False
Answer: False. Both the client and server can send requests and notifications.

3. Which protocol is used by MCP for encoding messages?

  • A) XML-RPC
  • B) JSON-RPC 2.0
  • C) SOAP
  • D) REST
Answer: B) JSON-RPC 2.0

4. What is the difference between a resource and a tool in MCP?

  • A) Resources are actions, tools are data
  • B) Both are the same
  • C) Resources are data or information, tools are actions or functions
  • D) Tools are only for security
Answer: C) Resources are data or information, tools are actions or functions

5. True or False: MCP supports both stdio and HTTP as transport protocols.

  • True
  • False
Answer: True

6. Which of the following is the recommended method for authorization in MCP when using HTTP-based transports?

  • A) Basic Auth
  • B) API Key in URL
  • C) No authorization is needed — it is optional
  • D) OAuth 2.1
Answer: D) OAuth 2.1

7. True or False: Notifications in MCP are one-way messages that do not expect a response.

  • True
  • False
Answer: True

8. Which of the following best describes what Amazon Bedrock AgentCore Gateway provides?

  • A) A vector store for embedding MCP tool outputs
  • B) A way to fine-tune LLMs on MCP server logs
  • C) A managed gateway that sits between Bedrock agents and MCP servers, handling routing, IAM-based auth, and audit logging centrally
  • D) A protocol for converting REST APIs to MCP format automatically
Answer: C) Bedrock AgentCore Gateway is a managed MCP gateway — agents connect to a single endpoint rather than managing per-server connections. It provides IAM-based access control and CloudTrail logging, making it the AWS-native path for secure, enterprise-grade MCP deployments.

9. You want to give your agent the ability to search GitHub repositories and create pull requests. What is the fastest path?

  • A) Build a custom MCP server that wraps the GitHub REST API from scratch
  • B) Use the Bedrock Knowledge Bases MCP server
  • C) Query the GitHub GraphQL API directly from your agent's prompt
  • D) Use the pre-built GitHub MCP server from the official MCP servers repository
Answer: D) Anthropic and the community maintain official reference MCP servers for common integrations including GitHub, Filesystem, PostgreSQL, Slack, and more. Using a pre-built server means you get a tested, standards-compliant integration in minutes instead of building from scratch.

10. MCP was created by Anthropic. What is its current governance model?

  • A) Still fully controlled by Anthropic as a proprietary standard
  • B) Governed as an open standard under the Linux Foundation (AAIF), with AWS, Google, Microsoft, and others as contributors
  • C) Managed by the IETF as an RFC
  • D) Maintained by the OpenAI Foundation
Answer: B) MCP moved to the Linux Foundation's Agent Architecture Initiative Forum (AAIF), making it a vendor-neutral open standard — similar to the governance path taken by OpenTelemetry and Kubernetes. This means adoption decisions aren't a bet on a single vendor.