title: "MCP starters"
description: Level 2 - Build MCP servers and integrate them with AI agents and MCP-compatible clients
MCP (Model Context Protocol) is an open standard that lets AI agents connect to external tools through a unified interface. Instead of writing custom integrations for every API, you build an MCP server that exposes your APIs as tools, and any MCP-compatible agent can discover and use them automatically. This guide covers both sides of the MCP integration:
Build a production-ready MCP server using the FastMCP Python framework. The starter-mcp-server repository provides a working reference implementation you can clone and extend.
FastMCP is the recommended way to build MCP servers in Python. The official MCP Python SDK provides a low-level protocol implementation that requires manual handler registration, hand-crafted JSON Schema dictionaries, and transport boilerplate. FastMCP removes that complexity.
Minimal boilerplate
A working MCP server is 5 lines of code. Register any Python function as a tool with a single decorator.
Automatic schema generation
Type hints become JSON Schema, Python docstrings become tool descriptions, and you don’t maintain schemas manually.
OpenAPI to MCP
Auto-generate an MCP server from any OpenAPI specification, turning every REST endpoint into an MCP tool.
Multiple transports
STDIO, Streamable HTTP, and SSE with a single configuration change.
```
# Base URL of the upstream API that this MCP server proxies
API_BASE_URL=https://api.escuelajs.co

# Server configuration
MCP_TRANSPORT=streamable-http  # "stdio" | "streamable-http"
MCP_SERVER_PORT=8557
MCP_SERVER_HOST=0.0.0.0
MCP_STATELESS_HTTP=true  # "true" = no session state between requests, "false" = stateful

# Optional: VPN and web proxy settings (httpx reads these automatically)
# HTTP_PROXY=http://webproxy.infra.backbase.cloud:8888
# HTTPS_PROXY=http://webproxy.infra.backbase.cloud:8888
# NO_PROXY=localhost,127.0.0.1
```
Install dependencies and run
```shell
uv sync

# Run the server
uv run python -m src.main
```
The MCP server runs at http://localhost:8557/mcp.
VPN and web proxy: if your upstream API is behind a corporate network, uncomment and configure the proxy settings in .env. For setup instructions, see the Onboarding guide on Confluence.
FastMCP can auto-generate an MCP server from any OpenAPI specification. Every endpoint becomes a tool that forwards requests to the underlying API. The starter-mcp-server uses this approach with the Platzi Fake Store API.
When your MCP server proxies an authenticated API, forward headers from the MCP client request to upstream API calls. The starter-mcp-server includes this pattern for forwarding Authorization headers:
```python
from typing import Any

import httpx
from fastmcp import FastMCP
from fastmcp.server.dependencies import get_http_request


class HeaderForwardingClient(httpx.AsyncClient):
    """Forwards auth headers from the MCP client request to the upstream API."""

    async def send(self, request: httpx.Request, **kwargs: Any) -> httpx.Response:
        try:
            incoming = get_http_request()
            auth = incoming.headers.get("Authorization", "")
            if auth:
                request.headers["Authorization"] = auth
        except Exception:
            # No HTTP request context (e.g. STDIO transport): skip forwarding
            pass
        request.headers["Content-Type"] = "application/json"
        return await super().send(request, **kwargs)


api_client = HeaderForwardingClient(base_url="https://api.example.com")

mcp = FastMCP.from_openapi(
    openapi_spec=spec,
    client=api_client,
    name="My Authenticated API Server",
)
```
Header forwarding only works with HTTP transports such as streamable-http. It doesn’t apply to STDIO transport because there is no HTTP request context.
The Proxy Provider enables transport bridging, server aggregation, and gateway patterns:
Transport bridging
Multi-server proxy
```python
from fastmcp.server import create_proxy

# Bridge a remote HTTP server to local stdio
http_proxy = create_proxy("http://example.com/mcp/sse", name="HTTP-to-stdio")

if __name__ == "__main__":
    http_proxy.run()  # Defaults to stdio
```
Azure APIM exposes your existing APIs as MCP servers without custom MCP server code. You select API operations in the Azure Portal, and APIM handles MCP protocol translation, tool discovery, and invocation. This approach is useful when you already have APIs provisioned in APIM and want to make them available to AI agents without building a separate FastMCP server.
Before you can expose an API as an MCP server on APIM:
Ensure network connectivity. The platform must have network connectivity to the target APIs. If it doesn’t, raise a ticket with the Service Desk to request access.
Upload your API spec and register your API in Azure APIM using applications-live.
Known limitation: POST operations with requestBody.content sections in the OpenAPI spec cause /tools/list to hang indefinitely. Remove or simplify the requestBody.content section in your OpenAPI spec before you expose the API via MCP. See the full guide for details.
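One way to apply that workaround is to preprocess the spec before uploading it. The sketch below (the helper name and example spec are illustrative, not part of the starter) removes every `requestBody.content` section:

```python
import copy


def strip_request_bodies(spec: dict) -> dict:
    """Return a copy of an OpenAPI spec with requestBody.content removed
    from every operation (workaround for the APIM /tools/list hang)."""
    spec = copy.deepcopy(spec)
    for path_item in spec.get("paths", {}).values():
        for operation in path_item.values():
            if isinstance(operation, dict) and "requestBody" in operation:
                operation["requestBody"].pop("content", None)
    return spec


# Illustrative POST operation whose requestBody.content triggers the hang
spec = {
    "paths": {
        "/products": {
            "post": {
                "operationId": "create_product",
                "requestBody": {
                    "required": True,
                    "content": {"application/json": {"schema": {"type": "object"}}},
                },
                "responses": {"201": {"description": "Created"}},
            }
        }
    }
}

cleaned = strip_request_bodies(spec)
```

Note that removing `content` also removes the request-body schema from the generated tool, so document the expected payload in the operation description instead.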
For complete click-ops instructions that cover APIM policies, security configuration, rate limiting, audit logging, external MCP server proxying, and known issues, see the APIM MCP: click-ops guide on Confluence.
Connect to an MCP server from different types of clients, including programmatic Python clients, AI-native tools like Cursor and Claude Desktop, and AI agent frameworks like Agno.
GitHub repository
View source code, releases, and issues
You need a running MCP server to connect to. See MCP server to create one.
Before connecting, determine which transport your MCP server uses:
| Transport | Best for | How it works |
| --- | --- | --- |
| Streamable HTTP | Production remote servers (recommended) | Client connects to an HTTP endpoint |
| STDIO | Local development, CLI servers | Client spawns the server as a sub-process |
| SSE | Legacy remote servers | HTTP with server-sent events (deprecated) |
SSE transport is deprecated by MCP. Always use Streamable HTTP for deployments. Use STDIO only for local development or CLI-based MCP servers such as npx and uvx packages.