Agent Architecture and Usage
This document describes the agent architecture used in MCP Server LangGraph and provides guidance for working with LangGraph agents and Pydantic AI integration.

Related Documentation: This guide covers agent architecture. For Claude Code workflow guidance, see CLAUDE.md.

Table of Contents
- Overview
- LangGraph Agent
- Pydantic AI Integration
- Agent Configuration
- Tool Integration
- State Management
- Best Practices
Overview
MCP Server LangGraph implements a functional agent architecture using LangGraph for stateful conversation management and Pydantic AI for structured outputs and tool calling.

Note: This guide is placed at the repository root for maximum discoverability. For project-specific Claude Code workflow patterns, see CLAUDE.md.

Architecture Diagram
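At a high level (as a prose stand-in for the diagram): an incoming request reaches the LangGraph agent, which loads any prior conversation state from its checkpointer, calls the configured LLM, executes tools as needed (subject to OpenFGA authorization), and persists the updated state before responding. Each of these pieces is described in the sections below.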
LangGraph Agent
Core Components
Located in: `src/mcp_server_langgraph/core/agent.py`
1. AgentState: the typed state (message history plus any custom fields) shared by all graph nodes
2. Agent Graph: the StateGraph that wires the model node and the tool node together
3. Conditional Routing: edges that decide whether to execute tools or end the turn (see the sketch below)
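A minimal sketch of these three pieces together. The model identifier and the `search_docs` tool are illustrative placeholders rather than the project's actual definitions in `agent.py`:

```python
from typing import Annotated, TypedDict

from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

# 1. AgentState: the shared state passed between graph nodes.
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

@tool
def search_docs(query: str) -> str:
    """Search project documentation (placeholder implementation)."""
    return f"No results found for {query!r}"

llm = init_chat_model("gpt-4o", model_provider="openai")  # illustrative model choice
llm_with_tools = llm.bind_tools([search_docs])

def call_model(state: AgentState) -> dict:
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

# 2. Agent Graph: a model node plus the prebuilt tool node.
builder = StateGraph(AgentState)
builder.add_node("agent", call_model)
builder.add_node("tools", ToolNode([search_docs]))

# 3. Conditional Routing: run tools when the model requested them, otherwise finish.
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", tools_condition)
builder.add_edge("tools", "agent")

graph = builder.compile()
```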
Stateful Conversation
LangGraph maintains conversation state through checkpointing.
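For example, compiling the sketch above with an in-memory checkpointer (production options are covered under State Management below):

```python
from langgraph.checkpoint.memory import MemorySaver

graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "demo-thread"}}
graph.invoke({"messages": [("user", "My project is MCP Server LangGraph.")]}, config)

# The second call for the same thread_id resumes from the saved checkpoint,
# so the model sees the earlier exchange.
graph.invoke({"messages": [("user", "What is my project called?")]}, config)
```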
Tool Execution

LangGraph handles tool execution automatically.
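With the `ToolNode` and `tools_condition` wiring shown earlier, a request that needs a tool round-trips through the tool node without any manual dispatch. An illustration, continuing the same sketch:

```python
result = graph.invoke(
    {"messages": [("user", "Search the docs for 'checkpointing'.")]},
    {"configurable": {"thread_id": "tools-demo"}},
)

# The returned history interleaves the model's tool call, the ToolMessage produced
# by ToolNode, and the model's final answer.
for message in result["messages"]:
    print(type(message).__name__, getattr(message, "content", ""))
```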
Pydantic AI Integration

Overview
Located in: `src/mcp_server_langgraph/llm/pydantic_agent.py`
Pydantic AI provides:
- Structured outputs: Type-safe responses with Pydantic models
- Model abstraction: Unified interface across LLM providers
- Tool integration: Function calling with validation
- Streaming support: Token-by-token streaming
Agent Creation
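A minimal sketch of creating a Pydantic AI agent with a structured result type. The model identifier and output model are illustrative; note that pydantic-ai 0.0.x uses `result_type` and `result.data`, which newer releases rename to `output_type` and `result.output`:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class AnswerWithSources(BaseModel):
    answer: str
    sources: list[str]

agent = Agent(
    "openai:gpt-4o",                      # illustrative model identifier
    result_type=AnswerWithSources,
    system_prompt="Answer concisely and cite your sources.",
)

result = agent.run_sync("What is LangGraph?")
print(result.data.answer)
print(result.data.sources)
```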
Tool Definition
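A sketch of registering tools with the `@agent.tool` and `@agent.tool_plain` decorators (the tool bodies are placeholders):

```python
from pydantic_ai import Agent, RunContext

agent = Agent(
    "openai:gpt-4o",                      # illustrative model identifier
    deps_type=str,                        # per-run dependencies, here just a user name
    system_prompt="Use tools when they help answer the question.",
)

@agent.tool_plain
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (placeholder implementation)."""
    return f"The weather in {city} is sunny."

@agent.tool
def greet_user(ctx: RunContext[str], greeting: str) -> str:
    """Greet the user named in the run dependencies."""
    return f"{greeting}, {ctx.deps}!"

result = agent.run_sync("Greet the user, then check the weather in Paris.", deps="Ada")
print(result.data)
```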
Model Switching
Pydantic AI supports dynamic model switching.
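A hedged sketch, assuming the per-run `model=` override and reusing the `agent` from the creation sketch above (model identifiers are illustrative):

```python
# Default model configured on the agent.
default_result = agent.run_sync("Summarize the conversation so far.")

# Override the model for a single run without rebuilding the agent.
claude_result = agent.run_sync(
    "Summarize the conversation so far.",
    model="anthropic:claude-3-5-sonnet-latest",   # illustrative identifier
)
```

Structured Output Examples

An illustrative structured-output schema; the ticket model below is an example, not a type defined by the project:

```python
from enum import Enum

from pydantic import BaseModel
from pydantic_ai import Agent

class Priority(str, Enum):
    low = "low"
    medium = "medium"
    high = "high"

class SupportTicket(BaseModel):
    summary: str
    priority: Priority
    needs_human: bool

triage_agent = Agent(
    "openai:gpt-4o",
    result_type=SupportTicket,            # `output_type` in newer pydantic-ai releases
    system_prompt="Triage the user's message into a support ticket.",
)

ticket = triage_agent.run_sync("The server crashes every time I upload a file!").data
print(ticket.priority, ticket.needs_human)
```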
Agent Configuration
LLM Selection
From: `src/mcp_server_langgraph/llm/factory.py`
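The factory itself is project-specific; purely as an illustration, a provider-agnostic completion helper built on LiteLLM might look like the sketch below (the `LLM_MODEL` environment variable name is hypothetical):

```python
import os

import litellm

def complete(prompt: str, model: str | None = None) -> str:
    """Call whichever model is configured, falling back to a default identifier."""
    model = model or os.getenv("LLM_MODEL", "gpt-4o")   # hypothetical variable name
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```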
Environment Variables
Multi-Model Fallback
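One simple pattern is to try a list of models in order and fall through on provider errors. The sketch below calls LiteLLM directly; the model list is illustrative, not the project's configured fallback chain:

```python
import litellm

FALLBACK_MODELS = ["gpt-4o", "claude-3-5-sonnet-20240620", "gemini/gemini-1.5-pro"]  # illustrative

def complete_with_fallback(prompt: str) -> str:
    """Try each configured model in turn, returning the first successful completion."""
    last_error: Exception | None = None
    for model in FALLBACK_MODELS:
        try:
            response = litellm.completion(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as exc:  # fall through to the next provider
            last_error = exc
    raise RuntimeError("All configured models failed") from last_error
```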
Tool Integration
MCP Tools
MCP (Model Context Protocol) tools are exposed via the MCP server.
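A minimal sketch using the Python MCP SDK's `FastMCP` helper; the server name and tool are illustrative, not the project's actual tool definitions:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-server-langgraph")     # illustrative server name

@mcp.tool()
def summarize(text: str) -> str:
    """Summarize a document (placeholder for a call into the LangGraph agent)."""
    return text[:200]

if __name__ == "__main__":
    mcp.run()                             # serves the tool over stdio by default
```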
Custom Tools

Add custom tools to the agent.
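For example, define a LangChain tool and include it in the tool list used when the graph is built (the tool body is a placeholder):

```python
from langchain_core.tools import tool

@tool
def lookup_order(order_id: str) -> str:
    """Look up an order by id (placeholder implementation)."""
    return f"Order {order_id}: shipped"

# Extend the tool list from the LangGraph sketch above, then pass it to both
# `llm.bind_tools(...)` and `ToolNode(...)` so the model can request the new tool
# and the graph can execute it.
tools = [search_docs, lookup_order]
```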
Tool Authorization

Tools respect OpenFGA permissions.
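A hedged sketch of gating tool execution on an authorization check; `check_tool_permission` is a hypothetical helper standing in for a call to the project's OpenFGA client:

```python
from langchain_core.tools import tool

def check_tool_permission(user_id: str, tool_name: str) -> bool:
    """Hypothetical helper: ask OpenFGA whether user_id may execute tool_name."""
    # Replace with a real check, e.g. (user, "can_execute", tool) against the OpenFGA store.
    return True

@tool
def delete_document(doc_id: str, user_id: str) -> str:
    """Delete a document only if the calling user is authorized."""
    if not check_tool_permission(user_id, "delete_document"):
        return "Permission denied: you are not allowed to delete documents."
    return f"Deleted document {doc_id}"
```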
State Management

Conversation Memory
LangGraph manages conversation history.
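Continuing the checkpointed graph from earlier, every call that reuses the same `thread_id` sees the accumulated message history:

```python
config = {"configurable": {"thread_id": "customer-42"}}

graph.invoke({"messages": [("user", "My name is Ada.")]}, config)
followup = graph.invoke({"messages": [("user", "What is my name?")]}, config)

print(followup["messages"][-1].content)                   # answered from memory
print(len(graph.get_state(config).values["messages"]))    # full history so far
```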
Persistent State

For production, use PostgreSQL checkpointing.
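A sketch using the `langgraph-checkpoint-postgres` package; the connection string is illustrative, and `builder` is the graph builder from the earlier sketch:

```python
from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = "postgresql://user:password@localhost:5432/agent_state"   # illustrative

with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()   # create the checkpoint tables on first run
    graph = builder.compile(checkpointer=checkpointer)
    graph.invoke(
        {"messages": [("user", "hi")]},
        {"configurable": {"thread_id": "prod-thread-1"}},
    )
```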
State Schema

Define custom state fields.
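For example, extend the message-only state with additional fields (the field names here are illustrative):

```python
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

class CustomAgentState(TypedDict):
    messages: Annotated[list, add_messages]   # appended across turns
    user_id: str                              # illustrative custom fields
    escalated: bool

builder = StateGraph(CustomAgentState)
```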
Best Practices

1. Error Handling
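Wrap agent invocations so provider or tool failures degrade into a friendly reply instead of propagating raw exceptions. A minimal sketch:

```python
def safe_invoke(graph, user_input: str, thread_id: str) -> str:
    """Invoke the agent and return a fallback message on failure."""
    config = {"configurable": {"thread_id": thread_id}}
    try:
        result = graph.invoke({"messages": [("user", user_input)]}, config)
        return result["messages"][-1].content
    except Exception:
        # Log via your observability stack, then degrade gracefully.
        return "Sorry, something went wrong while processing your request."
```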
2. Rate Limiting
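Cap concurrent agent runs (and therefore LLM calls) before they hit provider limits. A simple sketch with an asyncio semaphore; a provider-level limiter such as langchain-core's `InMemoryRateLimiter` is another option:

```python
import asyncio

MAX_CONCURRENT_RUNS = 5                     # illustrative limit
_semaphore = asyncio.Semaphore(MAX_CONCURRENT_RUNS)

async def rate_limited_invoke(graph, payload: dict, config: dict):
    """Allow at most MAX_CONCURRENT_RUNS agent invocations at a time."""
    async with _semaphore:
        return await graph.ainvoke(payload, config)
```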
3. Streaming Responses
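Stream intermediate state so users see progress instead of waiting for the final answer. A sketch using `graph.stream` with `stream_mode="values"`:

```python
config = {"configurable": {"thread_id": "stream-demo"}}

# stream_mode="values" yields the full state after each step; print the newest message.
for state in graph.stream(
    {"messages": [("user", "Explain checkpointing.")]},
    config,
    stream_mode="values",
):
    state["messages"][-1].pretty_print()
```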
4. Tool Validation
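Validate tool arguments before execution; with LangChain tools, an `args_schema` Pydantic model does this automatically. The schema below is illustrative:

```python
from langchain_core.tools import tool
from pydantic import BaseModel, Field

class SearchArgs(BaseModel):
    query: str = Field(min_length=3, max_length=200, description="Search query")
    limit: int = Field(default=5, ge=1, le=20, description="Maximum results")

@tool(args_schema=SearchArgs)
def search(query: str, limit: int = 5) -> str:
    """Search the knowledge base; arguments are validated before the tool runs."""
    return f"Top {limit} results for {query!r}"
```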
5. Observability
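Trace each agent turn and attach useful attributes. The sketch below assumes an OpenTelemetry setup; the span and attribute names are illustrative:

```python
from opentelemetry import trace

tracer = trace.get_tracer("mcp_server_langgraph.agent")   # illustrative tracer name

def traced_invoke(graph, user_input: str, thread_id: str):
    """Run one agent turn inside an OpenTelemetry span."""
    with tracer.start_as_current_span("agent.turn") as span:
        span.set_attribute("agent.thread_id", thread_id)
        result = graph.invoke(
            {"messages": [("user", user_input)]},
            {"configurable": {"thread_id": thread_id}},
        )
        span.set_attribute("agent.message_count", len(result["messages"]))
        return result
```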
6. Testing Agents
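Test graph wiring and state handling without calling a real LLM by stubbing the model node. A minimal pytest sketch:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

def fake_agent_node(state: State) -> dict:
    """Stand-in for the model node: always returns a canned reply."""
    return {"messages": [AIMessage(content="stubbed reply")]}

def test_graph_accumulates_history():
    builder = StateGraph(State)
    builder.add_node("agent", fake_agent_node)
    builder.add_edge(START, "agent")
    builder.add_edge("agent", END)
    graph = builder.compile(checkpointer=MemorySaver())

    config = {"configurable": {"thread_id": "test-thread"}}
    graph.invoke({"messages": [("user", "hi")]}, config)
    result = graph.invoke({"messages": [("user", "again")]}, config)

    assert result["messages"][-1].content == "stubbed reply"
    assert len(result["messages"]) == 4   # two user turns + two stubbed replies
```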
Performance Considerations
1. Token Usage
Monitor token usage to optimize costs.
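For example, LiteLLM responses carry a `usage` block that can be logged or exported as metrics (the model name is illustrative):

```python
import litellm

response = litellm.completion(
    model="gpt-4o",   # illustrative model
    messages=[{"role": "user", "content": "Summarize LangGraph in one sentence."}],
)

usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```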
2. Caching

Use caching for repeated queries.
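If LLM calls go through LangChain chat models, one option is langchain-core's global LLM cache; an in-memory cache is shown here, with a shared (e.g. Redis-backed) cache being the production equivalent:

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Identical prompts are served from the cache instead of calling the provider again.
set_llm_cache(InMemoryCache())
```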
3. Parallel Tool Execution

Execute independent tools in parallel.
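When tool calls don't depend on each other, run them concurrently. A sketch with `asyncio.gather` (the tools are placeholders):

```python
import asyncio

async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.1)   # placeholder for a real API call
    return f"Sunny in {city}"

async def fetch_news(topic: str) -> str:
    await asyncio.sleep(0.1)   # placeholder for a real API call
    return f"Latest headlines about {topic}"

async def gather_context() -> list[str]:
    # Independent tools run concurrently instead of one after another.
    return list(await asyncio.gather(fetch_weather("Paris"), fetch_news("LangGraph")))

results = asyncio.run(gather_context())
```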
Resources

- LangGraph Documentation: https://langchain-ai.github.io/langgraph/
- Pydantic AI Documentation: https://ai.pydantic.dev/
- LiteLLM Documentation: https://docs.litellm.ai/
- Project Documentation: See the `docs/` directory for comprehensive documentation with 100% coverage
- Project Guides: See CLAUDE.md for Claude Code workflow guidance
Last Updated: 2025-10-14
LangGraph Version: 0.6.10 (upgraded from 0.2.28)
Pydantic AI Version: 0.0.15