Overview
The Interactive Playground provides a real-time testing environment for exploring AI agents built with MCP Server and LangGraph. It offers:
- Real-time streaming - See agent responses as they’re generated
- Session persistence - Maintain conversation context across interactions
- In-context observability - View traces and metrics alongside chat
- Tool visualization - Watch tool calls and responses in real-time
Architecture
Quick Start
Prerequisites
- Docker and Docker Compose
- Running MCP Server instance
- Valid authentication credentials
Starting the Playground
- Docker Compose
- Standalone
Your First Chat Session
Create a Session
Click “New Session” to create a conversation session. Sessions persist your conversation history.
Features
Real-Time Streaming
The playground uses WebSocket connections for real-time streaming of agent responses. This provides:
- Immediate feedback - See each token as it’s generated
- Tool call visualization - Watch tool invocations in real-time
- Cancellation support - Stop long-running responses mid-stream
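For example, a minimal streaming client sketch using the websockets package; the /ws/chat path, the token query parameter, the local port, and the event schema are assumptions, not confirmed by this page:

```python
# Sketch of a streaming chat client. Endpoint path, auth mechanism, and
# message schema are assumptions for illustration only.
import asyncio
import json

import websockets


async def stream_chat(session_id: str, prompt: str, token: str) -> None:
    # Assumed WebSocket URL; use wss:// behind TLS.
    url = f"ws://localhost:3000/ws/chat?session_id={session_id}&token={token}"
    async with websockets.connect(url) as ws:
        # Send the user message for this turn.
        await ws.send(json.dumps({"type": "message", "content": prompt}))
        # Consume streamed events until the turn completes.
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "token":
                print(event["content"], end="", flush=True)   # tokens as generated
            elif event.get("type") == "tool_call":
                print(f"\n[tool] {event.get('name')}")         # tool call visualization
            elif event.get("type") == "done":
                break


asyncio.run(stream_chat("sess-123", "Hello!", "<your-jwt>"))
```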
Session Management
Sessions provide persistent conversation context stored in Redis.
Session Lifecycle
Create → Active → Idle → Expired
- Sessions are created on first message
- Active sessions have a sliding TTL (default: 1 hour)
- Idle sessions expire after inactivity
- Expired sessions are cleaned up automatically
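A sketch of the sliding TTL described above, using redis-py; the key prefix and the exact refresh point are assumptions:

```python
# Refresh a session's TTL whenever it receives activity (sliding expiration).
import os

import redis

r = redis.Redis.from_url(os.getenv("REDIS_URL", "redis://localhost:6379/2"))
SESSION_TTL = int(os.getenv("SESSION_TTL_SECONDS", "3600"))


def touch_session(session_id: str) -> None:
    """Extend the session's expiration each time a message is handled."""
    r.expire(f"session:{session_id}", SESSION_TTL)
```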
Session Storage
Sessions are stored in Redis as one keyed record per session.
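As a minimal sketch, assuming a user-scoped key prefix, one Redis hash per session, and a JSON-encoded message list (the actual key names and fields may differ):

```python
# Illustrative session record layout; field names are assumptions.
import json

import redis

r = redis.Redis(decode_responses=True)

key = "session:user-42:sess-123"            # assumed user-scoped key
r.hset(key, mapping={
    "user_id": "user-42",
    "created_at": "2024-01-01T00:00:00Z",
    "messages": json.dumps([
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
    ]),
})
r.expire(key, 3600)                          # TTL aligned with SESSION_TTL_SECONDS
```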
Multi-User Support
Sessions are scoped to authenticated users:
- Each user sees only their own sessions
- Session IDs include user-specific prefixes
- Admin users can view all sessions (with proper authorization)
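A sketch of user scoping via key prefixes, continuing the assumed key layout from the previous example:

```python
# List only the sessions belonging to a given user (prefix format is an assumption).
import redis

r = redis.Redis(decode_responses=True)


def list_sessions(user_id: str) -> list[str]:
    """Return the session keys owned by this user."""
    return list(r.scan_iter(match=f"session:{user_id}:*"))
```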
In-Context Observability
View traces and metrics directly in the playground interface:
| Feature | Description |
|---|---|
| Traces | OpenTelemetry distributed traces for each conversation turn |
| Logs | Structured JSON logs filtered by session |
| Metrics | Token usage, latency, and error rates |
| LangSmith | Link to LangSmith trace view (if configured) |
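For per-session filtering to work, each conversation turn needs to carry the session ID on its telemetry. A sketch using the OpenTelemetry Python API; the span and attribute names are assumptions:

```python
# Tag a conversation turn's span with the session ID so traces can be
# filtered per session in the observability panel.
from opentelemetry import trace

tracer = trace.get_tracer("playground")


def run_turn(session_id: str, prompt: str) -> None:
    with tracer.start_as_current_span("chat.turn") as span:
        span.set_attribute("session.id", session_id)
        span.set_attribute("message.length", len(prompt))
        # ... invoke the agent and stream its response here ...
```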
Configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| MCP_SERVER_URL | URL of the MCP server | http://localhost:8000 |
| REDIS_URL | Redis connection URL | redis://localhost:6379/2 |
| SESSION_TTL_SECONDS | Session expiration time | 3600 (1 hour) |
| MAX_MESSAGES_PER_SESSION | Maximum messages per session | 100 |
| OTEL_EXPORTER_OTLP_ENDPOINT | OpenTelemetry endpoint | - |
| LANGSMITH_API_KEY | LangSmith API key | - |
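A minimal sketch of reading these variables with their documented defaults; the playground’s own configuration loader may differ:

```python
# Environment-driven configuration with the defaults from the table above.
import os

MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:8000")
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379/2")
SESSION_TTL_SECONDS = int(os.getenv("SESSION_TTL_SECONDS", "3600"))
MAX_MESSAGES_PER_SESSION = int(os.getenv("MAX_MESSAGES_PER_SESSION", "100"))
OTEL_EXPORTER_OTLP_ENDPOINT = os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT")  # optional
LANGSMITH_API_KEY = os.getenv("LANGSMITH_API_KEY")                      # optional
```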
Session Configuration
Security
Authentication
The playground requires JWT authentication for all API endpoints except health checks.
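For example, a request sketch using httpx; the /health and /api/sessions paths and the local port are assumptions:

```python
# Health checks are unauthenticated; everything else needs a JWT bearer token.
import httpx

BASE = "http://localhost:3000"  # assumed playground address

httpx.get(f"{BASE}/health").raise_for_status()

httpx.get(
    f"{BASE}/api/sessions",
    headers={"Authorization": "Bearer <your-jwt>"},
).raise_for_status()
```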
Authorization
Session access is controlled by OpenFGA.
API Endpoints
Sessions API
Create, list, and delete chat sessions
Chat API
Send messages and receive streaming responses
Observability API
Retrieve traces, logs, and metrics
WebSocket API
Real-time bidirectional communication
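Putting these together, a rough client sketch against the Sessions and Chat APIs; every path, port, and payload field here is an assumption, since only the endpoint groups are named above:

```python
# Assumed end-to-end flow: create a session, send a message, stream the
# response over HTTP, then delete the session.
import httpx

BASE = "http://localhost:3000"
HEADERS = {"Authorization": "Bearer <your-jwt>"}

# Sessions API: create a session.
session = httpx.post(f"{BASE}/api/sessions", headers=HEADERS, json={}).json()

# Chat API: send a message and consume the streamed response.
with httpx.stream(
    "POST",
    f"{BASE}/api/sessions/{session['id']}/messages",
    headers=HEADERS,
    json={"content": "What tools do you have?"},
) as resp:
    for chunk in resp.iter_text():
        print(chunk, end="", flush=True)

# Sessions API: clean up when done.
httpx.delete(f"{BASE}/api/sessions/{session['id']}", headers=HEADERS)
```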
Troubleshooting
WebSocket connection fails
Symptoms: Unable to establish WebSocket connection
Solutions:
- Check that the playground server is running
- Verify the WebSocket URL uses ws:// or wss://
- Ensure the authentication token is valid
- Check for proxy/firewall issues blocking WebSocket
Session not persisting
Symptoms: Messages disappear after refresh
Solutions:
- Verify Redis is running and accessible
- Check session TTL configuration
- Ensure you’re using the same session ID
- Check Redis connection in logs
Traces not appearing
Symptoms: Observability panel shows no data
Solutions:
- Verify OpenTelemetry is configured
- Check OTEL exporter endpoint is reachable
- Ensure trace sampling is enabled
- Check for trace filtering by session ID
Related Documentation
- Visual Workflow Builder - Design agents visually
- MCP Protocol - Understand MCP messaging
- Authentication - Configure authentication
- Observability - Set up monitoring
Ready to test! Start the playground and begin exploring your AI agents in real-time.