## Prerequisites
Before you begin, ensure you have:

- Python 3.10+ (3.11+ recommended)
- Docker & Docker Compose (for infrastructure)
- Git (for cloning the repository)
- An LLM API key (Google Gemini recommended for free tier)
## Installation
### 1. Clone the Repository
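The repository URL isn't shown here, so the commands below use a placeholder; substitute the actual project URL:

```shell
# Placeholder URL - replace with the actual repository
git clone https://github.com/your-org/your-agent-repo.git
cd your-agent-repo
```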
### 2. Install uv

uv is a fast Python package manager (10-100x faster than pip):
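A common way to install it is the official standalone installer (macOS/Linux shown; see uv's documentation for Windows):

```shell
# Official standalone installer for uv (macOS/Linux)
curl -LsSf https://astral.sh/uv/install.sh | sh
```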
### 3. Install Dependencies

No manual venv creation needed! `uv sync` automatically:

- Creates `.venv` if it doesn't exist
- Installs all dependencies from `pyproject.toml`
- Uses `uv.lock` for reproducible builds
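With uv installed, a single command from the project root handles all of the above:

```shell
# Creates .venv (if missing) and installs locked dependencies
uv sync
```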
### 4. Start Infrastructure

Start the supporting services (OpenFGA, Jaeger, Prometheus, and Grafana) with Docker Compose, then verify they are running.
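Assuming a Compose file at the repository root, the sequence looks like:

```shell
# Start OpenFGA, Jaeger, Prometheus, and Grafana in the background
docker compose up -d

# Verify services are running (each should report "running")
docker compose ps
```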
### 5. Set Up OpenFGA

Initialize the authorization system, then save the `OPENFGA_STORE_ID` and `OPENFGA_MODEL_ID` values from the output; you'll need them in the next step.
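The exact initialization command depends on the repository; a typical flow looks like the sketch below (the script path is a placeholder, not the actual file name):

```shell
# Hypothetical path - check the repository for the actual setup script
uv run python scripts/setup_openfga.py
# The output should include OPENFGA_STORE_ID and OPENFGA_MODEL_ID
```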
### 6. Configure Environment

Update `.env` with your values, including the `OPENFGA_STORE_ID` and `OPENFGA_MODEL_ID` saved in the previous step.
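A minimal `.env` sketch; apart from the two OpenFGA IDs named above, the variable names here are assumptions and may differ from the project's actual `.env.example`:

```shell
# Values from the OpenFGA setup step
OPENFGA_STORE_ID=<store id from setup>
OPENFGA_MODEL_ID=<model id from setup>

# Assumed variable name - use whatever key your provider config expects
GOOGLE_API_KEY=<your Gemini API key>
```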
### 7. Test the Installation

Run the example client. You should see the agent responding to queries! 🎉
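The client script's path is a placeholder here; check the repository's examples directory for the actual file:

```shell
# Hypothetical path - adjust to the repository's actual example client
uv run python examples/client.py
```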
## Verify Installation

Check that all services are accessible.
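Assuming each service runs on its default port, a quick round of curl checks might look like:

```shell
# OpenFGA HTTP API (default port 8080)
curl -s http://localhost:8080/healthz

# Jaeger UI (default port 16686)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:16686

# Prometheus (default port 9090)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9090/-/healthy

# Grafana (default port 3000)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/api/health
```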
## Your First Request

Let's send a message to the agent.
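The agent's endpoint path and payload schema aren't shown here, so the request below is a sketch assuming a JSON chat endpoint on port 8000:

```shell
# Hypothetical endpoint and payload - adjust to the actual API
curl -s -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is LangGraph?"}'
```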
## Understanding the Response

The agent returns a structured response containing:

- The agent's response text
- The role, always "assistant" for agent responses
- The LLM model used (supports fallback)
- Token usage statistics for cost tracking
- An OpenTelemetry trace ID for debugging
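To make those fields concrete, here is a hypothetical response body; the field names and values are illustrative assumptions, not the project's actual schema:

```python
import json

# Hypothetical response body - field names are illustrative only
raw = """{
  "content": "LangGraph is a framework for building stateful LLM agents.",
  "role": "assistant",
  "model": "gemini-2.0-flash",
  "usage": {"prompt_tokens": 12, "completion_tokens": 11, "total_tokens": 23},
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736"
}"""

response = json.loads(raw)
print(response["role"])                   # always "assistant" for agent replies
print(response["model"])                  # the model that actually answered (fallback-aware)
print(response["usage"]["total_tokens"])  # token usage for cost tracking
```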
## Next Steps

- **Configure Authentication** - Set up JWT and user management
- **Add Authorization** - Configure fine-grained permissions with OpenFGA
- **Switch LLM Providers** - Use Anthropic, OpenAI, or local models
- **Deploy to Production** - Kubernetes, Helm, and production setup
## Troubleshooting

### Port already in use

If port 8080 or 8000 is already in use, stop the conflicting process or change the port mapping in the Docker Compose configuration.
### OpenFGA connection refused

Ensure OpenFGA is running:
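A quick check, assuming the Compose service is named `openfga` and the HTTP API is on the default port 8080:

```shell
# Assumed service name "openfga" - check your Compose file
docker compose ps openfga

# OpenFGA's HTTP health endpoint
curl -s http://localhost:8080/healthz
```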
### API key invalid

Double-check the LLM API key in your `.env` file and confirm it is active for your provider.
### Module not found

Ensure dependencies are installed by running `uv sync` from the project root.
Need more help? Check the Development Setup guide or ask in GitHub Discussions.
## What's Next?

Now that you have the agent running:

- **Explore the code** - Check out `agent.py` to see how the LangGraph agent works
- **Try different models** - Follow the Multi-LLM Setup guide
- **Configure security** - Set up proper authentication
- **Deploy it** - Follow the deployment guides
Pro Tip: Star the GitHub repository to stay updated with new features!