
Documentation Index

Fetch the complete documentation index at: https://mcp-server-langgraph.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

Overview

Deploy the MCP Server with LangGraph using Helm for simplified Kubernetes deployment management. The Helm chart includes all dependencies (Keycloak, Redis, OpenFGA, PostgreSQL) with production-ready defaults.
The v2.8.0 chart bundles Keycloak SSO, Redis-backed sessions, OpenFGA authorization, and comprehensive observability.

Quick Start

# Add Helm repository (future - when published)
# helm repo add langgraph https://your-org.github.io/helm-charts
# helm repo update

# For now, use local chart
cd deployments/helm

# Install with default values
helm install mcp-server-langgraph ./mcp-server-langgraph \
  --namespace mcp-server-langgraph \
  --create-namespace \
  --set image.repository=gcr.io/your-project/mcp-server-langgraph \
  --set image.tag=v2.8.0 \
  --set secrets.anthropicApiKey="${ANTHROPIC_API_KEY}"

# Check deployment status
helm status mcp-server-langgraph -n mcp-server-langgraph

# Get service URL
kubectl get ingress -n mcp-server-langgraph

Prerequisites

1. Install Helm

# Install Helm 3
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Verify installation
helm version

2. Kubernetes Cluster

  • Kubernetes 1.25+
  • kubectl configured
  • 4+ vCPUs, 8GB+ RAM
  • 100GB+ storage

3. Container Image

Build and push your container image:
docker build -t gcr.io/your-project/mcp-server-langgraph:v2.8.0 .
docker push gcr.io/your-project/mcp-server-langgraph:v2.8.0

4. Dependencies (Optional)

Add required Helm repositories for dependencies:
# Bitnami (Keycloak, Redis, PostgreSQL)
helm repo add bitnami https://charts.bitnami.com/bitnami

# OpenFGA
helm repo add openfga https://openfga.github.io/helm-charts

helm repo update

Chart Structure

deployments/helm/mcp-server-langgraph/
├── Chart.yaml              # Chart metadata
├── values.yaml             # Default configuration
├── values-production.yaml  # Production overrides
├── templates/
│   ├── deployment.yaml     # Main application
│   ├── service.yaml        # Kubernetes Service
│   ├── ingress.yaml        # Ingress configuration
│   ├── configmap.yaml      # Configuration
│   ├── secret.yaml         # Secrets
│   ├── serviceaccount.yaml # Service account
│   ├── hpa.yaml            # Autoscaling
│   ├── pdb.yaml            # Pod disruption budget
│   └── _helpers.tpl        # Template helpers
└── README.md
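
The `_helpers.tpl` file defines named templates that the other manifests reuse for consistent naming and labeling. A minimal sketch of what such helpers conventionally look like; the helper names and bodies here are assumptions, so check the chart's actual `_helpers.tpl`:

```yaml
{{/* Hypothetical helpers; the chart's real _helpers.tpl may differ. */}}
{{- define "mcp-server-langgraph.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{- define "mcp-server-langgraph.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
```

Templates then reference these with `{{ include "mcp-server-langgraph.fullname" . }}`, so renaming a release propagates everywhere automatically.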

Installation

Basic Installation

helm install mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --namespace mcp-server-langgraph \
  --create-namespace \
  --set image.repository=gcr.io/your-project/mcp-server-langgraph \
  --set image.tag=v2.8.0 \
  --set secrets.anthropicApiKey="${ANTHROPIC_API_KEY}" \
  --set secrets.googleApiKey="${GOOGLE_API_KEY}" \
  --set ingress.enabled=true \
  --set ingress.hosts[0].host=api.yourdomain.com

With Custom Values File

# values-production.yaml
replicaCount: 5

image:
  repository: gcr.io/your-project/mcp-server-langgraph
  tag: "v2.8.0"
  pullPolicy: IfNotPresent

# Ingress
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  hosts:
    - host: api.yourdomain.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: mcp-server-langgraph-tls
      hosts:
        - api.yourdomain.com

# Autoscaling
autoscaling:
  enabled: true
  minReplicas: 5
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70

# Resources
resources:
  requests:
    cpu: 1000m
    memory: 1Gi
  limits:
    cpu: 4000m
    memory: 4Gi

# Application config
config:
  llmProvider: "anthropic"
  modelName: "claude-sonnet-4-5-20250929"
  authProvider: "keycloak"
  authMode: "session"
  enableTracing: true
  enableMetrics: true

# Keycloak SSO
keycloak:
  enabled: true
  replicaCount: 2
  postgresql:
    enabled: true
  ingress:
    enabled: true
    hostname: sso.yourdomain.com

# Redis Sessions
redis:
  enabled: true
  architecture: replication
  master:
    persistence:
      enabled: true
      size: 20Gi
  replica:
    replicaCount: 2
    persistence:
      enabled: true

# OpenFGA
openfga:
  enabled: true
  replicaCount: 2

# PostgreSQL (for Keycloak & OpenFGA)
postgresql:
  enabled: true
  primary:
    persistence:
      enabled: true
      size: 50Gi
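
Since the chart ships a `pdb.yaml` template, a PodDisruptionBudget can likely be enabled from values as well. A hedged sketch; the key names (`podDisruptionBudget`, `minAvailable`) are assumptions, so verify them against the chart's `values.yaml`:

```yaml
# Hypothetical values keys; verify against the chart's values.yaml
podDisruptionBudget:
  enabled: true
  minAvailable: 2   # keep at least 2 pods running during voluntary disruptions
```

With 5 replicas and `minAvailable: 2`, node drains and upgrades can evict at most 3 pods at a time.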

Cloud-Specific Installations

# values-gke.yaml
image:
  repository: gcr.io/your-project/mcp-server-langgraph

serviceAccount:
  create: true
  annotations:
    iam.gke.io/gcp-service-account: mcp-server-langgraph@your-project.iam.gserviceaccount.com

ingress:
  enabled: true
  className: "gce"
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "mcp-server-langgraph-ip"
    networking.gke.io/managed-certificates: "mcp-server-langgraph-cert"

keycloak:
  ingress:
    className: "gce"

postgresql:
  primary:
    persistence:
      storageClass: "standard-rwo"

redis:
  master:
    persistence:
      storageClass: "standard-rwo"

# Create static IP
gcloud compute addresses create mcp-server-langgraph-ip --global

# Install chart
helm install mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  -n mcp-server-langgraph --create-namespace \
  -f values-gke.yaml

Configuration

Application Configuration

# In values.yaml
config:
  # Service
  environment: "production"
  logLevel: "INFO"

  # LLM Provider
  llmProvider: "anthropic"  # google, anthropic, openai, azure
  modelName: "claude-sonnet-4-5-20250929"
  modelTemperature: "0.7"
  modelMaxTokens: "4096"
  enableFallback: true

  # Agent
  maxIterations: 10
  enableCheckpointing: true

  # Authentication
  authProvider: "keycloak"  # inmemory, keycloak
  authMode: "session"       # token, session

  # Keycloak
  keycloakServerUrl: "http://keycloak:8080"
  keycloakRealm: "mcp-server-langgraph"
  keycloakClientId: "langgraph-client"
  keycloakVerifySsl: true

  # Sessions
  sessionBackend: "redis"
  redisUrl: "redis://redis-session:6379/0"
  sessionTtlSeconds: 86400
  sessionSlidingWindow: true
  sessionMaxConcurrent: 5

  # OpenFGA
  openfgaApiUrl: "http://openfga:8080"

  # Observability
  enableTracing: true
  enableMetrics: true
  observabilityBackend: "opentelemetry"
  otlpEndpoint: "http://otel-collector:4317"

Secrets Configuration

Never commit secrets to Git! Use --set flags, external secret managers, or Kubernetes secrets.
# In values.yaml (DO NOT commit actual values!)
secrets:
  # LLM API Keys
  anthropicApiKey: ""
  googleApiKey: ""
  openaiApiKey: ""

  # Authentication
  jwtSecretKey: ""
  keycloakClientSecret: ""

  # Session Store
  redisPassword: ""

  # OpenFGA
  openfgaStoreId: ""
  openfgaModelId: ""

  # Observability (optional)
  langsmithApiKey: ""

  # Secrets Management (optional)
  infisicalClientId: ""
  infisicalClientSecret: ""
  infisicalProjectId: ""
Set secrets via command line:
helm install mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --set secrets.anthropicApiKey="${ANTHROPIC_API_KEY}" \
  --set secrets.googleApiKey="${GOOGLE_API_KEY}" \
  --set secrets.jwtSecretKey="$(openssl rand -base64 32)" \
  --set secrets.keycloakClientSecret="${KEYCLOAK_CLIENT_SECRET}" \
  --set secrets.redisPassword="$(openssl rand -base64 32)"
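
Before passing a generated key to `--set`, you can sanity-check it locally. A small sketch (assumes `openssl` is on `PATH`); 32 random bytes always encode to exactly 44 base64 characters:

```shell
# Generate a 256-bit signing key for secrets.jwtSecretKey.
JWT_SECRET="$(openssl rand -base64 32)"

# 32 random bytes always encode to 44 base64 characters; fail fast otherwise.
if [ "${#JWT_SECRET}" -ne 44 ]; then
  echo "unexpected key length: ${#JWT_SECRET}" >&2
  exit 1
fi
echo "jwt key ok"

# Pass it to the chart without ever writing it to disk:
# helm upgrade ... --set secrets.jwtSecretKey="${JWT_SECRET}"
```

Keeping the key in a shell variable (rather than a file) avoids accidentally committing it alongside your values files.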

Dependencies Configuration

keycloak:
  enabled: true
  replicaCount: 2

  auth:
    adminUser: admin
    adminPassword: ""  # Set via --set

  postgresql:
    enabled: true
    auth:
      password: ""  # Auto-generated

  ingress:
    enabled: true
    hostname: sso.yourdomain.com
    ingressClassName: nginx
    tls: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod

  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 2Gi

Upgrading

Minor Version Upgrade

# Update values if needed
vim values-production.yaml

# Upgrade release
helm upgrade mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --namespace mcp-server-langgraph \
  --values values-production.yaml \
  --set image.tag=v2.8.0

# Check rollout
kubectl rollout status deployment/mcp-server-langgraph -n mcp-server-langgraph

Major Version Upgrade

# Check the chart README for breaking changes
helm show readme ./deployments/helm/mcp-server-langgraph

# Backup current values
helm get values mcp-server-langgraph -n mcp-server-langgraph > current-values.yaml

# Perform upgrade with the backed-up values
helm upgrade mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --namespace mcp-server-langgraph \
  --values current-values.yaml \
  --set image.tag=v3.0.0

# Verify
helm list -n mcp-server-langgraph
kubectl get pods -n mcp-server-langgraph

Rollback

# List release history
helm history mcp-server-langgraph -n mcp-server-langgraph

# Roll back to the previous revision
helm rollback mcp-server-langgraph -n mcp-server-langgraph

# Or roll back to a specific revision
helm rollback mcp-server-langgraph 3 -n mcp-server-langgraph

Uninstallation

# Uninstall release
helm uninstall mcp-server-langgraph -n mcp-server-langgraph

# Delete namespace (if desired)
kubectl delete namespace mcp-server-langgraph

# Note: PVCs are not deleted by default
# List PVCs
kubectl get pvc -n mcp-server-langgraph

# Delete PVCs (WARNING: deletes data!)
kubectl delete pvc --all -n mcp-server-langgraph

Customization

Custom Init Containers

# values.yaml
initContainers:
  - name: wait-for-database
    image: busybox:1.36
    command: ['sh', '-c']
    args:
      - |
        until nc -z postgresql 5432; do
          echo "Waiting for PostgreSQL..."
          sleep 2
        done

Custom Sidecars

# values.yaml
sidecars:
  - name: cloud-sql-proxy
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0
    args:
      - "--structured-logs"
      - "--port=5432"
      - "your-project:us-central1:your-instance"
    securityContext:
      runAsNonRoot: true
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"

Custom Volumes

# values.yaml
extraVolumes:
  - name: model-cache
    persistentVolumeClaim:
      claimName: model-cache-pvc

extraVolumeMounts:
  - name: model-cache
    mountPath: /app/.cache/models

Environment Variables

# values.yaml
extraEnv:
  - name: CUSTOM_VAR
    value: "custom-value"
  - name: SECRET_VAR
    valueFrom:
      secretKeyRef:
        name: external-secret
        key: secret-key

Monitoring

Prometheus Integration

# values.yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
    labels:
      release: prometheus

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8000"
  prometheus.io/path: "/metrics/prometheus"
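
Once metrics are scraped, you can alert on basic health signals. A hedged sketch of a PrometheusRule (assumes the Prometheus Operator CRDs and kube-state-metrics are installed; the restart metric is standard kube-state-metrics, not one exposed by the server itself):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: mcp-server-langgraph-alerts
  labels:
    release: prometheus   # must match your Prometheus ruleSelector
spec:
  groups:
    - name: mcp-server-langgraph
      rules:
        - alert: McpServerPodRestarting
          expr: increase(kube_pod_container_status_restarts_total{namespace="mcp-server-langgraph"}[15m]) > 3
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Pods in mcp-server-langgraph are restarting frequently"
```

Adjust the thresholds and label selectors to match your Prometheus installation.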

Grafana Dashboards

# Import dashboards from a ConfigMap
kubectl create configmap langgraph-dashboards \
  --from-file=dashboards/ \
  --namespace=observability

# Label for Grafana sidecar discovery
kubectl label configmap langgraph-dashboards \
  grafana_dashboard=1 \
  --namespace=observability

Troubleshooting

# Check for syntax errors
helm lint ./deployments/helm/mcp-server-langgraph

# Dry-run to see rendered manifests
helm install mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --dry-run --debug \
  --namespace mcp-server-langgraph

# Update chart dependencies
cd deployments/helm/mcp-server-langgraph
helm dependency update

# Check Chart.yaml dependencies
cat Chart.yaml

# Install with dependency conditions
helm install mcp-server-langgraph . \
  --set keycloak.enabled=true \
  --set redis.enabled=true \
  --set openfga.enabled=true

# Check current values
helm get values mcp-server-langgraph -n mcp-server-langgraph

# Check all values (including defaults)
helm get values mcp-server-langgraph -n mcp-server-langgraph --all

# Verify rendered templates
helm template mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --values values-production.yaml \
  --debug

# Preview what an upgrade will change (requires the helm-diff plugin)
helm diff upgrade mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --values values-production.yaml

# Force upgrade
helm upgrade mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --force \
  --cleanup-on-fail

# If stuck, delete and reinstall
helm uninstall mcp-server-langgraph -n mcp-server-langgraph
helm install mcp-server-langgraph ./deployments/helm/mcp-server-langgraph \
  --values values-production.yaml

Best Practices

  • Commit values files to Git (without secrets)
  • Tag releases with Helm chart version
  • Use semantic versioning for Chart.yaml
  • Document breaking changes in Chart notes
  • Never commit secrets to values files
  • Use external secret operators (External Secrets Operator, Sealed Secrets)
  • Rotate secrets regularly
  • Use cloud secret managers (GCP Secret Manager, AWS Secrets Manager)
  • Test with helm lint before deployment
  • Use --dry-run to preview changes
  • Deploy to staging first
  • Validate with smoke tests after deployment
  • Enable autoscaling for production
  • Set resource limits appropriately
  • Enable PodDisruptionBudget for HA
  • Use persistent volumes for stateful components
  • Enable monitoring and alerting
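
The External Secrets Operator mentioned above can populate the chart's secret from a cloud secret manager instead of `--set` flags. A hedged sketch using GCP Secret Manager; the store name, secret names, and target keys are all assumptions to adapt to your setup:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mcp-server-langgraph-secrets
  namespace: mcp-server-langgraph
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: gcp-secret-manager        # assumed store name
  target:
    name: mcp-server-langgraph      # Secret consumed by the deployment
  data:
    - secretKey: anthropic-api-key  # assumed key name expected by the chart
      remoteRef:
        key: anthropic-api-key      # name in GCP Secret Manager
    - secretKey: jwt-secret-key
      remoteRef:
        key: jwt-secret-key
```

With this in place, rotating a secret in the cloud manager propagates to the cluster on the next refresh without a Helm upgrade.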

Next Steps

Kubernetes Deployment

Manual Kubernetes deployment guide

Production Checklist

Pre-deployment verification

Scaling Guide

Auto-scaling configuration

Monitoring

Observability setup

Simplified Deployment: the Helm chart brings up the entire stack, including its dependencies, with a single command.