Deploy on Kubernetes

Community-Maintained — Beta

The Agenta Helm chart is community-maintained and currently in beta. If you encounter issues or have suggestions, please open a GitHub issue or reach out in our Slack community.

This guide walks you through deploying Agenta on Kubernetes using the Helm chart. By the end, you will have a fully working Agenta instance running in your cluster.

The Helm chart packages all Agenta OSS components and uses Bitnami PostgreSQL as a subchart dependency. Database migrations run automatically as a post-install/post-upgrade hook (post-hooks are required because PostgreSQL is deployed as a Bitnami subchart and is not available until the main release installs).

What Gets Deployed

The chart creates the following workloads inside your Kubernetes namespace:

  • Web frontend (Next.js)
  • API backend (FastAPI + Gunicorn)
  • Services backend (FastAPI + Gunicorn)
  • Worker (tracing) for OTLP trace ingestion
  • Worker (evaluations) for async evaluation jobs
  • Cron for scheduled maintenance tasks
  • PostgreSQL (Bitnami subchart) with three databases
  • Redis Volatile for caching and pub/sub
  • Redis Durable for queues and persistent state
  • SuperTokens for authentication
  • Alembic migration job (post-install/post-upgrade hook)
  • Ingress resource for routing traffic to web, API, and services

Prerequisites

  • A running Kubernetes cluster (v1.24+)
  • kubectl configured to access your cluster
  • helm CLI (v3.10+) installed
  • An ingress controller installed in your cluster (Traefik or NGINX Ingress Controller)

Quick Start

1. Clone the Repository

git clone --depth 1 https://github.com/Agenta-AI/agenta && cd agenta

2. Generate Secrets

Generate the required secret values:

AG_AUTH_KEY=$(openssl rand -hex 32)
AG_CRYPT_KEY=$(openssl rand -hex 32)
PG_PASS=$(openssl rand -hex 16)
Warning: Save these values in a secure secret manager. You will need them for future upgrades. Avoid using export, which exposes the variables to all child processes.
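One way to follow that advice without export is to write the generated values to a file readable only by you. A minimal sketch (the filename agenta-secrets.env is an arbitrary choice):

```shell
# Generate the secrets and write them to a file only the current user can read.
umask 077  # files created below get mode 600
AG_AUTH_KEY=$(openssl rand -hex 32)
AG_CRYPT_KEY=$(openssl rand -hex 32)
PG_PASS=$(openssl rand -hex 16)
printf 'AG_AUTH_KEY=%s\nAG_CRYPT_KEY=%s\nPG_PASS=%s\n' \
  "$AG_AUTH_KEY" "$AG_CRYPT_KEY" "$PG_PASS" > agenta-secrets.env
```

In a later shell session you can reload the values with `. ./agenta-secrets.env` before running an upgrade.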

3. Install the Chart

helm install agenta hosting/helm/agenta-oss \
--namespace agenta --create-namespace \
--set secrets.agentaAuthKey=$AG_AUTH_KEY \
--set secrets.agentaCryptKey=$AG_CRYPT_KEY \
--set secrets.postgresPassword=$PG_PASS \
--set postgresql.auth.password=$PG_PASS
Note: secrets.postgresPassword and postgresql.auth.password must match. The first is used by the application pods; the second is used by the Bitnami PostgreSQL subchart to set the database password.
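One way to keep the two values in sync in a values file is a YAML anchor, which Helm resolves when it loads the file. A sketch with a placeholder password:

```yaml
secrets:
  postgresPassword: &pgPass "your-db-password"

postgresql:
  auth:
    password: *pgPass
```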

Release name and PostgreSQL secret

The chart wires the Bitnami PostgreSQL subchart to read the password from a shared secret. By default, this secret is named agenta-pgauth, which assumes you install with the release name agenta. If you use a different release name, you must override the secret name to match:

helm install myrelease hosting/helm/agenta-oss \
--set global.postgresql.auth.existingSecret=myrelease-agenta-oss-pgauth \
...

Otherwise the PostgreSQL pod will fail to find the password secret and will not start.

Security note

The --set approach is convenient for testing but exposes secrets in shell history and in helm get values output. For production, use a values.yaml file with restricted permissions or secrets.existingSecret to reference a pre-existing Kubernetes Secret. See Secrets for details.

4. Verify

# Watch pods start
kubectl -n agenta get pods -w

# Check the migration job completed
kubectl -n agenta get jobs

# Check ingress
kubectl -n agenta get ingress

Once all pods are running, access Agenta through your ingress IP or domain. If ingress is not configured with a host, use port-forwarding:

kubectl port-forward svc/agenta-agenta-oss-web 3000:3000 -n agenta

Then open http://localhost:3000 in your browser.

Using a Values File

For production deployments, create a values.yaml file instead of passing --set flags:

Warning: Never commit values.yaml to version control if it contains secrets. Add it to .gitignore and restrict file permissions (chmod 600 values.yaml). For a fully managed secret lifecycle, use secrets.existingSecret to reference a pre-existing Kubernetes Secret or integrate with an external secrets operator.

global:
  webUrl: "https://agenta.example.com"
  apiUrl: "https://agenta.example.com/api"
  servicesUrl: "https://agenta.example.com/services"

secrets:
  agentaAuthKey: "your-auth-key"
  agentaCryptKey: "your-crypt-key"
  postgresPassword: "your-db-password"

postgresql:
  auth:
    password: "your-db-password"

ingress:
  enabled: true
  className: "traefik"
  host: "agenta.example.com"

Install with:

helm install agenta hosting/helm/agenta-oss \
--namespace agenta --create-namespace \
-f values.yaml

Configuration Reference

Configuration is done through Helm values. The full default values are in hosting/helm/agenta-oss/values.yaml.

Global Settings

| Value | Purpose | Default |
| --- | --- | --- |
| global.webUrl | Public web URL | http://localhost |
| global.apiUrl | Public API URL | http://localhost/api |
| global.servicesUrl | Public services URL | http://localhost/services |
| global.imagePullSecrets | Image pull secrets | [] |

Secrets

| Value | Purpose | Default |
| --- | --- | --- |
| secrets.existingSecret | Name of an existing Secret to use instead of the chart-managed one | "" |
| secrets.agentaAuthKey | Authorization key (required) | "" |
| secrets.agentaCryptKey | Encryption key (required) | "" |
| secrets.postgresPassword | PostgreSQL password (required) | "" |
| secrets.supertokensApiKey | SuperTokens API key (recommended for production) | "" |
| secrets.oauth | OAuth env vars injected into pods (key = env var name) | {} |
| secrets.llmProviders | LLM provider API keys injected into pods | {} |

To use an existing Kubernetes Secret instead of having the chart create one, set secrets.existingSecret to the name of your Secret. It must contain keys: AGENTA_AUTH_KEY, AGENTA_CRYPT_KEY, POSTGRES_PASSWORD. This is the recommended approach for production as it keeps secrets out of Helm values entirely.
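For illustration, such a Secret could be created from a manifest like the following (the name agenta-secrets is an arbitrary choice; the key names are the ones the chart requires):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: agenta-secrets
  namespace: agenta
type: Opaque
stringData:
  AGENTA_AUTH_KEY: "your-auth-key"
  AGENTA_CRYPT_KEY: "your-crypt-key"
  POSTGRES_PASSWORD: "your-db-password"
```

Apply it with kubectl apply -f, then install with --set secrets.existingSecret=agenta-secrets.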

Caution: When secrets.supertokensApiKey is empty, the SuperTokens instance runs without authentication. Any pod that can reach the SuperTokens service can manage auth data. Set an API key for production deployments.

Component Images

| Value | Purpose | Default |
| --- | --- | --- |
| api.image.repository | API image | ghcr.io/agenta-ai/agenta-api |
| api.image.tag | API image tag | latest |
| web.image.repository | Web image | ghcr.io/agenta-ai/agenta-web |
| web.image.tag | Web image tag | latest |
| services.image.repository | Services image | ghcr.io/agenta-ai/agenta-services |
| services.image.tag | Services image tag | latest |

Workers, cron, and Alembic jobs reuse the API image.

Caution: The default image tag is latest, which can pull untested versions and makes it difficult to audit what is running. For production, always pin a specific version tag (e.g., v0.86.8). See Upgrading for an example.

Component Toggles and Replicas

Each component (api, web, services, workerEvaluations, workerTracing, cron, supertokens) supports:

| Value | Purpose | Default |
| --- | --- | --- |
| <component>.enabled | Enable/disable the component | true |
| <component>.replicas | Number of replicas | 1 |
| <component>.resources | Resource requests/limits | {} |
| <component>.nodeSelector | Node selector | {} |
| <component>.tolerations | Tolerations | [] |
| <component>.affinity | Affinity rules | {} |
| <component>.env | Extra environment variables | {} |
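As an illustrative sketch, the per-component values above compose like this in values.yaml (the replica count, resource sizes, and node label are placeholders, not recommendations):

```yaml
api:
  replicas: 2
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      memory: "1Gi"

workerTracing:
  enabled: true
  nodeSelector:
    workload: agenta  # hypothetical node label
```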

PostgreSQL (Bundled)

The chart includes Bitnami PostgreSQL as a subchart. It is enabled by default and creates three databases: agenta_oss_core, agenta_oss_tracing, and agenta_oss_supertokens.

| Value | Purpose | Default |
| --- | --- | --- |
| postgresql.enabled | Enable bundled PostgreSQL | true |
| postgresql.auth.username | Database user | agenta |
| postgresql.auth.password | Database password (must match secrets.postgresPassword) | "" |
| postgresql.primary.persistence.size | PVC size | 10Gi |

Redis

The chart deploys two Redis instances: volatile (caching/pub-sub) and durable (queues/persistent state).

| Value | Purpose | Default |
| --- | --- | --- |
| redisVolatile.enabled | Enable volatile Redis | true |
| redisVolatile.maxmemory | Max memory | 512mb |
| redisVolatile.password | Password (recommended for production) | "" |
| redisDurable.enabled | Enable durable Redis | true |
| redisDurable.maxmemory | Max memory | 512mb |
| redisDurable.password | Password (recommended for production) | "" |
| redisDurable.persistence.size | PVC size | 5Gi |
Caution: By default both Redis instances run without authentication. In shared or multi-tenant clusters, set passwords for both instances or use Kubernetes NetworkPolicies to restrict access to the Agenta namespace.
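A NetworkPolicy along these lines could restrict a Redis instance to in-namespace traffic. The pod label is an assumption — verify the actual labels with kubectl -n agenta get pods --show-labels, and note that NetworkPolicies are only enforced if your CNI supports them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-in-namespace-only
  namespace: agenta
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: redis-durable  # assumed label; verify first
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # any pod in the same namespace
```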

Alembic (Database Migrations)

Migrations run as a Kubernetes Job with post-install,post-upgrade hooks. Post-hooks are used because PostgreSQL is deployed as a Bitnami subchart and is not available until after the main release installs.

| Value | Purpose | Default |
| --- | --- | --- |
| alembic.enabled | Enable migration job | true |
| alembic.activeDeadlineSeconds | Job timeout | 600 |
| alembic.backoffLimit | Retry count | 3 |
| alembic.ttlSecondsAfterFinished | Cleanup delay | 300 |

Ingress Configuration

The chart creates an Ingress resource with three path rules:

  • /api routes to the API service
  • /services routes to the services backend
  • / routes to the web frontend

Ingress Values

| Value | Purpose | Default |
| --- | --- | --- |
| ingress.enabled | Enable Ingress | true |
| ingress.className | Ingress class | traefik |
| ingress.host | Hostname | "" |
| ingress.tls | TLS configuration | [] |
| ingress.annotations | Ingress annotations | {} |
| ingress.paths.api.path | API path pattern | /api |
| ingress.paths.api.pathType | API path type | Prefix |
| ingress.paths.services.path | Services path pattern | /services |
| ingress.paths.services.pathType | Services path type | Prefix |
| ingress.paths.web.path | Web path pattern | / |
| ingress.paths.web.pathType | Web path type | Prefix |
Note: The chart defaults to ingress.className: "traefik". If your cluster uses a different ingress controller, override this value to match. NGINX users must also override the ingress paths (see Path Prefix Stripping below).

You can check which ingress classes are available in your cluster with kubectl get ingressclass.

Path Prefix Stripping

The API and services backends expect requests without the /api or /services prefix. Your ingress controller must strip these prefixes.

Traefik: Use a StripPrefix Middleware via extraObjects:

ingress:
  className: "traefik"
  host: "agenta.example.com"
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: agenta-strip-prefixes@kubernetescrd

extraObjects:
  - apiVersion: traefik.io/v1alpha1
    kind: Middleware
    metadata:
      name: strip-prefixes
      namespace: "{{ .Release.Namespace }}"
    spec:
      stripPrefix:
        prefixes:
          - /api
          - /services

NGINX Ingress Controller: Override the paths to use regex capture groups and add rewrite annotations:

ingress:
  className: "nginx"
  host: "agenta.example.com"
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  paths:
    api:
      path: /api/(.*)
      pathType: ImplementationSpecific
    services:
      path: /services/(.*)
      pathType: ImplementationSpecific
    web:
      path: /(.*)
      pathType: ImplementationSpecific

Enabling TLS

To enable TLS, provide a TLS secret and update your global URLs to use https://:

global:
  webUrl: "https://agenta.example.com"
  apiUrl: "https://agenta.example.com/api"
  servicesUrl: "https://agenta.example.com/services"

ingress:
  host: "agenta.example.com"
  tls:
    - secretName: agenta-tls
      hosts:
        - agenta.example.com

If you use cert-manager, add the appropriate annotation:

ingress:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"

Using External Services

You can disable any bundled infrastructure component and point to an external instance instead.

External PostgreSQL

postgresql:
  enabled: false
  databases:
    core: "agenta_oss_core"
    tracing: "agenta_oss_tracing"
    supertokens: "agenta_oss_supertokens"
  external:
    host: "your-pg-host.example.com"
    port: 5432
    username: "agenta"
    sslmode: "require"

sslmode is appended to auto-constructed connection URIs only (as ?ssl= for asyncpg and ?sslmode= for the sync driver). It defaults to "prefer". Set it to "require" or "verify-full" for managed databases (e.g., AWS RDS, Cloud SQL). When using full URI overrides (uriCore, uriTracing, uriSupertokens), include the SSL parameter directly in the URI — sslmode is ignored in that case.

Create the three databases and grant permissions before installing:

CREATE ROLE agenta WITH LOGIN PASSWORD 'your-password';

CREATE DATABASE agenta_oss_core OWNER agenta;
CREATE DATABASE agenta_oss_tracing OWNER agenta;
CREATE DATABASE agenta_oss_supertokens OWNER agenta;

-- Grants needed for schema migrations (CREATE, ALTER) and application queries.
-- You can replace ALL with specific privileges if your security policy requires it
-- (e.g., SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER for a narrower scope).
\c agenta_oss_core
GRANT ALL ON SCHEMA public TO agenta;

\c agenta_oss_tracing
GRANT ALL ON SCHEMA public TO agenta;

\c agenta_oss_supertokens
GRANT ALL ON SCHEMA public TO agenta;

You can also provide full URI overrides:

postgresql:
  enabled: false
  external:
    uriCore: "postgresql+asyncpg://user:pass@host:5432/agenta_oss_core"
    uriTracing: "postgresql+asyncpg://user:pass@host:5432/agenta_oss_tracing"
    uriSupertokens: "postgresql://user:pass@host:5432/agenta_oss_supertokens"
Warning: URI overrides contain credentials inline. Prefer secrets.existingSecret or an external secrets operator to avoid storing passwords in values.yaml.

External Redis

redisVolatile:
  enabled: false
  external:
    uri: "redis://your-redis-host:6379/0"

redisDurable:
  enabled: false
  external:
    uri: "redis://your-redis-host:6379/1"
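If your external Redis requires authentication, the standard redis:// URI form carries the password before the host. A sketch (whether the chart passes the URI through verbatim is worth verifying against the chart templates):

```yaml
redisDurable:
  enabled: false
  external:
    uri: "redis://:your-redis-password@your-redis-host:6379/1"
```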

External SuperTokens

supertokens:
  enabled: false
  external:
    uri: "http://your-supertokens-host:3567"

Adding LLM Provider Keys and OAuth

Pass LLM API keys and OAuth credentials through the secrets section. These are stored in the Kubernetes Secret and injected as environment variables into the application pods.

secrets:
  llmProviders:
    OPENAI_API_KEY: "sk-..."
    ANTHROPIC_API_KEY: "sk-ant-..."
  oauth:
    GOOGLE_OAUTH_CLIENT_ID: "..."
    GOOGLE_OAUTH_CLIENT_SECRET: "..."

Upgrading

To upgrade to a newer version:

helm upgrade agenta hosting/helm/agenta-oss \
--namespace agenta \
-f values.yaml

The Alembic migration job runs automatically as a post-upgrade hook. Check its status:

kubectl -n agenta get jobs -l app.kubernetes.io/component=alembic
kubectl -n agenta logs job/agenta-agenta-oss-alembic

To pin to a specific version:

api:
  image:
    tag: "v0.86.8"
web:
  image:
    tag: "v0.86.8"
services:
  image:
    tag: "v0.86.8"

Uninstalling

helm uninstall agenta --namespace agenta
Warning: This does not delete PersistentVolumeClaims. To fully remove data, delete the PVCs manually:

kubectl -n agenta delete pvc -l app.kubernetes.io/instance=agenta

Troubleshooting

Pods not starting

Check pod status and events:

kubectl -n agenta get pods
kubectl -n agenta describe pod <pod-name>

Common causes:

  • Missing secrets: ensure secrets.agentaAuthKey, secrets.agentaCryptKey, and secrets.postgresPassword are set
  • Image pull errors: verify image names and that imagePullSecrets are configured if using a private registry

Migration job fails

Check migration logs:

kubectl -n agenta logs job/agenta-agenta-oss-alembic

Common causes:

  • PostgreSQL not ready: the job includes an init container that waits for PostgreSQL, but external databases may have network issues
  • Wrong credentials: verify secrets.postgresPassword matches the database password

Ingress not working

Verify the Ingress resource:

kubectl -n agenta get ingress
kubectl -n agenta describe ingress agenta-agenta-oss

Common causes:

  • Missing ingress controller: ensure Traefik or NGINX Ingress Controller is installed
  • Missing path prefix stripping: the API and services backends will return 404 if /api and /services prefixes are not stripped (see Path Prefix Stripping)
  • Wrong ingress.className: must match your ingress controller's class name

Services can't connect to each other

Check logs for connection errors:

kubectl -n agenta logs -l app.kubernetes.io/component=api --prefix

Common causes:

  • Service URLs misconfigured: check global.webUrl, global.apiUrl, and global.servicesUrl
  • Redis not ready: check Redis pod status

Getting Help

If you run into issues: