# Installation
This guide walks you through installing Alien Giraffe and getting it ready to manage secure access to your data sources.
Alien Giraffe ships as a binary service. Its only stateful control-plane dependency is PostgreSQL, which it uses to persist platform state. As an operator, you then choose the container environment on top of which Alien Giraffe orchestrates data enclaves: Docker for simpler deployments, or Kubernetes for platform-managed environments.
This guide presents two options for installing Alien Giraffe, but they are not the only ones available. For more information, discuss your use case with our Forward Deploy Engineers.
## System Requirements

Minimum Requirements:
- CPU: 2 cores
- RAM: 4 GB
- Storage: 20 GB
- Operating System: Linux, macOS, or Windows with WSL2
Recommended for Production:
- CPU: 4+ cores
- RAM: 8+ GB
- Storage: 50+ GB SSD
- Operating System: Linux (Ubuntu 20.04+, RHEL 8+, Debian 11+)
Dependencies:
- Docker 20.10+ (for container installation)
- Kubernetes 1.24+ (for Kubernetes deployment)
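A quick way to confirm that an installed dependency meets these minimums is a `sort -V` comparison (GNU coreutils); the `have` value below is a placeholder for whatever `docker --version` (or your Kubernetes tooling) reports on your host:

```sh
# Check a detected version against the documented minimum using version sort
min="20.10"
have="24.0.7"   # placeholder: substitute the version your tooling reports
if [ "$(printf '%s\n' "$min" "$have" | sort -V | head -n1)" = "$min" ]; then
  echo "ok: $have meets minimum $min"       # prints this branch for 24.0.7
else
  echo "too old: $have is below minimum $min"
fi
```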
## Installation Methods

Choose the installation method that best fits your environment:
### Option 1: Docker (Recommended)

Best for consistent deployments and easier dependency management.
The current repository already ships a Compose-based local stack. Use that stack instead of creating a custom docker-compose.yaml.
#### 1. Clone the repository

```sh
git clone https://github.com/aliengiraffe/env-manager.git
cd env-manager
```

#### 2. Generate local configuration

```sh
task env:setup
```

This prepares the local `.env` files and scaffolds `a10e.toml` if it is missing.
#### 3. Start the full local app

```sh
docker compose up -d --build
```

This starts the repo-owned Compose stack from `compose.yaml`, which includes:

- PostgreSQL
- the migration job
- `env-manager`
- the dashboard
Default local URLs:
- API: http://localhost:8080/v1
- Dashboard: http://localhost:3000
Useful lifecycle commands:
```sh
# View logs
docker compose logs -f

# Stop the stack
docker compose down

# Stop the stack and reset the local database
docker compose down -v
```

#### 4. Happy path from a clean local database
- Open http://localhost:3000.
- Create the first local user from the setup screen.
- In the dashboard, open **Admin -> Settings -> Catalog**.
- Create a datasource with:
  - Name: `libenzo`
  - Type: `s3`
  - Config fields:
    - `bucket=libenzo-examples`
    - `region=us-west-2`
    - `target=processed/2025/analytics_summary.parquet`
- Run the catalog scanner from the repo root:

  ```sh
  docker compose exec env-manager /env-manager catalog scan -config /config/a10e.toml
  ```

- Return to the dashboard and refresh the catalog page. The datasource should now show the discovered dataset and can be used in explorer requests.
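For reference, the `bucket` and `target` config fields above together identify a single S3 object; a quick shell sketch of the resulting URI (not part of the install flow itself):

```sh
# Combine the datasource config fields into the S3 object URI they point at
bucket="libenzo-examples"
target="processed/2025/analytics_summary.parquet"
echo "s3://${bucket}/${target}"   # prints s3://libenzo-examples/processed/2025/analytics_summary.parquet
```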
The Compose stack also supports optional test-only integrations through `compose.test.yaml`, but they are intentionally excluded from the default install flow.
### Option 2: Kubernetes (Production)

Best for production deployments requiring high availability and scalability.
For Kubernetes deployments, run PostgreSQL as a persistent workload and back it with a PersistentVolumeClaim so the control-plane database survives pod rescheduling.
Prerequisites:
- Kubernetes cluster (1.24+)
- Helm 3.8+
- kubectl configured to access your cluster
Add the Helm repository:
```sh
# Add the Alien Giraffe Helm repository
helm repo add alien-giraffe https://helm.aliengiraffe.ai

# Update your local Helm chart repository cache
helm repo update
```

Create a values file:
Create values.yaml with your configuration:
```yaml
replicaCount: 3

image:
  repository: aliengiraffe/a10e-manager
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: a10e.company.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: a10e-tls
      hosts:
        - a10e.company.com

resources:
  requests:
    memory: "2Gi"
    cpu: "1"
  limits:
    memory: "4Gi"
    cpu: "2"

env:
  - name: A10E_DB_DSN
    valueFrom:
      secretKeyRef:
        name: a10e-postgres-dsn
        key: dsn

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

Create PostgreSQL with persistent storage before installing a10e-manager:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: a10e-postgres
  namespace: alien-giraffe
type: Opaque
stringData:
  POSTGRES_DB: a10e
  POSTGRES_USER: a10e
  POSTGRES_PASSWORD: change-me
---
apiVersion: v1
kind: Service
metadata:
  name: a10e-postgres
  namespace: alien-giraffe
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: a10e-postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: a10e-postgres
  namespace: alien-giraffe
spec:
  serviceName: a10e-postgres
  replicas: 1
  selector:
    matchLabels:
      app: a10e-postgres
  template:
    metadata:
      labels:
        app: a10e-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: a10e-postgres
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
```

Apply these manifests to run PostgreSQL as a StatefulSet with persistent storage, exposed through a Service. Then store the DSN for a10e-manager in a secret such as:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: a10e-postgres-dsn
  namespace: alien-giraffe
type: Opaque
stringData:
  dsn: postgres://a10e:change-me@a10e-postgres:5432/a10e?sslmode=disable
```

Deploy using Helm:
```sh
# Create namespace
kubectl create namespace alien-giraffe

# Apply PostgreSQL storage and database resources
kubectl apply -f postgres.yaml

# Install Alien Giraffe
helm install alien-giraffe alien-giraffe/alien-giraffe \
  --namespace alien-giraffe \
  --values values.yaml

# Check deployment status
kubectl get pods -n alien-giraffe

# View logs
kubectl logs -n alien-giraffe -l app.kubernetes.io/name=alien-giraffe
```

See the full Kubernetes deployment guide for advanced configuration options.
## Initial Configuration

After installation, configure Alien Giraffe with your basic settings.
### Configuration File Structure

The primary platform configuration file is `a10e.toml`.
a10e-manager resolves configuration in this order:
1. Built-in defaults
2. `a10e.toml`
3. Environment variables
4. CLI flags
That means a10e.toml should define the baseline configuration for the deployment, while environment variables and flags remain available for overrides.
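This override order can be sketched with shell parameter expansion; the `A10E_PORT` variable name here is hypothetical, chosen only to illustrate how a later layer wins over an earlier one:

```sh
# File value, overridden by an environment variable when one is set
port_from_file=8080          # what a10e.toml provides
A10E_PORT=9090               # hypothetical environment override
port="${A10E_PORT:-$port_from_file}"
echo "$port"                 # prints 9090
```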
Example:
```toml
env = "production"
port = 8080

[db]
dsn = "postgres://a10e:change-me@postgres:5432/a10e?sslmode=disable"
max_open_conns = 25
max_idle_conns = 25
max_idle_time = "15m"

[cors]
allowed_origins = ["https://dashboard.example.com"]

[expiry]
default = "12h"
max = "12h"

[deployment]
driver = "docker"
image_pull_policy = "Always"
docker_network = "a10e-environments"

[enzo]
image = "env-enzo:latest"
dashboard_api_key = ""
ecr_token = ""
keystore_path = "/secrets/keystore.jks"
keystore_password = ""

[auth]
provider = "auth0"
jwt_secret = ""
auth0_domain = "https://example.us.auth0.com"
auth0_audience = "https://a10e.example.com"
auth0_bypass = false

[ai]
provider = "googleai"
model = "gemini-2.0-flash"
api_key = ""
ollama_server_addr = ""
generation_timeout = "10m"

[tunnels]
default_provider = "cloudflare"
edge_base_url = "https://a10e.example.com/v1"
cloudflare_image = "cloudflare/cloudflared:latest"
ngrok_image = "ngrok/ngrok:latest"
```

The major sections are:
- root fields such as `env` and `port`
- `[db]` for PostgreSQL connection settings
- `[cors]` and `[expiry]` for API request behavior
- `[deployment]` for the container runtime model
- `[enzo]` for the `env-enzo` runtime
- `[auth]` for authentication settings
- `[ai]` for AI provider configuration
- `[tunnels]` for public tunnel configuration
You can point the binary at an explicit file when needed:
```sh
a10e-manager serve -config /etc/a10e/a10e.toml
a10e-manager catalog scan -config /etc/a10e/a10e.toml
a10e-manager cleanup run -config /etc/a10e/a10e.toml
```

### Environment Variables
Use environment variables for sensitive values or deployment-specific overrides:

```sh
# Required
export A10E_JWT_SECRET="your-secret-key-here"
export A10E_DB_DSN="postgres://a10e:change-me@postgres:5432/a10e?sslmode=disable"

# Optional
export AUTH_PROVIDER="auth0"
export AI_PROVIDER="googleai"
export A10E_CONFIG_PATH="/etc/a10e/a10e.toml"
```

These values override the matching settings from `a10e.toml`.
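The DSN is a standard PostgreSQL connection URL. If you assemble it from separate settings, a sketch like this keeps the parts readable (the `DB_*` variable names are illustrative, not recognized by Alien Giraffe):

```sh
# Compose the control-plane DSN from its parts (illustrative variable names)
DB_USER="a10e"
DB_PASS="change-me"
DB_HOST="postgres"
DB_PORT="5432"
DB_NAME="a10e"
export A10E_DB_DSN="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=disable"
echo "$A10E_DB_DSN"
```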
### Generate JWT Secret

Create a secure JWT secret for session management:

```sh
# Generate a secure random secret
openssl rand -base64 32
```
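If `openssl` is not available on the host, `/dev/urandom` plus `base64` produces an equivalent 44-character secret (a sketch, not from the upstream tooling):

```sh
# 32 random bytes, base64-encoded (44 characters), without a trailing newline
head -c 32 /dev/urandom | base64 | tr -d '\n'
```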
```sh
# Set it as an environment variable
export A10E_JWT_SECRET="generated-secret-here"
```

### Configure The AI Provider
Section titled “Configure The AI Provider”Alien Giraffe can use either a local AI runtime or a cloud AI provider for features that depend on AI-assisted generation.
The current provider surface in a10e-manager is:
- `ollama`
- `openai`
- `anthropic`
- `googleai`
The runtime configuration is controlled with these environment variables:
- `AI_PROVIDER`
- `AI_MODEL`
- `AI_API_KEY`
- `AI_OLLAMA_SERVER_ADDR`
`AI_MODEL` should always be set explicitly. `AI_API_KEY` is required for cloud providers. `AI_OLLAMA_SERVER_ADDR` is only used when `AI_PROVIDER=ollama`.
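These rules can be checked before starting the service; a minimal pre-flight sketch (the check itself is distilled from the rules above and is not a supported command):

```sh
# Fail fast on incomplete AI configuration (illustrative pre-flight check)
AI_PROVIDER="openai"
AI_API_KEY=""
case "$AI_PROVIDER" in
  openai|anthropic|googleai)
    [ -n "$AI_API_KEY" ] || echo "AI_API_KEY is required for $AI_PROVIDER"
    ;;
  ollama)
    [ -n "$AI_OLLAMA_SERVER_ADDR" ] || echo "AI_OLLAMA_SERVER_ADDR is required for ollama"
    ;;
esac
```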
#### Local Provider With Ollama

For a local setup, run Ollama separately and point Alien Giraffe at it:

```sh
export AI_PROVIDER="ollama"
export AI_MODEL="gemma3:1b"
export AI_OLLAMA_SERVER_ADDR="http://host.docker.internal:11434"
```

If Alien Giraffe is running outside Docker, use the Ollama server address that is reachable from that host, such as http://localhost:11434.
#### Cloud Provider With OpenAI

```sh
export AI_PROVIDER="openai"
export AI_MODEL="gpt-4.1"
export AI_API_KEY="your-openai-api-key"
```

#### Cloud Provider With Anthropic

```sh
export AI_PROVIDER="anthropic"
export AI_MODEL="claude-3-5-sonnet-latest"
export AI_API_KEY="your-anthropic-api-key"
```

#### Cloud Provider With Gemini

Gemini is configured through the `googleai` provider value:

```sh
export AI_PROVIDER="googleai"
export AI_MODEL="gemini-2.5-flash"
export AI_API_KEY="your-google-ai-api-key"
```

Choose the provider that matches your deployment model. Ollama is the simplest local option. OpenAI, Anthropic, and Gemini are the cloud-backed options.
## Verification

Verify that Alien Giraffe is running correctly:
### Check Service Health

```sh
# For local/Docker installation
curl http://localhost:8080/v1/healthcheck
```
Expected response:

```json
{"status":"healthy","version":"1.0.0"}
```

## Upgrade
Section titled “Upgrade”Upgrading Docker Installation
Section titled “Upgrading Docker Installation”# Pull the latest imagedocker pull aliengiraffe/a10e-manager:latest
# Stop and remove old containerdocker stop alien-giraffedocker rm alien-giraffe
# Start new containerdocker run -d \ --name alien-giraffe \ -p 8080:8080 \ -v ~/.a10e:/config \ aliengiraffe/a10e-manager:latestUpgrading Kubernetes Deployment
```sh
# Update Helm repository
helm repo update

# Upgrade to latest version
helm upgrade alien-giraffe alien-giraffe/alien-giraffe \
  --namespace alien-giraffe \
  --values values.yaml
```

## Next Steps
Now that Alien Giraffe is installed, continue with:
- Configure Your First Data Source - Connect to PostgreSQL, S3, or other data sources
- Create Your First Policy - Define access rules for your data
For additional help:
- Components Overview - Understand core concepts
- API Reference - Explore the API