Work in progress
A Juju machine charm for deploying Concourse CI - a modern, scalable continuous integration and delivery system. This charm supports flexible deployment patterns including single-unit, multi-unit with automatic role assignment, and separate web/worker configurations.
Note: This is a machine charm designed for bare metal, VMs, and LXD deployments. For Kubernetes deployments, see https://charmhub.io/concourse-web and https://charmhub.io/concourse-worker.
# Create a Juju model
juju add-model concourse
# Deploy PostgreSQL
juju deploy postgresql --channel 16/stable --base ubuntu@24.04
# Deploy Concourse CI charm as application "concourse-ci"
juju deploy concourse-ci-machine concourse-ci --config mode=auto
# Relate to database (uses PostgreSQL 16 client interface with Juju secrets)
juju integrate concourse-ci:postgresql postgresql:database
# Expose the web interface (opens port in Juju)
juju expose concourse-ci
# Wait for deployment (takes ~5-10 minutes)
juju status --watch 1s
The charm automatically installs Concourse, assigns web/worker roles, distributes TSA keys via peer relations, and generates the initial admin credentials.
Naming Convention:
- Charm: `concourse-ci-machine` (what you deploy from Charmhub)
- Application: `concourse-ci` (used throughout this guide)
- Units: `concourse-ci/0`, `concourse-ci/1`, etc.

Once deployed, get credentials with `juju run concourse-ci/leader get-admin-password`.
Deploy multiple units with automatic role assignment and key distribution:
# Deploy PostgreSQL
juju deploy postgresql --channel 16/stable --base ubuntu@24.04
# Deploy Concourse charm (named "concourse-ci") with 1 web + 2 workers
juju deploy concourse-ci-machine concourse-ci -n 3 --config mode=auto
# Relate to database (using application name "concourse-ci")
juju relate concourse-ci:postgresql postgresql:database
# Check deployment
juju status
Result:
- `concourse-ci/0` (leader): web server
- `concourse-ci/1`, `concourse-ci/2`: workers

Note: The application is named `concourse-ci` for easier reference (shorter than `concourse-ci-machine`).
For maximum flexibility with separate applications:
# Deploy PostgreSQL
juju deploy postgresql --channel 16/stable --base ubuntu@24.04
# Deploy web server (1 unit)
juju deploy concourse-ci-machine web --config mode=web
# Deploy workers (2 units)
juju deploy concourse-ci-machine worker -n 2 --config mode=worker
# Relate web to database
juju relate web:postgresql postgresql:database
# Relate web and worker for automatic TSA key exchange
juju relate web:web-tsa worker:worker-tsa
# Check deployment
juju status
Result:
- `web/0`: web server only
- `worker/0`, `worker/1`: workers only, connected via TSA

Note: The `web-tsa` / `worker-tsa` relation automatically handles SSH key exchange between the web and worker applications, eliminating the need for manual key management.
The charm supports three deployment modes via the mode configuration:
`auto` (Multi-Unit, Fully Automated ✨): The leader unit runs the web server; non-leader units run workers. Keys are automatically distributed via peer relations.
Note: You need at least 2 units for this mode to have functional workers (Unit 0 = Web, Unit 1+ = Workers).
juju deploy concourse-ci-machine concourse-ci -n 3 --config mode=auto
juju relate concourse-ci:postgresql postgresql:database
Best for: production, scalable deployments
Key Distribution: ✅ Fully automatic; zero manual intervention required.
`web` + `worker` (Separate Apps, Automatic TSA Setup): Deploy web and workers as separate applications for independent scaling.
# Web application
juju deploy concourse-ci-machine web --config mode=web
# Worker application (scalable)
juju deploy concourse-ci-machine worker -n 2 --config mode=worker
# Relate web to PostgreSQL
juju relate web:postgresql postgresql:database
# Relate web and worker for automatic TSA key exchange
juju relate web:web-tsa worker:worker-tsa
Best for: Independent scaling of web and workers
Key Distribution: ✅ Automatic via the `web-tsa` / `worker-tsa` relation
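Because web and workers are separate applications, each one scales without touching the other; a minimal sketch using the application names from above:

# Scale workers independently of the web application
juju add-unit worker -n 2
# Scale back down by removing a specific unit
juju remove-unit worker/2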
| Option | Type | Default | Description |
|---|---|---|---|
| `mode` | string | `auto` | Deployment mode: `auto`, `web`, or `worker` |
| `version` | string | `latest` | Concourse version to install (auto-detects latest from GitHub) |
| `web-port` | int | `8080` | Web UI and API port |
| `worker-procs` | int | `1` | Number of worker processes per unit |
| `log-level` | string | `info` | Log level: `debug`, `info`, `warn`, `error` |
| `enable-metrics` | bool | `true` | Enable Prometheus metrics on port 9391 |
| `external-url` | string | (auto) | External URL for webhooks and OAuth |
| `initial-admin-username` | string | `admin` | Initial admin username |
| `container-placement-strategy` | string | `volume-locality` | Container placement: `volume-locality`, `random`, etc. |
| `max-concurrent-downloads` | int | `10` | Max concurrent resource downloads |
| `containerd-dns-proxy-enable` | bool | `false` | Enable containerd DNS proxy |
| `containerd-dns-server` | string | `1.1.1.1,8.8.8.8` | DNS servers for containerd containers |
Configuration changes are applied dynamically with automatic service restart.
# Set custom web port (automatically restarts service)
juju config concourse-ci web-port=9090
# Change to privileged port 80 (requires CAP_NET_BIND_SERVICE - already configured)
juju config concourse-ci web-port=80
# Enable debug logging
juju config concourse-ci log-level=debug
# Set external URL (auto-detects unit IP if not set)
juju config concourse-ci external-url=https://ci.example.com
Use the `upgrade` action to change the Concourse CI version (update the `version` configuration first so the change persists across charm refreshes):
# Set version configuration first (essential for persistence)
juju config concourse-ci version=7.14.3
# Trigger the upgrade action (automatically upgrades all workers)
juju run concourse-ci/leader upgrade
# Downgrade is also supported (update config, then run the action)
juju config concourse-ci version=7.12.1
juju run concourse-ci/leader upgrade
Auto-upgrade behavior: running the action on the leader upgrades the web server and all worker units automatically.
Note: The web-port configuration supports dynamic changes including privileged ports (< 1024) thanks to AmbientCapabilities=CAP_NET_BIND_SERVICE in the systemd service.
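To confirm the capability is actually set on the running service, you can query systemd directly; a quick check, assuming the `concourse-server` service name used elsewhere in this guide:

# Expect a line like AmbientCapabilities=cap_net_bind_service
juju ssh concourse-ci/0 -- systemctl show concourse-server -p AmbientCapabilities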
juju status concourse-ci
# Look for: Ports column showing "80/tcp" or "8080/tcp"
Open in browser: http://<web-unit-ip>:<port>
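If you want the URL without reading `juju status` by eye, the unit's address can be pulled from the JSON output; a sketch assuming the application is named `concourse-ci`, the web unit is `concourse-ci/0`, and the default port 8080:

# Extract the web unit's public address
WEB_IP=$(juju status concourse-ci --format=json | jq -r '.applications["concourse-ci"].units["concourse-ci/0"]["public-address"]')
echo "http://$WEB_IP:8080"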
juju run concourse-ci/leader get-admin-password
Example output:
message: Use these credentials to login to Concourse web UI
password: 01JfF@I!9W^0%re!3I!hyy3C
username: admin
Security: A random password is automatically generated on first deployment and stored securely in Juju peer relation data. All units in the deployment share the same credentials.
The Fly CLI is Concourse's command-line tool for managing pipelines:
# Download fly from your Concourse instance
curl -Lo fly "http://<web-unit-ip>:8080/api/v1/cli?arch=amd64&platform=linux"
chmod +x fly
sudo mv fly /usr/local/bin/
# Get credentials (jq path written to work whichever unit is the leader)
ADMIN_PASSWORD=$(juju run concourse-ci/leader get-admin-password --format=json | jq -r 'to_entries[0].value.results.password')
# Login
fly -t prod login -c http://<web-unit-ip>:8080 -u admin -p "$ADMIN_PASSWORD"
# Sync fly version
fly -t prod sync
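A quick way to confirm the login worked is to list the workers registered with the web node:

# Should show one row per worker unit
fly -t prod workers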
⚠️ Important: This charm uses the containerd runtime. All tasks must include an `image_resource`.
hello.yml:
jobs:
- name: hello-world
plan:
- task: say-hello
config:
platform: linux
image_resource:
type: registry-image
source:
repository: busybox
run:
path: sh
args:
- -c
- |
echo "=============================="
echo "Hello from Concourse CI!"
echo "Date: $(date)"
echo "=============================="
fly -t prod set-pipeline -p hello -c hello.yml
fly -t prod unpause-pipeline -p hello
fly -t prod trigger-job -j hello/hello-world -w
Note: Common lightweight images: busybox (~2MB), alpine (~5MB), ubuntu (~28MB)
# Add 2 more worker units to the concourse-ci application
juju add-unit concourse-ci -n 2
# Verify workers
juju ssh concourse-ci/0 # SSH to unit 0 of concourse-ci application
fly -t local workers
# Remove specific unit
juju remove-unit concourse-ci/3
The web server requires a PostgreSQL database:
juju relate concourse-ci:postgresql postgresql:database
Supported PostgreSQL Charms:
- `postgresql` (16/stable recommended), via the `postgresql` interface

Concourse exposes Prometheus metrics on port 9391:
juju relate concourse-ci:monitoring prometheus:target
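You can sanity-check the endpoint directly before wiring up Prometheus; a sketch assuming `enable-metrics=true`, the default port 9391, and the conventional `/metrics` path:

# Fetch a few metrics lines from the web unit
curl -s http://<web-unit-ip>:9391/metrics | head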
Units automatically coordinate via the concourse-peer relation (automatic, no action needed).
The charm uses Juju storage for persistent data:
# Deploy with specific storage
juju deploy concourse-ci-machine concourse-ci --storage concourse-data=20G
# Add storage to existing unit
juju add-storage concourse-ci/0 concourse-data=10G
Storage is mounted at /var/lib/concourse.
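To verify the attachment after deploying or adding storage, something like the following should do (`juju storage` lists attachments; `df` checks the mount):

# List storage attachments known to Juju
juju storage
# Confirm the filesystem is mounted where the charm expects it
juju ssh concourse-ci/0 -- df -h /var/lib/concourse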
Concourse workers can utilize NVIDIA GPUs for ML/AI workloads, GPU-accelerated builds, and compute-intensive tasks.
Note: The charm automatically installs nvidia-container-toolkit and configures the GPU runtime. No manual setup required!
Complete deployment from scratch:
# 1. Deploy PostgreSQL
juju deploy postgresql --channel 16/stable --base ubuntu@24.04
# 2. Deploy web server
juju deploy concourse-ci-machine web --config mode=web
# 3. Deploy GPU-enabled worker
juju deploy concourse-ci-machine worker \
--config mode=worker \
--config enable-gpu=true
# 4. Add GPU to LXD container (only manual step for localhost cloud)
lxc config device add <container-name> gpu0 gpu
# Example: lxc config device add juju-abc123-0 gpu0 gpu
# 5. Create relations
juju relate web:postgresql postgresql:database
juju relate web:web-tsa worker:worker-tsa
# 6. Check status
juju status worker
# Expected: "Worker ready (GPU: 1x NVIDIA)"
# Enable GPU on already deployed worker
juju config worker enable-gpu=true
If deploying on LXD (localhost cloud), add GPU to the container:
# Find your worker container name
lxc list | grep juju
# Add GPU device (requires container restart)
lxc config device add <container-name> gpu0 gpu
# Example:
lxc config device add juju-abc123-0 gpu0 gpu
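Since adding the device requires a container restart (as noted above), follow up with:

# Restart the container so the new GPU device is visible inside
lxc restart <container-name>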
Everything else is automated: the charm installs `nvidia-container-toolkit`, configures the containerd GPU runtime, and tags the worker for GPU scheduling.
| Option | Default | Description |
|---|---|---|
| `enable-gpu` | `false` | Enable GPU support for this worker |
| `gpu-device-ids` | `all` | GPU devices to expose: `all` or `0,1,2` |
When GPU is enabled, workers are automatically tagged:
- `gpu`: worker has a GPU
- `gpu-type=nvidia`: GPU vendor type
- `gpu-count=N`: number of GPUs available
- `gpu-devices=0,1`: specific device IDs (if configured)

Create a pipeline that targets GPU-enabled workers:
jobs:
- name: train-model
plan:
- task: gpu-training
tags: [gpu] # Target GPU-enabled workers
config:
platform: linux
image_resource:
type: registry-image
source:
repository: nvidia/cuda
tag: 13.1.0-runtime-ubuntu24.04
run:
path: sh
args:
- -c
- |
# Verify GPU access
nvidia-smi
# Run your GPU workload
python train.py --use-gpu
- name: gpu-benchmark
plan:
- task: benchmark
tags: [gpu, gpu-type=nvidia, gpu-count=1] # More specific targeting
config:
platform: linux
image_resource:
type: registry-image
source:
repository: nvidia/cuda
tag: 13.1.0-base-ubuntu24.04
run:
path: nvidia-smi
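Deploying and running this pipeline follows the same fly workflow as the hello example; a sketch assuming the file is saved as gpu.yml and the target is named local:

fly -t local set-pipeline -p gpu -c gpu.yml
fly -t local unpause-pipeline -p gpu
fly -t local trigger-job -j gpu/train-model -w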
# Check worker status
juju status worker
# Should show: "Worker ready (GPU: 1x NVIDIA)"
# Verify GPU tags in Concourse
fly -t local workers
# Worker should show tags: gpu, gpu-type=nvidia, gpu-count=1
- `nvidia/cuda:13.1.0-base-ubuntu24.04`: CUDA base (~174MB)
- `nvidia/cuda:13.1.0-runtime-ubuntu24.04`: CUDA runtime (~1.38GB)
- `nvidia/cuda:13.1.0-devel-ubuntu24.04`: CUDA development (~3.39GB)
- `tensorflow/tensorflow:latest-gpu`: TensorFlow with GPU
- `pytorch/pytorch:latest`: PyTorch with GPU

Worker shows "GPU enabled but no GPU detected":
- Run `nvidia-smi` on the worker host to confirm the GPU is visible
- If `nvidia-smi` fails on LXD, check that the GPU device was added and the container restarted

Container cannot access GPU:
- Verify the runtime is installed: `which nvidia-container-runtime`
- Inspect the containerd config: `cat /etc/containerd/config.toml`
- Restart containerd if needed: `sudo systemctl restart containerd`

GPU not showing in task:
- Run `nvidia-smi` in the task to debug
- Check worker tags with `fly -t local workers`

Cause: usually means the PostgreSQL relation is missing (for web units).
Fix:
juju relate concourse-ci:postgresql postgresql:database
Check logs:
juju debug-log --include concourse-ci/0 --replay --no-tail | tail -50
# Or SSH and check systemd
juju ssh concourse-ci/0
sudo journalctl -u concourse-server -f
Common issues:
- Missing or incomplete settings in `/var/lib/concourse/config.env`
- Port conflicts with the `web-port` config

Check worker status:
juju ssh concourse-ci/1 # Worker unit
sudo systemctl status concourse-worker
sudo journalctl -u concourse-worker -f
Common issues:
- Missing TSA keys in `/var/lib/concourse/keys/`
- containerd not running: `sudo systemctl status containerd`

juju ssh concourse-ci/0
sudo cat /var/lib/concourse/config.env
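When you are not sure which unit runs which role, a small loop over the units shows what systemd is actually running; a sketch assuming the three-unit mode=auto deployment:

# Print the active Concourse service on each unit
for u in 0 1 2; do
  echo "concourse-ci/$u:"
  juju ssh concourse-ci/$u -- systemctl is-active concourse-server concourse-worker || true
done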
┌──────────────────────────────────────────────────────────┐
│                        Web Server                        │
│   ┌────────────┐   ┌────────────┐   ┌────────────┐       │
│   │ Web UI/API │   │    TSA     │   │ Scheduler  │       │
│   └────────────┘   └────────────┘   └────────────┘       │
│         │                │                │              │
│         └────────────────┴────────────────┘              │
│                          │                               │
└──────────────────────────┼───────────────────────────────┘
                           │
                           │ (SSH over TSA)
                           │
          ┌────────────────┴────────────────┐
          │                                 │
    ┌─────▼──────┐                   ┌─────▼──────┐
    │  Worker 1  │                   │  Worker 2  │
    │┌──────────┐│                   │┌──────────┐│
    ││Container ││                   ││Container ││
    ││Runtime   ││                   ││Runtime   ││
    │└──────────┘│                   │└──────────┘│
    └────────────┘                   └────────────┘

For more on Concourse internals, see https://concourse-ci.org/internals.html
- `/opt/concourse/`: Concourse binaries
- `/var/lib/concourse/`: data and configuration
- `/var/lib/concourse/keys/`: TSA and worker keys
- `/var/lib/concourse/worker/`: worker runtime directory
- `concourse-server.service`: web server (runs as the `concourse` user)
- `concourse-worker.service`: worker (runs as root)

# Install charmcraft
sudo snap install charmcraft --classic
# Clone repository
git clone https://github.com/fourdollars/concourse-ci-machine.git
cd concourse-ci-machine
# Build charm
charmcraft pack
# Deploy locally
juju deploy ./concourse-ci-machine_amd64.charm
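During development it is usually faster to refresh the deployed application in place than to redeploy; a sketch using juju refresh with a local charm path:

# Rebuild and swap the running charm for the new build
charmcraft pack
juju refresh concourse-ci --path ./concourse-ci-machine_amd64.charm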
concourse-ci-machine/
├── src/
│   └── charm.py                # Main charm logic
├── lib/
│   ├── concourse_common.py     # Shared utilities
│   ├── concourse_installer.py  # Installation logic
│   ├── concourse_web.py        # Web server management
│   └── concourse_worker.py     # Worker management
├── metadata.yaml               # Charm metadata
├── config.yaml                 # Configuration options
├── charmcraft.yaml             # Build configuration
├── actions.yaml                # Charm actions
└── README.md                   # This file
fly -t prod login -c http://<ip>:8080 -u admin -p "$ADMIN_PASSWORD"
# Use web UI to change password in team settings
Database credentials are passed securely via Juju relations, not environment variables.
Contributions are welcome! Please:
git checkout -b feature/amazing-feature)git commit -m 'Add amazing feature')git push origin feature/amazing-feature)This charm is licensed under the Apache 2.0 License. See LICENSE for details.