Complete configuration options for Concourse CI Machine Charm
This reference lists all configuration options available for the Concourse CI Machine Charm. Configuration is managed through the `juju config` command; changes to most options trigger an automatic service restart with zero downtime. Run `juju config <application> --reset <option>` to restore an option to its default.
| Option | Type | Default | Description |
|---|---|---|---|
| `mode` | string | `auto` | Deployment mode for this unit:<br>• `auto`: leader runs the web server, non-leaders run workers (recommended for multi-unit deployments)<br>• `web`: only run the web server<br>• `worker`: only run a worker |
| `version` | string | `""` (latest) | Concourse CI version to deploy (e.g., `8.0.1`). Leave empty to automatically use the latest stable release from GitHub. |
| `shared-storage` | string | `none` | Shared storage mode for LXC environments:<br>• `none`: disable shared storage; each unit downloads binaries independently<br>• `lxc`: enable LXC-mounted shared storage (requires a `.lxc_shared_storage` marker file and manual LXC disk device setup) |
| Option | Type | Default | Description |
|---|---|---|---|
| `web-port` | int | `8080` | Port for the Concourse web UI and API server. Supports dynamic changes with automatic service restart. Privileged ports (< 1024) are supported via `CAP_NET_BIND_SERVICE`. |
| `external-url` | string | `""` (auto-detect) | External URL for the Concourse web UI (used for redirects and webhooks). If not set, automatically detects and uses `http://<unit-ip>:<web-port>`. Set this to your actual external URL if behind a proxy, load balancer, or NAT. |
| `tls-enabled` | boolean | `false` | Enable TLS/HTTPS for the Concourse web UI. Currently requires manual TLS certificate configuration (a future enhancement will support TLS relations). |
| `initial-admin-username` | string | `admin` | Initial admin username for Concourse authentication. The password is auto-generated and stored in Juju peer relation data (retrieve with `juju run <app>/leader get-admin-password`). |
| Option | Type | Default | Description |
|---|---|---|---|
| `worker-procs` | int | `1` | Number of worker processes to spawn on this unit. Controls parallelism for job execution. Increasing this value allows more concurrent tasks but requires more system resources. |
| `tag` | string | `""` | Comma-separated list of tags to assign to this worker. Tags are added to `CONCOURSE_TAG` and merged with any GPU-generated tags (e.g., `cuda`, `rocm`). Example: `"gpu,high-mem,ssd"` |
| `container-placement-strategy` | string | `volume-locality` | Container placement strategy for worker resource caching:<br>• `volume-locality`: place containers near cached volumes (recommended)<br>• `random`: randomly distribute containers<br>• `fewest-build-containers`: balance load across workers |
| `max-concurrent-downloads` | int | `10` | Maximum number of concurrent resource downloads per worker. Higher values improve throughput but increase network and disk I/O load. |
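A sketch of tuning an existing worker application with these options (the application name `worker` is assumed):

```shell
# Increase job parallelism and download throughput on a worker application
juju config worker worker-procs=4 max-concurrent-downloads=20

# Tag the worker so pipeline jobs can target it
juju config worker tag="high-mem,ssd"
```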
| Option | Type | Default | Description |
|---|---|---|---|
| `containerd-dns-proxy-enable` | boolean | `false` | Enable the containerd DNS proxy for container name resolution. When `false`, containers use external DNS servers directly (specified by `containerd-dns-server`). |
| `containerd-dns-server` | string | `1.1.1.1,8.8.8.8` | Comma-separated list of DNS servers for containerd containers. Used when `containerd-dns-proxy-enable` is `false`. Example: `"1.1.1.1,8.8.8.8"` |
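For instance, to point containers at internal resolvers, or to switch to the DNS proxy instead (the server addresses here are placeholders):

```shell
# Containers resolve names via the listed servers directly
juju config concourse-ci containerd-dns-server="10.0.0.2,10.0.0.3"

# Or let the containerd DNS proxy handle container name resolution
juju config concourse-ci containerd-dns-proxy-enable=true
```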
| Option | Type | Default | Description |
|---|---|---|---|
| `compute-runtime` | string | `none` | GPU compute runtime to enable:<br>• `none`: no GPU support (default)<br>• `cuda`: enable NVIDIA CUDA GPU support (requires NVIDIA GPU hardware and host drivers)<br>• `rocm`: enable AMD ROCm GPU support (requires AMD GPU hardware)<br>When enabled, automatically installs the container toolkit and configures GPU passthrough. The worker is tagged with its GPU capabilities for job targeting. |
| `gpu-device-ids` | string | `all` | GPU device IDs to expose to the worker (comma-separated). Use `all` to expose all GPUs, or specify devices like `"0,1"`. Only used when `compute-runtime` is set to `cuda` or `rocm`. |
GPU support (`compute-runtime`) requires additional setup:

- Host GPU drivers must be installed (e.g., the `amdgpu` kernel module for ROCm)
- In LXC environments, the GPU must be passed through to the container (e.g., `lxc config device add <container> gpu0 gpu`)
| Option | Type | Default | Description |
|---|---|---|---|
| `log-level` | string | `info` | Logging level for Concourse components. Valid values: `debug`, `info`, `warn`, `error`. Higher verbosity is useful for troubleshooting but generates more log data. |
| `enable-metrics` | boolean | `false` | Enable a Prometheus metrics endpoint on port 9391 and a per-job status exporter on port 9358. When enabled, installs and runs the `concourse-exporter` service, which exposes job-level metrics. Integrate with Prometheus using `juju integrate <app>:monitoring prometheus:target`. |
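Once metrics are enabled, the two endpoints can be checked from any host that can reach the unit. The `/metrics` path is an assumption (it is the Prometheus convention); the ports come from the table above.

```shell
# Enable metrics, then probe the exposed endpoints
juju config concourse-ci enable-metrics=true
curl http://<unit-ip>:9391/metrics   # Concourse metrics endpoint
curl http://<unit-ip>:9358/metrics   # per-job status exporter (path assumed)
```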
Configure HashiCorp Vault for credential management in Concourse CI pipelines. All Vault options are optional and only used when vault-url is set.
| Option | Type | Default | Description |
|---|---|---|---|
| `vault-url` | string | `""` | URL of the Vault server. If set, enables Vault credential management. Example: `https://vault.example.com:8200` |
| `vault-auth-backend` | string | `""` | Vault authentication backend (e.g., `approle`, `token`). |
| `vault-auth-backend-max-ttl` | string | `""` | Maximum TTL (time to live) for the Vault authentication backend token. Example: `"1h"`, `"30m"` |
| `vault-auth-param` | string | `""` | Comma-separated key-value pairs for the selected auth backend. Example: `"role_id:...,secret_id:..."` |
| `vault-ca-cert` | string | `""` | Path to a PEM-encoded CA certificate file to use for TLS verification of the Vault server. |
| `vault-client-cert` | string | `""` | Path to a PEM-encoded client certificate for TLS authentication to Vault. |
| `vault-client-key` | string | `""` | Path to an unencrypted, PEM-encoded private key for TLS authentication to Vault. |
| `vault-client-token` | string | `""` | Vault client token for authentication. |
| `vault-lookup-templates` | string | `""` | Vault lookup templates for custom secret path resolution. |
| `vault-namespace` | string | `""` | Vault namespace for multi-tenancy (a Vault Enterprise feature). |
| `vault-path-prefix` | string | `""` | Prefix for all secret paths in Vault. Example: `"/concourse/my-team"` |
| `vault-shared-path` | string | `""` | Shared path for Vault secrets accessible across teams. |
```shell
# Deploy web server on custom port
juju deploy concourse-ci-machine web --channel edge \
  --config mode=web \
  --config web-port=8080 \
  --config external-url=https://ci.example.com
```

```shell
# Deploy worker with NVIDIA GPU support
juju deploy concourse-ci-machine gpu-worker --channel edge \
  --config mode=worker \
  --config compute-runtime=cuda \
  --config gpu-device-ids=all \
  --config tag="cuda,gpu"
```

```shell
# Deploy worker with AMD ROCm GPU support
juju deploy concourse-ci-machine rocm-worker --channel edge \
  --config mode=worker \
  --config compute-runtime=rocm \
  --config tag="rocm,amd-gpu"
```

```shell
# Deploy worker with increased parallelism
juju deploy concourse-ci-machine worker --channel edge \
  --config mode=worker \
  --config worker-procs=4 \
  --config max-concurrent-downloads=20 \
  --config tag="high-perf,ssd"
```

```shell
# Enable Prometheus metrics
juju config concourse-ci enable-metrics=true
juju integrate concourse-ci:monitoring prometheus:target
```

```shell
# Configure Vault for credential management
juju config concourse-ci \
  vault-url=https://vault.example.com:8200 \
  vault-auth-backend=approle \
  vault-auth-param="role_id:abc123,secret_id:def456" \
  vault-path-prefix="/concourse/prod"
```

```shell
# View all configuration options and current values
juju config concourse-ci

# View a specific option
juju config concourse-ci web-port

# Reset an option to its default
juju config concourse-ci --reset web-port
```
Most configuration changes take effect immediately with an automatic service restart:

- Restarted on change: `web-port`, `log-level`, `worker-procs`, `compute-runtime`
- `version`: requires an upgrade action after the config change
- `mode`: cannot be changed after deployment