Architecture Reference

Complete system architecture and component specification

Overview

The Concourse CI Machine Charm deploys Concourse CI components on bare metal, VMs, or LXD containers managed by Juju. The architecture consists of three primary components: Web Server (UI, API, TSA, Scheduler), Worker (container runtime, job execution), and PostgreSQL (data persistence). Components communicate via HTTP/HTTPS (UI/API) and SSH (TSA for worker registration).

Note: This reference documents the charm's architecture. For Concourse CI's internal architecture, see Concourse CI Internals.

Component Diagram

[Component diagram: the Web Server (ATC) hosts the Web UI/API (:8080), the TSA SSH gateway (:2222), and the internal Scheduler (job queue). It connects to the PostgreSQL database (:5432) over TCP. Worker 1 and Worker 2, each running the containerd container runtime, connect to the web server via SSH/TSA.]

Architecture Overview: The web server (ATC) coordinates all operations. PostgreSQL stores pipeline definitions, build history, and resource metadata. Workers connect via SSH to TSA and execute tasks in isolated containers. The scheduler assigns jobs to available workers based on tags and resource requirements.

Core Components

Web Server

The web server component hosts the Concourse web node, which provides the UI, API, TSA (worker gateway), and job scheduler.

| Subcomponent | Port | Protocol | Purpose |
|---|---|---|---|
| Web UI | Configurable (default 8080) | HTTP/HTTPS | Browser-based user interface for pipeline management, build visualization, and team administration |
| API | Same as Web UI | HTTP/HTTPS | RESTful API for fly CLI, webhooks, and programmatic access to pipelines, jobs, and resources |
| TSA | 2222 | SSH | SSH gateway for worker authentication and registration (the airport-themed name nods to the Transportation Security Administration) |
| Scheduler | Internal | N/A | Job scheduling, resource checking, container placement, and build orchestration |
| Metrics | 9391 | HTTP | Prometheus metrics endpoint (optional) |

User: concourse (UID: dynamically allocated by Juju)
Service: concourse-server.service
Configuration: /var/lib/concourse/config.env
Data Directory: /var/lib/concourse/

Worker

The worker component executes build tasks in isolated containers using containerd runtime. Workers connect to the web server via TSA and receive job assignments from the scheduler.

| Subcomponent | Port | Protocol | Purpose |
|---|---|---|---|
| Worker Process | N/A | N/A | Registers with TSA, executes build tasks, manages container lifecycle |
| Container Runtime | N/A | N/A | containerd with runc OCI runtime for task container isolation |
| Garden | 7777 | HTTP | Concourse's container management API (internal, listens on localhost) |
| Baggageclaim | 7788 | HTTP | Volume management API for resource caching (internal, listens on localhost) |
| Metrics | 9391 | HTTP | Prometheus metrics endpoint (optional) |

User: root (required for container management)
Service: concourse-worker.service
Configuration: /var/lib/concourse/worker-config.env (or /mnt/concourse-shared/worker-config.env in shared storage mode)
Data Directory: /var/lib/concourse/worker/

Note: Workers run as root because containerd requires privileged operations for namespace creation, cgroup management, and network configuration. Task containers are still isolated using Linux namespaces, cgroups, and seccomp profiles.

PostgreSQL (External)

PostgreSQL database stores all Concourse persistent data. Provided by external PostgreSQL charm connected via Juju relation.

| Data Type | Description |
|---|---|
| Pipelines | Pipeline configurations, job definitions, resource types |
| Builds | Build metadata, logs, statuses, timestamps |
| Resources | Resource versions, metadata, check history |
| Authentication | User accounts, teams, roles, OAuth tokens |
| Workers | Worker registration, heartbeat, tags, state |

Required Version: PostgreSQL 16/stable
Connection: Via Juju postgresql_client interface using Juju secrets
Default Port: 5432
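The connection settings above combine into a standard libpq-style URL. A minimal sketch, assuming the placeholder host, database, and user values shown elsewhere in this reference; the `postgres_url` helper itself is illustrative, not part of the charm:

```python
from urllib.parse import quote

def postgres_url(host: str, port: int, database: str, user: str, password: str) -> str:
    """Build a libpq-style URL from the charm's CONCOURSE_POSTGRES_* settings."""
    # Percent-encode credentials in case the Juju-generated password
    # contains characters that are special in URLs.
    return (
        f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{host}:{port}/{database}"
    )

print(postgres_url("10.0.0.10", 5432, "concourse", "concourse_user", "s3cret/pw"))
# → postgresql://concourse_user:s3cret%2Fpw@10.0.0.10:5432/concourse
```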

Key Directories

| Path | Purpose | Owner | Permissions |
|---|---|---|---|
| /opt/concourse/ | Concourse binaries (concourse, fly) | root:root | 755 |
| /opt/bin/ | Custom wrapper scripts (runc-wrapper, runc-gpu-wrapper) | root:root | 755 |
| /var/lib/concourse/ | Data, configuration, keys, worker runtime | concourse:concourse | 755 |
| /var/lib/concourse/keys/ | TSA host key, TSA public key, worker keys, session signing key | concourse:concourse | 700 |
| /var/lib/concourse/worker/ | Worker runtime directory (volumes, containers, state) | root:root | 755 |
| /var/lib/concourse/bin/ | Symlink to custom OCI runtime wrapper (runc) | root:root | 755 |
| /var/log/concourse-ci.log | Charm-generated logs (installation, upgrades, errors) | root:root | 644 |
| /srv/ | Host folders for automatic mounting into task containers | varies | varies |
| /etc/containerd/ | containerd configuration (config.toml) | root:root | 755 |

Shared Storage Mode: When shared-storage=lxc, /var/lib/concourse/ is mounted from the host via an LXC disk device. Binaries are shared across units, reducing per-worker disk usage by ~60% (150 MB → 60 MB).
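The ownership and mode values above can be verified programmatically. A minimal sketch, using a temporary directory as a stand-in for /var/lib/concourse/keys/ (the 0700 mode comes from the table; everything else is illustrative):

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Stand-in for /var/lib/concourse/keys/, which the table lists as mode 700.
    keys_dir = os.path.join(root, "keys")
    os.makedirs(keys_dir, mode=0o700)
    os.chmod(keys_dir, 0o700)  # makedirs mode is subject to umask, so set it explicitly

    mode = stat.S_IMODE(os.stat(keys_dir).st_mode)
    print(f"keys dir mode: {mode:o}")  # → keys dir mode: 700
```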

Systemd Services

concourse-server.service (Web Server)

```ini
[Unit]
Description=Concourse CI Web Server
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
Type=simple
User=concourse
Group=concourse
EnvironmentFile=/var/lib/concourse/config.env
ExecStart=/opt/concourse/concourse web
Restart=on-failure
RestartSec=10s
AmbientCapabilities=CAP_NET_BIND_SERVICE
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
```

Key Features:

- Runs as the unprivileged concourse user; CAP_NET_BIND_SERVICE allows binding low ports without root
- NoNewPrivileges=true blocks privilege escalation by child processes
- Starts after the network and PostgreSQL are available; restarts automatically on failure after 10 seconds

concourse-worker.service (Worker)

```ini
[Unit]
Description=Concourse CI Worker
After=network-online.target containerd.service
Requires=containerd.service
Wants=network-online.target

[Service]
Type=simple
User=root
Group=root
EnvironmentFile=/var/lib/concourse/worker-config.env
ExecStart=/opt/concourse/concourse worker
Restart=on-failure
RestartSec=10s

[Install]
WantedBy=multi-user.target
```

Key Features:

- Runs as root, which containerd requires for namespace, cgroup, and network operations
- Hard dependency on containerd.service (Requires= plus After= ordering)
- Restarts automatically on failure after 10 seconds

containerd.service (Container Runtime)

Provided by Ubuntu's containerd package. Managed by systemd, configuration at /etc/containerd/config.toml.

Configuration Files

Web Server: /var/lib/concourse/config.env

```bash
# Example configuration (generated by charm)
CONCOURSE_BIND_IP=0.0.0.0
CONCOURSE_BIND_PORT=8080
CONCOURSE_EXTERNAL_URL=http://10.0.0.5:8080
CONCOURSE_POSTGRES_HOST=10.0.0.10
CONCOURSE_POSTGRES_PORT=5432
CONCOURSE_POSTGRES_DATABASE=concourse
CONCOURSE_POSTGRES_USER=concourse_user
CONCOURSE_POSTGRES_PASSWORD=<from-juju-secret>
CONCOURSE_SESSION_SIGNING_KEY=/var/lib/concourse/keys/session_signing_key
CONCOURSE_TSA_HOST_KEY=/var/lib/concourse/keys/tsa_host_key
CONCOURSE_TSA_AUTHORIZED_KEYS=/var/lib/concourse/keys/authorized_worker_keys
CONCOURSE_ADD_LOCAL_USER=admin:<generated-password>
CONCOURSE_MAIN_TEAM_LOCAL_USER=admin
CONCOURSE_LOG_LEVEL=info
```
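Because the charm generates config.env as plain KEY=VALUE lines, a small parser is enough to sanity-check it. A sketch, assuming a required-key list drawn from the example above; the parser is illustrative, not part of the charm:

```python
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env

# Keys taken from the example configuration above.
REQUIRED = {
    "CONCOURSE_BIND_PORT",
    "CONCOURSE_EXTERNAL_URL",
    "CONCOURSE_POSTGRES_HOST",
    "CONCOURSE_SESSION_SIGNING_KEY",
    "CONCOURSE_TSA_HOST_KEY",
}

sample = """\
# Example configuration (generated by charm)
CONCOURSE_BIND_PORT=8080
CONCOURSE_EXTERNAL_URL=http://10.0.0.5:8080
CONCOURSE_POSTGRES_HOST=10.0.0.10
CONCOURSE_SESSION_SIGNING_KEY=/var/lib/concourse/keys/session_signing_key
CONCOURSE_TSA_HOST_KEY=/var/lib/concourse/keys/tsa_host_key
"""

env = parse_env(sample)
missing = REQUIRED - env.keys()
print("missing:", sorted(missing))  # → missing: []
```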

Worker: /var/lib/concourse/worker-config.env

```bash
# Example configuration (generated by charm)
PATH=/opt/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
CONCOURSE_WORK_DIR=/var/lib/concourse/worker
CONCOURSE_TSA_HOST=10.0.0.5:2222
CONCOURSE_TSA_PUBLIC_KEY=/var/lib/concourse/keys/tsa_host_key.pub
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/var/lib/concourse/keys/worker_key
CONCOURSE_GARDEN_BIND_PORT=7777
CONCOURSE_BAGGAGECLAIM_BIND_PORT=7788
CONCOURSE_RUNTIME=containerd
CONCOURSE_TAG=cuda,gpu-count=1
CONCOURSE_LOG_LEVEL=info
```

Critical: Worker configuration sets PATH=/opt/bin:... so that the custom OCI runtime wrappers (runc-wrapper, runc-gpu-wrapper) used for folder mounting and GPU passthrough take precedence.
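The effect of placing /opt/bin first in PATH can be demonstrated with a lookup. A sketch using temporary directories as stand-ins for /opt/bin and /usr/bin; only the first-match-wins behavior it shows is taken from the text above, the rest is illustrative:

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as opt_bin, tempfile.TemporaryDirectory() as usr_bin:
    # Create a stand-in "runc" executable in both directories.
    for directory in (opt_bin, usr_bin):
        path = os.path.join(directory, "runc")
        with open(path, "w") as f:
            f.write("#!/bin/sh\n")
        os.chmod(path, 0o755)

    # The wrapper directory comes first, as /opt/bin does in the worker's PATH.
    search_path = os.pathsep.join([opt_bin, usr_bin])
    resolved = shutil.which("runc", path=search_path)

    # Lookup resolves to the first PATH entry, mirroring how the worker
    # picks up the custom wrapper ahead of the stock runc.
    print(resolved == os.path.join(opt_bin, "runc"))  # → True
```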

Network Communication

| Source | Destination | Port | Protocol | Purpose |
|---|---|---|---|---|
| User Browser | Web Server | 8080 (configurable) | HTTP/HTTPS | Web UI access, pipeline management |
| fly CLI | Web Server | 8080 (configurable) | HTTP/HTTPS | API calls, pipeline operations, build logs |
| Worker | Web Server (TSA) | 2222 | SSH | Worker registration, heartbeat, job assignment |
| Web Server | PostgreSQL | 5432 | TCP | Database queries, persistence |
| Prometheus | Web Server / Worker | 9391 | HTTP | Metrics scraping |
| Prometheus | Web Server | 9358 | HTTP | Per-job metrics (concourse-exporter) |
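Reachability of the endpoints in the table reduces to a plain TCP connect. A sketch that stands up a local listener to play the role of a Concourse endpoint; in a real deployment you would point `port_open` at the web server's configured port or the TSA on 2222 (the helper itself is illustrative):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Local listener standing in for a Concourse endpoint.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
listener.listen(1)
host, port = listener.getsockname()

print(port_open(host, port))  # → True
listener.close()
print(port_open(host, port))  # refused once the listener is gone
```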

Security Model

Authentication

- The charm creates a local admin user (CONCOURSE_ADD_LOCAL_USER) and assigns it to the main team (CONCOURSE_MAIN_TEAM_LOCAL_USER).
- User accounts, teams, roles, and OAuth tokens are stored in PostgreSQL.
- Workers authenticate to the TSA over SSH with per-worker key pairs, validated against the authorized_worker_keys file.

Isolation

- Task containers run under containerd with the runc OCI runtime, isolated by Linux namespaces, cgroups, and seccomp profiles.
- The Garden (:7777) and Baggageclaim (:7788) APIs listen on localhost only.

Network Security

- UI and API traffic uses HTTP/HTTPS; all worker traffic is tunneled over SSH through the TSA on port 2222.
- The PostgreSQL password is delivered through Juju secrets via the postgresql_client interface.
Deployment Variations

mode=auto (Single Application)

[Diagram: a single concourse-ci application with Unit 0 (leader) running the Web Server (UI/API, TSA, Scheduler) and Unit 1 (follower) running a Worker (containerd, task execution). A peer relation distributes keys automatically; the PostgreSQL database (:5432) is attached via the postgresql relation.]

mode=web + mode=worker (Multiple Applications)

[Diagram: a web application (Unit 0: Web Server with UI/API, TSA, Scheduler) and a separate worker application (Unit 0: Worker with containerd task execution), connected by the tsa/flight relation for SSH key exchange. The PostgreSQL database (:5432) is attached to the web application via the postgresql relation.]

Deployment Flexibility: The mode=auto pattern is recommended for most deployments as it provides automatic scaling and zero-configuration key distribution. Use separate web + worker applications when you need independent scaling of workers or want to deploy workers in different environments/regions.

Further Reading