# Mounting Folders Into Tasks

Learn how to automatically mount any folder into your Concourse CI tasks: zero configuration, maximum flexibility.

In this guide you will learn:

- How the automatic folder discovery system works
- Mounting read-only folders for datasets, dependencies, and configs
- Creating writable folders for outputs, caches, and artifacts
- Using multiple folders in complex workflows
## The Magic of /srv

The Concourse CI Machine charm automatically discovers and mounts any folder under `/srv` in your worker containers. This simple convention gives you a lot of flexibility:

- **Zero configuration**: just create a folder under `/srv` and it is automatically available in all tasks.
- **Safe by default**: all folders are read-only unless explicitly marked writable with a `_writable` or `_rw` suffix.
- **Works everywhere**: GPU workers, CPU workers, auto mode, and distributed mode are all supported.
- **No pipeline changes**: your existing pipelines work as-is; just reference the folder paths.
## Folder Naming Rules

Understanding the naming convention is key to using the system effectively:

| Folder Name | Mount Mode | Use Case |
|---|---|---|
| `/srv/datasets` | Read-only | Training data, reference files |
| `/srv/models` | Read-only | Pre-trained models, weights |
| `/srv/configs` | Read-only | Configuration files, schemas |
| `/srv/outputs_writable` | Read-write | Task results, trained models |
| `/srv/cache_rw` | Read-write | Build cache, temporary files |
| `/srv/artifacts_writable` | Read-write | Build artifacts, reports |

Folders whose names end in `_writable` or `_rw` get write permissions. Everything else is read-only for data safety.
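The naming rule is simple enough to sketch in a few lines. The snippet below is a hypothetical illustration of the convention, not the charm's actual implementation: it assumes the discovery pass just scans `/srv` and decides each folder's mount mode from its name.

```python
import os

# Suffixes that mark a folder as writable, per the naming rules above.
WRITABLE_SUFFIXES = ("_writable", "_rw")

def mount_mode(folder_name):
    """Return 'rw' for names ending in _writable or _rw, else 'ro'."""
    return "rw" if folder_name.endswith(WRITABLE_SUFFIXES) else "ro"

def classify_folders(srv_root="/srv"):
    """Map each directory under srv_root to its mount mode."""
    return {
        entry.name: mount_mode(entry.name)
        for entry in os.scandir(srv_root)
        if entry.is_dir()
    }
```

For example, `mount_mode("outputs_writable")` returns `"rw"` while `mount_mode("outputs")` returns `"ro"`, matching the table above.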
## Quick Start: Mount Your First Folder

### 1. Deploy a Concourse CI worker

```bash
# Deploy web + worker
juju deploy concourse-ci-machine --channel edge web --config mode=web
juju deploy concourse-ci-machine --channel edge worker --config mode=worker
juju integrate web:tsa worker:flight

# Or use auto mode (simpler)
juju deploy concourse-ci-machine --channel edge concourse --config mode=auto -n 2
```
### 2. Find your worker's LXC container

```bash
# List LXC containers
lxc list

# Look for containers starting with "juju-"
# Example: juju-abc123-2 (worker unit)
```
### 3. Mount a folder from your host machine

```bash
# Create a dataset directory on the host
mkdir -p /data/my-datasets
echo "Hello from the host!" > /data/my-datasets/test.txt

# Mount it to /srv/datasets in the container (read-only)
lxc config device add juju-abc123-2 datasets disk \
  source=/data/my-datasets \
  path=/srv/datasets \
  readonly=true

# Verify the mount
lxc exec juju-abc123-2 -- ls -la /srv/datasets/
```
### 4. Test it in a Concourse CI task

```yaml
jobs:
  - name: test-folder-mount
    plan:
      - task: verify-mount
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: busybox
          run:
            path: sh
            args:
              - -c
              - |
                echo "=== Checking /srv ==="
                ls -la /srv/
                echo ""
                echo "=== Reading test file ==="
                cat /srv/datasets/test.txt
                echo ""
                echo "✅ Mount verification successful!"
```
## Real-World Example: ML Training Pipeline

Let's build a complete machine learning workflow with multiple folders.

### Step 1: Set Up Multiple Folders

```bash
# On the host: create the directory structure
mkdir -p /data/ml-workflow/{datasets,pretrained,outputs}

# Add some example data
echo "Training data goes here" > /data/ml-workflow/datasets/train.txt
echo "Pretrained model" > /data/ml-workflow/pretrained/model.pth

# Mount datasets (read-only)
lxc config device add juju-abc123-2 datasets disk \
  source=/data/ml-workflow/datasets \
  path=/srv/datasets \
  readonly=true

# Mount pretrained models (read-only)
lxc config device add juju-abc123-2 pretrained disk \
  source=/data/ml-workflow/pretrained \
  path=/srv/pretrained \
  readonly=true

# Mount the outputs directory (WRITABLE - note the suffix!)
lxc config device add juju-abc123-2 outputs disk \
  source=/data/ml-workflow/outputs \
  path=/srv/outputs_writable

# Verify all mounts
lxc exec juju-abc123-2 -- ls -la /srv/
```
### Step 2: Create the Pipeline

```yaml
jobs:
  - name: train-model
    plan:
      - task: training
        tags: [cuda]  # GPU worker
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: pytorch/pytorch
              tag: latest
          run:
            path: python3
            args:
              - -c
              - |
                import os

                # Read from the read-only folders
                datasets_dir = '/srv/datasets'
                pretrained_dir = '/srv/pretrained'
                print(f"Loading data from {datasets_dir}")
                print(f"Available files: {os.listdir(datasets_dir)}")
                print(f"\nLoading pretrained model from {pretrained_dir}")
                print(f"Available models: {os.listdir(pretrained_dir)}")

                # Write to the writable folder
                output_dir = '/srv/outputs_writable'
                output_file = f'{output_dir}/trained-model.pth'
                print("\nTraining model...")
                # Your training code here
                print(f"Saving model to {output_file}")
                with open(output_file, 'w') as f:
                    f.write("Trained model data")
                print("✅ Training complete!")
```
### Step 3: Retrieve Results

```bash
# After the pipeline runs, check the outputs on the host
ls -la /data/ml-workflow/outputs/
cat /data/ml-workflow/outputs/trained-model.pth
```
## Common Use Cases

### Build Cache for Faster Builds

```bash
# Mount a build cache directory (writable - note the _rw suffix)
lxc config device add juju-abc123-2 cache disk \
  source=/cache/build-cache \
  path=/srv/cache_rw
```

Then use it in a pipeline:

```yaml
jobs:
  - name: build-app
    plan:
      - task: compile
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: golang, tag: latest}
          run:
            path: sh
            args:
              - -c
              - |
                export GOCACHE=/srv/cache_rw/go-build
                export GOMODCACHE=/srv/cache_rw/go-mod
                # Subsequent builds will be much faster!
                go build -o app ./...
```
### Shared Configuration Files

```bash
# Mount shared configs (read-only)
lxc config device add juju-abc123-2 configs disk \
  source=/configs/shared \
  path=/srv/configs \
  readonly=true
```

```yaml
jobs:
  - name: deploy
    plan:
      - task: deployment
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: ubuntu}
          run:
            path: bash
            args:
              - -c
              - |
                # Read the shared configuration
                CONFIG_FILE=/srv/configs/deployment.yaml
                if [ -f "$CONFIG_FILE" ]; then
                  echo "Using config from $CONFIG_FILE"
                  cat "$CONFIG_FILE"
                else
                  echo "Config not found!"
                  exit 1
                fi
```
### Test Artifacts and Reports

```bash
# Mount a reports directory (writable)
lxc config device add juju-abc123-2 reports disk \
  source=/reports/test-results \
  path=/srv/reports_writable
```

```yaml
jobs:
  - name: run-tests
    plan:
      - task: test
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: node, tag: latest}
          run:
            path: sh
            args:
              - -c
              - |
                # Run tests and save the reports
                npm test -- --reporter=json > /srv/reports_writable/test-results.json
                echo "Tests complete. Results saved to /srv/reports_writable/"
```
## Advanced: Multiple Workers with Different Folders

Each worker can have its own set of mounted folders:

```bash
# GPU worker 1: ML datasets
lxc config device add juju-abc123-2 datasets disk \
  source=/data/ml-datasets \
  path=/srv/datasets \
  readonly=true

# CPU worker 2: build tools
lxc config device add juju-xyz789-3 tools disk \
  source=/tools/build \
  path=/srv/build-tools \
  readonly=true
```

Use tags to target the worker that has the folders a job needs:

```yaml
jobs:
  - name: ml-training
    plan:
      - task: train
        tags: [cuda]  # Routes to the GPU worker with /srv/datasets
        # ...
  - name: compile
    plan:
      - task: build
        tags: [cpu]  # Routes to the CPU worker with /srv/build-tools
        # ...
```
## Monitoring Folder Status

The worker automatically reports the number of mounted folders in its status:

```bash
juju status worker

# Example output:
# worker/0  active  idle  10.0.0.5  Worker ready (3 folders: 2 RO, 1 RW)
#                                   └─ Folder status
```
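If you script against this status line, the folder summary can be pulled out with a small parser. The exact message format is an assumption based on the example output above:

```python
import re

def parse_folder_status(status):
    """Extract folder counts from a worker status message such as
    'Worker ready (3 folders: 2 RO, 1 RW)'. Returns None if absent."""
    m = re.search(r"\((\d+) folders: (\d+) RO, (\d+) RW\)", status)
    if not m:
        return None
    total, ro, rw = (int(g) for g in m.groups())
    return {"total": total, "ro": ro, "rw": rw}

print(parse_folder_status("Worker ready (3 folders: 2 RO, 1 RW)"))
# → {'total': 3, 'ro': 2, 'rw': 1}
```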
## Troubleshooting

### Folder not visible in tasks

**Check 1: The path must be under /srv**

```bash
# ✅ Correct - under /srv
path=/srv/datasets

# ❌ Wrong - not under /srv
path=/mnt/datasets
```

**Check 2: Verify the LXC mount**

```bash
lxc config device show juju-abc123-2
lxc exec juju-abc123-2 -- ls -la /srv/
```
### Cannot write to a writable folder

**Check 1: The folder name must end with _writable or _rw**

```bash
# ✅ Writable
path=/srv/outputs_writable
path=/srv/cache_rw

# ❌ Read-only (no suffix)
path=/srv/outputs
```

**Check 2: The LXC device must not be read-only**

```bash
# ✅ Correct for a writable folder
lxc config device add ... \
  path=/srv/outputs_writable
  # No readonly=true!

# ❌ Wrong - conflicts with the _writable suffix
lxc config device add ... \
  path=/srv/outputs_writable \
  readonly=true  # Don't do this!
```
### Permission denied errors

```bash
# Make files readable by all users
chmod -R a+rX /data/my-datasets/

# For writable folders, ensure write permissions
# (777 is the permissive quick fix; prefer tighter modes in production)
chmod -R 777 /data/outputs/
```
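Before mounting a read-only folder, you can sanity-check that every entry under it is actually readable by other users. The helper below is a sketch of that check, not something the charm provides; it inspects the same mode bits that `chmod -R a+rX` sets:

```python
import os
import stat

def world_readable(path):
    """True if `path` and everything under it is readable (and, for
    directories, traversable) by other users."""
    def ok(p, is_dir):
        mode = os.stat(p).st_mode
        # Directories need r+x for "others"; plain files need r.
        need = stat.S_IROTH | (stat.S_IXOTH if is_dir else 0)
        return mode & need == need

    if not ok(path, True):
        return False
    for root, dirs, files in os.walk(path):
        if not all(ok(os.path.join(root, d), True) for d in dirs):
            return False
        if not all(ok(os.path.join(root, f), False) for f in files):
            return False
    return True
```

Running this against the host directory before `lxc config device add` can save a round of "permission denied" debugging inside the task container.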
## What You've Accomplished

- ✅ Understanding the `/srv` automatic discovery system
- ✅ Mounting read-only folders for datasets and dependencies
- ✅ Creating writable folders for outputs and caches
- ✅ Building complex workflows with multiple folders
- ✅ Troubleshooting common mounting issues
## Next Steps

- **Shared Storage** - Share binaries across multiple workers for faster upgrades
- **Scale Workers** - Add more workers to handle increased load
- **Container Runtime Deep Dive** - Understand how the OCI wrapper works