Introduction
Twerk is a distributed task execution system written in Rust — a port of Tork from Go. It lets you define jobs consisting of multiple tasks, each running inside its own container.
Why Twerk?
- Horizontally scalable — Add workers to handle more tasks
- Task isolation — Tasks run in containers with resource limits
- Multi-runtime — Docker, Podman, or Shell execution
- Retry with backoff — Configurable retry on failure
- Scheduled jobs — Cron-based scheduling with pause/resume
- Secrets management — Auto-redaction of sensitive values
- REST API — Full API for job, task, queue, node, and user management
- Health checks — Built-in liveness and readiness probes
Architecture
Client → Coordinator → Broker → Worker → Runtime (Docker/Podman/Shell)
              ↓
          Datastore (PostgreSQL)
- Coordinator — Receives jobs, schedules tasks, manages state
- Worker — Executes tasks via the configured runtime
- Broker — Routes tasks between Coordinator and Workers (RabbitMQ or In-Memory)
- Datastore — Persists all job, task, and node state (PostgreSQL)
Modes
| Mode | Description |
|---|---|
| standalone | All-in-one: Coordinator + Worker in a single process |
| coordinator | API server that schedules work (requires workers) |
| worker | Executes tasks by pulling from the broker |
Next Steps
- Installation — Get Twerk running
- Quick Start — Run your first job
- CLI Reference — All available commands
Installation
Requirements
- Rust 1.75+ — For building from source
- Bash-compatible shell — For the zero-dependency shell runtime quick start
- Docker or Podman — Optional, for image-based task execution
- PostgreSQL — Optional, for persistence
- RabbitMQ — Optional, for distributed mode
Download Binary
# Check releases for your platform
curl -L https://github.com/runabol/twerk/releases/latest/download/twerk-linux-x86_64.tar.gz | tar xz
./twerk --help
Build from Source
git clone https://github.com/runabol/twerk.git
cd twerk
cargo build --release -p twerk-cli
./target/release/twerk --help
For a local first run from the repo root, the checked-in config.toml already points Twerk at the in-memory broker/datastore and shell runtime:
./target/release/twerk server start standalone
Set up PostgreSQL
docker run -d \
--name twerk-postgres \
-p 5432:5432 \
-e POSTGRES_PASSWORD=twerk \
-e POSTGRES_USER=twerk \
-e POSTGRES_DB=twerk \
postgres:15.3
Run migration:
TWERK_DATASTORE_TYPE=postgres \
TWERK_DATASTORE_POSTGRES_DSN="host=localhost user=twerk password=twerk dbname=twerk port=5432 sslmode=disable" \
./twerk migration
Set up RabbitMQ (Distributed Mode)
docker run -d \
--name twerk-rabbitmq \
-p 5672:5672 \
-p 15672:15672 \
rabbitmq:3-management
Access management UI at http://localhost:15672 (guest/guest).
Configuration
Twerk is configured via TOML files or environment variables:
# Environment variable format
TWERK_<SECTION>_<KEY>=value
# Examples
TWERK_LOGGING_LEVEL=debug
TWERK_BROKER_TYPE=rabbitmq
TWERK_DATASTORE_TYPE=postgres
TWERK_RUNTIME_TYPE=docker
See Configuration for full reference.
Next Steps
- Quick Start — Run your first job
- CLI Reference — All commands
Quick Start
Get Twerk running locally with no Postgres, RabbitMQ, Docker, or Podman.
Start Twerk
Use the local-friendly in-memory and shell settings:
TWERK_BROKER_TYPE=inmemory \
TWERK_DATASTORE_TYPE=inmemory \
TWERK_RUNTIME_TYPE=shell \
./twerk server start standalone
If you built from source inside this repository, the checked-in config.toml already uses the same settings, so ./target/release/twerk server start standalone works from the repo root.
Twerk starts on http://localhost:8000.
Create a Job
Create hello-shell.yaml:
name: hello shell
tasks:
  - name: say hello
    run: |
      echo "hello from twerk"
Submit and Wait for Completion
curl -X POST 'http://localhost:8000/jobs?wait=true' \
-H "Content-Type: text/yaml" \
--data-binary @hello-shell.yaml
wait=true blocks until the job finishes, which makes the first-run flow much easier to verify.
Inspect the Run
curl http://localhost:8000/jobs
curl http://localhost:8000/jobs/<job-id>/log
Health Check
./twerk health
# or
curl http://localhost:8000/health
Distributed Mode
Run coordinator and workers separately when you want Postgres, RabbitMQ, and container-backed tasks:
# Terminal 1: Coordinator
TWERK_DATASTORE_TYPE=postgres \
TWERK_DATASTORE_POSTGRES_DSN="host=localhost user=twerk password=twerk dbname=twerk port=5432 sslmode=disable" \
TWERK_BROKER_TYPE=rabbitmq \
TWERK_BROKER_RABBITMQ_URL="amqp://guest:guest@localhost:5672/" \
./twerk server start coordinator
# Terminal 2: Worker
TWERK_BROKER_TYPE=rabbitmq \
TWERK_BROKER_RABBITMQ_URL="amqp://guest:guest@localhost:5672/" \
TWERK_RUNTIME_TYPE=docker \
./twerk server start worker
Container images require docker or podman. The zero-dependency quick start above uses the shell runtime instead.
Next Steps
- Jobs — Learn about job definitions
- Tasks — Configure task behavior
- CLI Reference — All available commands
- REST API — API reference
Architecture
Components
Coordinator
Tracks jobs, dispatches work to workers, handles retries and failures. Stateless and leaderless; does not run tasks.
Worker
Runs tasks via a runtime (Docker, Podman, or Shell).
Broker
Routes messages between Coordinator and Workers:
- RabbitMQ — Production-grade message broker
- In-Memory — For testing and single-node deployments
Datastore
Persists job and task state:
- PostgreSQL — Production database
- In-Memory — For testing
Runtime
Execution environment for tasks:
- Docker — Default, best isolation
- Podman — Daemonless Docker alternative
- Shell — Runs on host
Request Flow
Client → Coordinator → Broker → Worker → Runtime (Docker/Podman/Shell)
              ↓
          Datastore
1. Client submits a job via the REST API
2. Coordinator stores the job in the Datastore
3. Coordinator publishes tasks to the Broker
4. Worker receives tasks from the Broker
5. Worker executes tasks via the configured runtime
6. Worker reports results back via the Broker
7. Coordinator updates job state in the Datastore
Modes
| Mode | Coordinator | Worker |
|---|---|---|
| standalone | ✓ | ✓ |
| coordinator | ✓ | ✗ |
| worker | ✗ | ✓ |
CLI Reference
Commands
twerk server start
Start Twerk in a specific mode.
twerk server start <MODE> [OPTIONS]
| Mode | Description |
|---|---|
| standalone | All-in-one: Coordinator + Worker |
| coordinator | API server, requires separate workers |
| worker | Task executor, requires coordinator |
| Option | Description | Default |
|---|---|---|
| `--hostname <HOSTNAME>` | Coordinator hostname for workers | none |
Config is loaded from TWERK_CONFIG or the default config search paths. There is no --config CLI flag.
twerk migration
Run database migrations.
twerk migration [OPTIONS]
| Option | Description |
|---|---|
| `-y, --yes` | Skip confirmation prompt |
twerk migration reads the datastore type and Postgres DSN from config or TWERK_* environment variables.
twerk health
Check coordinator health.
twerk health [OPTIONS]
| Option | Description | Default |
|---|---|---|
| `-e, --endpoint <URL>` | Coordinator endpoint | http://localhost:8000 |
twerk version
Show the current CLI version.
twerk version [--json]
Text-mode version discovery commands are clean endpoints: they print only the version line to stdout, keep stderr empty, and exit 0.
Supported forms:
- `twerk --version` → `twerk <VERSION>`
- `twerk version` → `twerk <VERSION>`
- `twerk run --version` → `twerk-run <VERSION>`
- `twerk migration --version` → `twerk-migration <VERSION>`
- `twerk health --version` → `twerk-health <VERSION>`
Top-Level Flags
| Option | Description |
|---|---|
| `--json` | Emit machine-readable JSON for help, version, parse errors, and command failures on stdout |
| `--help` | Show CLI help |
| `--version` | Show the current version |
JSON Behavior
- Help discovery commands such as `twerk --json`, `twerk --json --help`, `twerk help --json`, and `twerk run --json --help` return JSON with a rendered `content` field.
- Version discovery commands such as `twerk --json --version` and `twerk version --json` return JSON on stdout, keep stderr empty, and exit `0`.
- JSON parse failures keep Clap exit code `2` and write structured error JSON to stdout.
- JSON command validation and runtime failures exit `1`, write structured error JSON to stdout, and keep stderr empty.
Environment Variables
All configuration can be set via environment variables:
TWERK_<SECTION>_<KEY>=value
Examples:
- `TWERK_LOGGING_LEVEL=debug`
- `TWERK_BROKER_TYPE=rabbitmq`
- `TWERK_DATASTORE_TYPE=postgres`
- `TWERK_RUNTIME_TYPE=docker`
See Configuration for full reference.
YAML Language Spec
This document defines the YAML shapes that Twerk currently accepts at the parser boundary, using shipped examples and core Rust schema types as evidence.
Scope
Twerk accepts two distinct YAML document families:
- Native job documents — parsed as `twerk_core::job::Job`
- ASL-style state machines — parsed as `twerk_core::asl::machine::StateMachine`
Do not treat them as the same grammar. They live side-by-side in examples/, but they are different schemas.
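To keep the two families visually distinct, here are two minimal hedged sketches. Field spellings follow the example-backed lists elsewhere in this spec; the exact set of required fields is not guaranteed by these sketches:

```yaml
# Sketch 1: native job document (parsed as twerk_core::job::Job)
name: minimal native job
tasks:
  - name: hello
    run: echo hello
---
# Sketch 2: ASL-style state machine (parsed as twerk_core::asl::machine::StateMachine)
comment: minimal machine
startAt: hello
states:
  hello:
    type: pass
    end: true
```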
Parser Contract
At the HTTP/parser boundary (crates/twerk-web/src/api/yaml.rs):
- Empty bodies are rejected.
- Bodies larger than `512 KiB` are rejected.
- NUL bytes are rejected.
- Duplicate YAML keys are rejected.
- Parser budgets are enforced with `max_depth = 64` and `max_nodes = 10_000`.
- YAML must be valid UTF-8.
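As one concrete illustration, a body like the following sketch violates the duplicate-key rule and would be rejected before any schema checks:

```yaml
name: first
name: second   # duplicate top-level key → rejected at the parser boundary
tasks: []
```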
Native Job Document
Backed by crates/twerk-core/src/job.rs and crates/twerk-core/src/task.rs.
Minimal shape
Example-backed top-level fields include:
- `name`
- `description`
- `inputs`
- `output`
- `tasks`
Examples:
- `examples/hello.yaml`
- `examples/hello-shell.yaml`
- `examples/bash-pipeline.yaml`
- `examples/split_and_stitch.yaml`
Tasks
Common task fields evidenced in shipped examples:
`name`, `var`, `image`, `run`, `entrypoint`, `env`, `files`, `retry`, `timeout`, `pre`, `post`, `mounts`, `parallel`, `each`, `subjob`
Examples:
- simple task: `examples/hello.yaml`
- retry: `examples/retry.yaml`, `examples/bash-retry.yaml`
- timeout: `examples/timeout.yaml`
- map-heavy task: `examples/split_and_stitch.yaml`
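A small composite sketch combining several of the example-backed task fields above (the values are illustrative and not taken from a shipped example):

```yaml
name: field tour
tasks:
  - name: greet
    image: alpine:latest
    env:
      GREETING: hello
    retry:
      limit: 2
    timeout: 5s
    run: echo "$GREETING"
```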
Maps
inputs
Top-level string map consumed by expressions like {{ inputs.key }}.
Evidence:
- `examples/split_and_stitch.yaml`
env
Task-level string map. Appears on normal tasks and nested tasks.
Evidence:
- `examples/each.yaml`
- `examples/bash-each.yaml`
- `examples/split_and_stitch.yaml`
files
Task-level string map from filename to inline file body.
Evidence:
- `examples/split_and_stitch.yaml`
Not example-backed yet
The core schema supports additional top-level maps/collections, but shipped examples/*.yaml do not currently prove the user-authored syntax of all of them:
`secrets`, `tags`, `webhooks`, `schedule`, `defaults`
They may exist in code and broader docs, but they are not all evidenced by shipped example YAML.
Control Structures
each
`each` contains:
- `list`
- an optional `var`
- a nested `task`
Example-backed iteration placeholder forms currently seen in examples include legacy underscore aliases:
`item_index`, `item_value`, `myitem_index`, `myitem_value`, `num_index`, `num_value`, `item_value_start`, `item_value_length`
Evidence:
- `examples/each.yaml`
- `examples/bash-each.yaml`
- `examples/split_and_stitch.yaml`
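A minimal `each` sketch following the structure above (the `var` value and list contents are illustrative):

```yaml
name: each sketch
inputs:
  items: '[1, 2, 3]'
tasks:
  - name: loop
    each:
      list: '{{ fromJSON(inputs.items) }}'
      var: myitem
      task:
        run: echo one iteration
```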
parallel
parallel contains a nested tasks list.
Evidence:
- `examples/parallel.yaml`
- `examples/pokemon-benchmark.yaml`
- `examples/twerk-massive-parallel.yaml`
- `examples/subjob.yaml`
- `examples/bash-subjob.yaml`
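A minimal `parallel` sketch showing the nested `tasks` list (task bodies are illustrative):

```yaml
name: parallel sketch
tasks:
  - name: fan-out
    parallel:
      tasks:
        - name: a
          run: echo A
        - name: b
          run: echo B
```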
subjob
subjob embeds another native-job-like task list and may include name and output.
Evidence:
- `examples/subjob.yaml`
- `examples/bash-subjob.yaml`
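A minimal `subjob` sketch (the nested `name` and task list follow the description above; the optional `output` is omitted, and the task bodies are illustrative):

```yaml
name: subjob sketch
tasks:
  - name: outer
    subjob:
      name: inner
      tasks:
        - name: inner-task
          run: echo from subjob
```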
Retry and Timeout
Native task retry shape evidenced in examples:
retry:
  limit: <integer>
Timeouts are evidenced as duration strings:
- `5s`
- `120s`
Evidence:
- `examples/retry.yaml`
- `examples/bash-retry.yaml`
- `examples/timeout.yaml`
- `examples/split_and_stitch.yaml`
ASL-Style State Machines
Backed by crates/twerk-core/src/asl/.
Top-level ASL fields evidenced in shipped examples:
`comment`, `startAt`, `states`
Evidence:
- `examples/asl-hello.yaml`
- `examples/asl-task-retry.yaml`
ASL state forms currently evidenced by examples
- `type: pass`
- `type: task`
- `next`
- `end: true`
- task-state `retry` list with: `errorEquals`, `intervalSeconds`, `maxAttempts`, `backoffRate`
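Putting those forms together, a hedged sketch of a task state with a retry list (the error name and numeric values are illustrative, not taken verbatim from the shipped examples):

```yaml
comment: retry sketch
startAt: work
states:
  work:
    type: task
    retry:
      - errorEquals: ["SomeError"]
        intervalSeconds: 2
        maxAttempts: 3
        backoffRate: 2.0
    end: true
```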
Important Gaps and Ambiguities
run interpolation docs are inconsistent
website/src/examples.md says run is passed raw and is not evaluated. But shipped examples include run values containing {{ ... }} expressions. This spec does not claim runtime interpolation semantics beyond what the parser accepts: run is parsed as a string, and examples prove that strings containing template markers are accepted.
Mixed example families
examples/ contains both native job YAML and ASL YAML. Tooling and tests must parse them into the correct target type. Parsing every example as Job is not a valid contract.
Example-backed vs code-backed
This spec is intentionally conservative. If a shape is supported in code but not evidenced by shipped examples, call that out explicitly instead of bluffing.
Recommended Reading Order
1. `website/src/QUICKSTART_YAML.md` for a quick tour
2. this document for parser-backed shape constraints
3. `website/src/examples.md` for usage-oriented examples
Jobs
A job is a collection of tasks executed in order.
Minimal Example
name: my job
tasks:
  - name: hello
    image: alpine:latest
    run: echo hello
Complete Job Reference
# ─── Identification ───────────────────────────────────────────────────────────
name: my job                          # Job name
description: Optional description     # Job description
tags: [tag1, tag2]                    # Metadata tags

# ─── Input & Secrets ─────────────────────────────────────────────────────────
inputs:                               # Non-sensitive inputs
  key: value
secrets:                              # Sensitive values (auto-redacted)
  api_key: secret123

# ─── Task Defaults ────────────────────────────────────────────────────────────
defaults:                             # Applied to all tasks
  retry:
    limit: 3                          # Max retry attempts
  limits:
    cpus: "1"                         # CPU limit
    memory: "512m"                    # Memory limit
  timeout: 10m                        # Task timeout
  queue: default                      # Queue name
  priority: 5                         # 0-9, lower values run first

# ─── Tasks ───────────────────────────────────────────────────────────────────
tasks:
  - name: first task
    image: alpine:latest
    run: echo hello

# ─── Scheduling ───────────────────────────────────────────────────────────────
schedule:
  cron: "0 2 * * *"                   # Cron expression

# ─── Notifications ───────────────────────────────────────────────────────────
webhooks:
  - url: https://example.com/hook
    event: job.StateChange            # or task.StateChange
    if: "{{ job.state == 'COMPLETED' }}"  # Conditional

# ─── Access Control ───────────────────────────────────────────────────────────
permissions:
  - role: admin                       # Or: user: username
  - user: someuser

# ─── Cleanup ──────────────────────────────────────────────────────────────────
autoDelete:
  after: 24h                          # Delete after completion
Job States
| State | Description |
|---|---|
| PENDING | Created, not yet scheduled |
| SCHEDULED | Tasks queued |
| RUNNING | Executing |
| COMPLETED | All tasks finished |
| FAILED | Task failed |
| CANCELLED | Manually cancelled |
Cron Syntax
┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of week (0-6)
│ │ │ │ │
* * * * *
Examples:
- `0 * * * *` — Every hour
- `0 2 * * *` — Daily at 2 AM
- `0/5 * * * *` — Every 5 minutes
- `0 0 * * 0` — Weekly on Sunday
Next Steps
- Tasks — Configure task behavior
Tasks
Tasks are the unit of execution in Twerk.
Minimal Task
- name: my task
  image: alpine:latest
  run: echo hello
What Fields Support Expressions?
Expressions using {{ }} syntax are supported in these fields:
- `name` — Task name
- `image` — Container image
- `var` — Output variable name
- `queue` — Target queue
- `if` — Conditional execution
- `env` values — Environment variables
- `files` keys/values — Files to create
Note: The run field is NOT evaluated — it’s passed as raw shell script.
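The contrast can be made concrete with a sketch: the same `{{ }}` marker is evaluated in an `env` value but passed through untouched in `run` (values are illustrative):

```yaml
name: evaluated vs raw
inputs:
  message: hi
tasks:
  - name: contrast
    image: alpine:latest
    env:
      MSG: '{{ inputs.message }}'     # evaluated before the task starts
    run: |
      echo "$MSG"                     # the shell expands the env var
      echo '{{ inputs.message }}'     # printed literally: run is raw
```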
Complete Task Reference
# ─── Identification ───────────────────────────────────────────────────────────
name: my task                         # Supports {{ }} expressions
description: Optional description     # Plain text only

# ─── Container ───────────────────────────────────────────────────────────────
image: ubuntu:mantic                  # Supports {{ }} expressions
cmd: ["/bin/sh", "-c"]                # Override entrypoint
entrypoint: ["/bin/sh", "-c"]         # Same as cmd
run: |
  echo hello  # RAW shell script - NO expression evaluation

# ─── Environment ──────────────────────────────────────────────────────────────
env:                                  # Values support {{ }} expressions
  KEY: value
  TEMPLATE: '{{ inputs.key }}'        # ✓ Works
files:                                # Keys and values support {{ }}
  config.json: '{"key": "{{ inputs.value }}"}'  # ✓ Works

# ─── Output ─────────────────────────────────────────────────────────────────
var: output_key                       # Supports {{ }} - store task output
                                      # Access via {{ tasks.output_key }}

# ─── Conditions ──────────────────────────────────────────────────────────────
if: "{{ job.state == 'SCHEDULED' }}"  # ✓ Works in if field

# ─── Routing ────────────────────────────────────────────────────────────────
queue: default                        # Supports {{ }} expressions
priority: 5

# ─── Execution Control ───────────────────────────────────────────────────────
timeout: 5m
retry:
  limit: 3

# ─── Resources ────────────────────────────────────────────────────────────────
limits:
  cpus: "0.5"
  memory: "256m"
  gpus: all
workdir: /app

# ─── Mounts ──────────────────────────────────────────────────────────────────
mounts:
  - type: volume
    target: /data

# ─── Pre/Post Tasks ─────────────────────────────────────────────────────────
pre:
  - name: setup
    image: alpine:latest
    run: echo setup
post:
  - name: cleanup
    image: alpine:latest
    run: echo cleanup

# ─── Parallel ────────────────────────────────────────────────────────────────
parallel:
  tasks:
    - name: a
      image: alpine:latest
      run: echo A
    - name: b
      image: alpine:latest
      run: echo B

# ─── Each (Loop) ────────────────────────────────────────────────────────────
each:
  list: '{{ fromJSON(inputs.items) }}'  # Expression for list
  concurrency: 2
  task:
    image: alpine:latest
    env:
      VALUE: '{{ item.value }}'       # ✓ Works in each tasks
      INDEX: '{{ item.index }}'
    run: echo $VALUE
Supported Expression Syntax
Input/Secret References
env:
  VALUE: '{{ inputs.my_input }}'      # Job input
  SECRET: '{{ secrets.my_secret }}'   # Job secret (auto-redacted)
Each Loop Variables
each:
  task:
    env:
      VALUE: '{{ item.value }}'       # Current item value
      INDEX: '{{ item.index }}'       # Current index (0-based)
Built-in Functions
env:
  JSON: '{{ fromJSON(inputs.json_string) }}'
  SEQ: '{{ sequence(1, 5) }}'         # [1, 2, 3, 4]
  LEN: '{{ len(tasks.results) }}'
  SPLIT: '{{ split("a,b,c", ",") }}'  # ["a", "b", "c"]
Conditional with if
if: "{{ job.state == 'SCHEDULED' }}" # Job must be scheduled
if: "{{ job.state != 'FAILED' }}" # Job not failed
Task States
| State | Description |
|---|---|
| CREATED | Task created |
| PENDING | Queued for execution |
| SCHEDULED | Assigned to worker |
| RUNNING | Executing |
| COMPLETED | Finished successfully |
| FAILED | Failed |
| CANCELLED | Cancelled |
| STOPPED | Stopped |
| SKIPPED | Skipped (conditional) |
Next Steps
- Runtimes — Docker, Podman, Shell
- Configuration — Full configuration reference
Runtimes
Twerk supports multiple execution environments for tasks.
Docker (Default)
Tasks run in isolated Docker containers using the bollard crate.
[runtime]
type = "docker"
Or via environment:
TWERK_RUNTIME_TYPE=docker
Docker-specific options:
[runtime.docker]
config = "" # Path to Docker config
privileged = false # Privileged container mode
image.ttl = "24h" # Image cache TTL
Podman
Daemonless Docker alternative:
[runtime]
type = "podman"
TWERK_RUNTIME_TYPE=podman
Podman-specific options:
[runtime.podman]
privileged = false
host.network = false # Use host network
Shell
Run directly on the host (for development/testing):
[runtime]
type = "shell"
TWERK_RUNTIME_TYPE=shell
Warning: Shell runtime executes arbitrary code on the host. Use only in trusted environments.
Shell-specific options:
[runtime.shell]
cmd = ["bash", "-c"] # Shell command
uid = "1000" # Run as specific user
gid = "1000" # Run with specific group
Environment Variables in Tasks
| Variable | Description |
|---|---|
| `TWERK_OUTPUT` | Write task output here |
| `TWERK_TASK_ID` | Current task ID |
| `TWERK_JOB_ID` | Current job ID |
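A sketch of how these variables combine with task output, assuming `TWERK_OUTPUT` is a writable file path the runtime collects after the task exits (that is an assumption; verify against your runtime):

```yaml
name: output sketch
tasks:
  - name: produce
    var: result                       # output stored under tasks.result
    run: echo "42" > "$TWERK_OUTPUT"  # assumes TWERK_OUTPUT is a file path
  - name: consume
    env:
      PREV: '{{ tasks.result }}'
    run: echo "$PREV"
```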
Next Steps
- Configuration — Full configuration reference
- REST API — API reference
Configuration
Twerk reads TOML configuration from files and environment variables.
Config File Locations
Twerk checks these locations in order:
1. `./config.local.toml`
2. `./config.toml`
3. `./config.local.yaml`
4. `./config.yaml`
5. `./config.local.yml`
6. `./config.yml`
7. `~/twerk/config.toml`
8. `~/twerk/config.yaml`
9. `/etc/twerk/config.toml`
10. `/etc/twerk/config.yaml`
.yaml and .yml filenames are legacy compatibility names only. Their contents must still be valid TOML.
Or specify a file directly:
TWERK_CONFIG=/path/to/config.toml twerk server start standalone
Environment Variables
Override any setting with:
TWERK_<SECTION>_<KEY>=value
Example: TWERK_LOGGING_LEVEL=debug
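The naming transform can be sketched in shell: uppercase the dotted config key, replace dots with underscores, and add the `TWERK_` prefix (the same rule the Environment Variable Reference table in this section follows):

```shell
# Derive the TWERK_* override name from a dotted config key.
key="broker.rabbitmq.url"
var="TWERK_$(echo "$key" | tr '.' '_' | tr '[:lower:]' '[:upper:]')"
echo "$var"   # TWERK_BROKER_RABBITMQ_URL
```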
Local Standalone Example
[logging]
level = "info"
format = "pretty"
[broker]
type = "inmemory"
[datastore]
type = "inmemory"
[coordinator]
address = "localhost:8000"
[worker]
address = "localhost:8001"
[runtime]
type = "shell"
[runtime.shell]
cmd = ["bash", "-c"]
uid = ""
gid = ""
This is the same shape as the repo-root config.toml used for the primary local docs journey.
Distributed Example
[broker]
type = "rabbitmq"
[broker.rabbitmq]
url = "amqp://guest:guest@localhost:5672/"
[datastore]
type = "postgres"
[datastore.postgres]
dsn = "host=localhost user=twerk password=twerk dbname=twerk port=5432 sslmode=disable"
[runtime]
type = "docker"
Use docker or podman when your task definitions include container images.
For a fuller reference, see configs/sample.config.toml in the repository.
Environment Variable Reference
| Config | Environment Variable |
|---|---|
| logging.level | TWERK_LOGGING_LEVEL |
| logging.format | TWERK_LOGGING_FORMAT |
| broker.type | TWERK_BROKER_TYPE |
| broker.rabbitmq.url | TWERK_BROKER_RABBITMQ_URL |
| datastore.type | TWERK_DATASTORE_TYPE |
| datastore.postgres.dsn | TWERK_DATASTORE_POSTGRES_DSN |
| runtime.type | TWERK_RUNTIME_TYPE |
| coordinator.address | TWERK_COORDINATOR_ADDRESS |
| worker.address | TWERK_WORKER_ADDRESS |
Next Steps
- REST API — API reference
- Quick Start — Get started
REST API
Base URL: http://localhost:8000
Health
GET /health
{ "status": "UP", "version": "0.1.0" }
Jobs
| Method | Path | Description |
|---|---|---|
| POST | /jobs | Submit a job |
| POST | /jobs?wait=true | Submit and block until completion |
| GET | /jobs | List jobs |
| GET | /jobs/{id} | Get job details |
| GET | /jobs/{id}/log | Fetch job logs |
| POST | /jobs/{id}/cancel | Cancel a job |
| PUT | /jobs/{id}/cancel | Cancel a job |
| PUT | /jobs/{id}/restart | Restart a job |
Example request body:
name: hello shell
tasks:
  - name: hello
    run: echo "hello from twerk"
Tasks
| Method | Path | Description |
|---|---|---|
| GET | /tasks/{id} | Get task details |
| GET | /tasks/{id}/log | Fetch task logs |
Scheduled Jobs
| Method | Path | Description |
|---|---|---|
| POST | /scheduled-jobs | Create a scheduled job |
| GET | /scheduled-jobs | List scheduled jobs |
| GET | /scheduled-jobs/{id} | Get a scheduled job |
| PUT | /scheduled-jobs/{id}/pause | Pause a scheduled job |
| PUT | /scheduled-jobs/{id}/resume | Resume a scheduled job |
| DELETE | /scheduled-jobs/{id} | Delete a scheduled job |
Queues
| Method | Path | Description |
|---|---|---|
| GET | /queues | List queues |
| GET | /queues/{name} | Get queue details |
| DELETE | /queues/{name} | Delete a queue |
System
| Method | Path | Description |
|---|---|---|
| GET | /nodes | List nodes |
| GET | /nodes/{id} | Get node details |
| GET | /metrics | Fetch metrics |
| POST | /users | Create a user |
Triggers
| Method | Path | Description |
|---|---|---|
| GET | /api/v1/triggers | List triggers |
| POST | /api/v1/triggers | Create a trigger |
| GET | /api/v1/triggers/{id} | Get a trigger |
| PUT | /api/v1/triggers/{id} | Update a trigger |
| DELETE | /api/v1/triggers/{id} | Delete a trigger |
OpenAPI
GET /openapi.json
CLI Integration
# Submit and wait for completion
curl -X POST 'http://localhost:8000/jobs?wait=true' \
-H "Content-Type: text/yaml" \
--data-binary @examples/hello-shell.yaml
# Inspect the resulting job and logs
curl http://localhost:8000/jobs/$JOB_ID
curl http://localhost:8000/jobs/$JOB_ID/log
Next Steps
- Examples — Complete workflows
Examples
Real-world workflow examples.
Simple Job
name: hello world
tasks:
  - name: say hello
    image: ubuntu:mantic
    run: echo hello world
Using Inputs
Inputs can be used in env values and other fields (but NOT in run):
name: input example
inputs:
  message: hello world
  count: 5
tasks:
  - name: use inputs
    image: alpine:latest
    env:
      MESSAGE: '{{ inputs.message }}'
      COUNT: '{{ inputs.count }}'
    run: |
      for i in $(seq 1 $COUNT); do
        echo "$MESSAGE"
      done
Each Task (Loop)
Use each to run a task for each item in a list:
name: process items
inputs:
  items: '[1, 2, 3, 4, 5]'
tasks:
  - name: process each
    each:
      list: '{{ fromJSON(inputs.items) }}'
      concurrency: 2
      task:
        image: alpine:latest
        env:
          ITEM: '{{ item.value }}'
          INDEX: '{{ item.index }}'
        run: echo "Item $ITEM at index $INDEX"
Parallel Tasks
Run multiple tasks concurrently:
name: parallel work
tasks:
  - name: parallel parent
    parallel:
      tasks:
        - name: task a
          image: alpine:latest
          run: echo "A done"
        - name: task b
          image: alpine:latest
          run: sleep 2 && echo "B done"
        - name: task c
          image: alpine:latest
          run: echo "C done"
Conditional Execution
Use if to conditionally run tasks:
name: conditional workflow
inputs:
  environment: production
tasks:
  - name: deploy
    if: "{{ job.state == 'SCHEDULED' }}"
    image: alpine:latest
    run: echo "Deploying..."
Retry on Failure
name: with retry
tasks:
  - name: may fail
    retry:
      limit: 3
    image: alpine:latest
    run: ./might-fail.sh
Resource Limits
name: limited task
tasks:
  - name: constrained
    limits:
      cpus: "0.5"
      memory: "256m"
    image: alpine:latest
    run: echo hello
Mounts
Share data between pre/post tasks and main task:
name: with mounts
tasks:
  - name: process
    image: jrottenberg/ffmpeg:3.4-alpine
    run: ffmpeg -i /tmp/input.mov /tmp/output.mp4
    mounts:
      - type: volume
        target: /tmp
    pre:
      - name: download
        image: alpine:latest
        run: wget http://example.com/video.mov -O /tmp/input.mov
Scheduled Job
name: daily backup
schedule:
  cron: "0 2 * * *"
tasks:
  - name: backup
    image: postgres:15
    run: pg_dump -a mydb > /backups/dump.sql
Environment Variable Reference
| Variable | Description |
|---|---|
| `TWERK_OUTPUT` | Write task output here |
Supported Expressions
Works in env values, image, queue, name, var, if:
- `{{ inputs.key }}` — Job inputs
- `{{ secrets.key }}` — Job secrets
- `{{ item.value }}` — Each loop item value
- `{{ item.index }}` — Each loop item index
Built-in functions:
- `fromJSON(string)` — Parse JSON string
- `toJSON(value)` — Convert to JSON
- `sequence(start, stop)` — Generate integer range
- `split(string, delimiter)` — Split string
- `len(array)` — Array length
- `contains(array, item)` — Check membership
Works in if condition:
- `job.state` — Current job state
- `job.id` — Job ID
- `job.name` — Job name
- `task.state` — Current task state
- `task.id` — Task ID
Note: The run field is NOT evaluated — it’s passed as raw shell script to the container.