Introduction

Twerk is a distributed task execution system written in Rust — a port of Tork from Go. It lets you define jobs consisting of multiple tasks, each running inside its own container.

Why Twerk?

  • Horizontally scalable — Add workers to handle more tasks
  • Task isolation — Tasks run in containers with resource limits
  • Multi-runtime — Docker, Podman, or Shell execution
  • Retry with backoff — Configurable retry on failure
  • Scheduled jobs — Cron-based scheduling with pause/resume
  • Secrets management — Auto-redaction of sensitive values
  • REST API — Full API for job, task, queue, node, and user management
  • Health checks — Built-in liveness and readiness probes

Architecture

Client → Coordinator → Broker → Worker → Runtime (Docker/Podman/Shell)
                ↓
            Datastore (PostgreSQL)
  • Coordinator — Receives jobs, schedules tasks, manages state
  • Worker — Executes tasks via the configured runtime
  • Broker — Routes tasks between Coordinator and Workers (RabbitMQ or In-Memory)
  • Datastore — Persists all job, task, and node state (PostgreSQL)

Modes

Mode          Description
standalone    All-in-one: Coordinator + Worker in a single process
coordinator   API server that schedules work (requires workers)
worker        Executes tasks by pulling from broker

Next Steps

Installation

Requirements

  1. Rust 1.75+ — For building from source
  2. Bash-compatible shell — For the zero-dependency shell runtime quick start
  3. Docker or Podman — Optional, for image-based task execution
  4. PostgreSQL — Optional, for persistence
  5. RabbitMQ — Optional, for distributed mode

Download Binary

# Check releases for your platform
curl -L https://github.com/runabol/twerk/releases/latest/download/twerk-linux-x86_64.tar.gz | tar xz
./twerk --help

Build from Source

git clone https://github.com/runabol/twerk.git
cd twerk
cargo build --release -p twerk-cli
./target/release/twerk --help

For a local first run from the repo root, the checked-in config.toml already points Twerk at the in-memory broker/datastore and shell runtime:

./target/release/twerk server start standalone

Set up PostgreSQL

docker run -d \
  --name twerk-postgres \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=twerk \
  -e POSTGRES_USER=twerk \
  -e POSTGRES_DB=twerk \
  postgres:15.3

Run migration:

TWERK_DATASTORE_TYPE=postgres \
TWERK_DATASTORE_POSTGRES_DSN="host=localhost user=twerk password=twerk dbname=twerk port=5432 sslmode=disable" \
./twerk migration

Set up RabbitMQ (Distributed Mode)

docker run -d \
  --name twerk-rabbitmq \
  -p 5672:5672 \
  -p 15672:15672 \
  rabbitmq:3-management

Access management UI at http://localhost:15672 (guest/guest).

Configuration

Twerk is configured via TOML files or environment variables:

# Environment variable format
TWERK_<SECTION>_<KEY>=value

# Examples
TWERK_LOGGING_LEVEL=debug
TWERK_BROKER_TYPE=rabbitmq
TWERK_DATASTORE_TYPE=postgres
TWERK_RUNTIME_TYPE=docker
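The mapping from a dotted config key to its environment variable name is a simple uppercase-and-underscore transform. A small illustration in plain shell (a string transform for clarity only, not a Twerk command):

```shell
# Derive the TWERK_<SECTION>_<KEY> name from a dotted config key.
key="broker.rabbitmq.url"
var="TWERK_$(printf '%s' "$key" | tr 'a-z.' 'A-Z_')"
echo "$var"   # TWERK_BROKER_RABBITMQ_URL
```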

See Configuration for full reference.

Next Steps

Quick Start

Get Twerk running locally with no Postgres, RabbitMQ, Docker, or Podman.

Start Twerk

Use the local-friendly in-memory and shell settings:

TWERK_BROKER_TYPE=inmemory \
TWERK_DATASTORE_TYPE=inmemory \
TWERK_RUNTIME_TYPE=shell \
./twerk server start standalone

If you built from source inside this repository, the checked-in config.toml already uses the same settings, so ./target/release/twerk server start standalone works from the repo root.

Twerk starts on http://localhost:8000.

Create a Job

Create hello-shell.yaml:

name: hello shell
tasks:
  - name: say hello
    run: |
      echo "hello from twerk"

Submit and Wait for Completion

curl -X POST 'http://localhost:8000/jobs?wait=true' \
  -H "Content-Type: text/yaml" \
  --data-binary @hello-shell.yaml

wait=true blocks until the job finishes, which makes the first-run flow much easier to verify.

Inspect the Run

curl http://localhost:8000/jobs
curl http://localhost:8000/jobs/<job-id>/log
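The job id in the log URL comes from the submit response. Assuming the response is JSON carrying an id field (a hypothetical shape, not the documented schema), it could be captured like this:

```shell
# Hypothetical response body; field names are an assumption.
response='{"id":"abc123","state":"COMPLETED"}'
job_id=$(printf '%s' "$response" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')
echo "$job_id"
```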

Health Check

./twerk health
# or
curl http://localhost:8000/health

Distributed Mode

Run coordinator and workers separately when you want Postgres, RabbitMQ, and container-backed tasks:

# Terminal 1: Coordinator
TWERK_DATASTORE_TYPE=postgres \
TWERK_DATASTORE_POSTGRES_DSN="host=localhost user=twerk password=twerk dbname=twerk port=5432 sslmode=disable" \
TWERK_BROKER_TYPE=rabbitmq \
TWERK_BROKER_RABBITMQ_URL="amqp://guest:guest@localhost:5672/" \
./twerk server start coordinator

# Terminal 2: Worker
TWERK_BROKER_TYPE=rabbitmq \
TWERK_BROKER_RABBITMQ_URL="amqp://guest:guest@localhost:5672/" \
TWERK_RUNTIME_TYPE=docker \
./twerk server start worker

Container images require docker or podman. The zero-dependency quick start above uses the shell runtime instead.
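With the docker worker running, jobs can reference container images. A minimal container-backed job, sketched with an illustrative image tag:

```yaml
name: hello docker
tasks:
  - name: say hello
    image: alpine:latest
    run: echo "hello from a container"
```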

Next Steps

Architecture

Components

Coordinator

Tracks jobs, dispatches work to workers, handles retries and failures. Stateless and leaderless; does not run tasks.

Worker

Runs tasks via a runtime (Docker, Podman, or Shell).

Broker

Routes messages between Coordinator and Workers:

  • RabbitMQ — Production-grade message broker
  • In-Memory — For testing and single-node deployments

Datastore

Persists job and task state:

  • PostgreSQL — Production database
  • In-Memory — For testing

Runtime

Execution environment for tasks:

  • Docker — Default, best isolation
  • Podman — Daemonless Docker alternative
  • Shell — Runs on host

Request Flow

Client → Coordinator → Broker → Worker → Runtime (Docker/Podman/Shell)
                ↓
            Datastore
  1. Client submits job via REST API
  2. Coordinator stores job in Datastore
  3. Coordinator publishes tasks to Broker
  4. Worker receives tasks from Broker
  5. Worker executes tasks in containers
  6. Worker reports results back via Broker
  7. Coordinator updates job state in Datastore

Modes

Mode          Coordinator   Worker
standalone    yes           yes
coordinator   yes           no
worker        no            yes

CLI Reference

Commands

twerk server start

Start Twerk in a specific mode.

twerk server start <MODE> [OPTIONS]

Mode          Description
standalone    All-in-one: Coordinator + Worker
coordinator   API server, requires separate workers
worker        Task executor, requires coordinator

Option                   Description                        Default
--hostname <HOSTNAME>    Coordinator hostname for workers   none

Config is loaded from TWERK_CONFIG or the default config search paths. There is no --config CLI flag.

twerk migration

Run database migrations.

twerk migration [OPTIONS]

Option       Description
-y, --yes    Skip confirmation prompt

twerk migration reads the datastore type and Postgres DSN from config or TWERK_* environment variables.

twerk health

Check coordinator health.

twerk health [OPTIONS]

Option                  Description            Default
-e, --endpoint <URL>    Coordinator endpoint   http://localhost:8000

twerk version

Show the current CLI version.

twerk version [--json]

Text-mode version discovery commands produce clean output: they print only the version line to stdout, keep stderr empty, and exit 0.

Supported forms:

  • twerk --version → twerk <VERSION>
  • twerk version → twerk <VERSION>
  • twerk run --version → twerk-run <VERSION>
  • twerk migration --version → twerk-migration <VERSION>
  • twerk health --version → twerk-health <VERSION>

Top-Level Flags

Option       Description
--json       Emit machine-readable JSON for help, version, parse errors, and command failures on stdout
--help       Show CLI help
--version    Show the current version

JSON Behavior

  • Help discovery commands such as twerk --json, twerk --json --help, twerk help --json, and twerk run --json --help return JSON with a rendered content field.
  • Version discovery commands such as twerk --json --version and twerk version --json return JSON on stdout, keep stderr empty, and exit 0.
  • JSON parse failures keep Clap exit code 2 and write structured error JSON to stdout.
  • JSON command validation and runtime failures exit 1, write structured error JSON to stdout, and keep stderr empty.

Environment Variables

All configuration can be set via environment variables:

TWERK_<SECTION>_<KEY>=value

Examples:

  • TWERK_LOGGING_LEVEL=debug
  • TWERK_BROKER_TYPE=rabbitmq
  • TWERK_DATASTORE_TYPE=postgres
  • TWERK_RUNTIME_TYPE=docker

See Configuration for full reference.

YAML Language Spec

This document defines the YAML shapes that Twerk currently accepts at the parser boundary, using shipped examples and core Rust schema types as evidence.

Scope

Twerk accepts two distinct YAML document families:

  1. Native job documents — parsed as twerk_core::job::Job
  2. ASL-style state machines — parsed as twerk_core::asl::machine::StateMachine

Do not treat them as the same grammar. They live side-by-side in examples/, but they are different schemas.

Parser Contract

At the HTTP/parser boundary (crates/twerk-web/src/api/yaml.rs):

  • Empty bodies are rejected.
  • Bodies larger than 512 KiB are rejected.
  • NUL bytes are rejected.
  • Duplicate YAML keys are rejected.
  • Parser budgets are enforced with max_depth = 64 and max_nodes = 10_000.
  • YAML must be valid UTF-8.
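The 512 KiB body limit can be checked client-side before submitting. A sketch using only standard shell tools (the job body here is a stand-in):

```shell
# Check a job body against the documented 512 KiB limit before POSTing.
limit=$((512 * 1024))
body='name: hello
tasks:
  - name: t
    run: echo hi'
size=$(printf '%s' "$body" | wc -c | tr -d ' ')
if [ "$size" -le "$limit" ]; then
  echo "ok: $size bytes"
else
  echo "too large: $size bytes" >&2
fi
```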

Native Job Document

Backed by crates/twerk-core/src/job.rs and crates/twerk-core/src/task.rs.

Minimal shape

Example-backed top-level fields include:

  • name
  • description
  • inputs
  • output
  • tasks

Examples:

  • examples/hello.yaml
  • examples/hello-shell.yaml
  • examples/bash-pipeline.yaml
  • examples/split_and_stitch.yaml

Tasks

Common task fields evidenced in shipped examples:

  • name
  • var
  • image
  • run
  • entrypoint
  • env
  • files
  • retry
  • timeout
  • pre
  • post
  • mounts
  • parallel
  • each
  • subjob

Examples:

  • simple task: examples/hello.yaml
  • retry: examples/retry.yaml, examples/bash-retry.yaml
  • timeout: examples/timeout.yaml
  • map-heavy task: examples/split_and_stitch.yaml

Maps

inputs

Top-level string map consumed by expressions like {{ inputs.key }}.

Evidence:

  • examples/split_and_stitch.yaml

env

Task-level string map. Appears on normal tasks and nested tasks.

Evidence:

  • examples/each.yaml
  • examples/bash-each.yaml
  • examples/split_and_stitch.yaml

files

Task-level string map from filename to inline file body.

Evidence:

  • examples/split_and_stitch.yaml

Not example-backed yet

The core schema supports additional top-level maps/collections, but shipped examples/*.yaml do not currently prove the user-authored syntax of all of them:

  • secrets
  • tags
  • webhooks
  • schedule
  • defaults

They may exist in code and broader docs, but they are not all evidenced by shipped example YAML.

Control Structures

each

each contains:

  • list
  • optional var
  • nested task

Example-backed iteration placeholder forms currently seen in examples include legacy underscore aliases:

  • item_index
  • item_value
  • myitem_index
  • myitem_value
  • num_index
  • num_value
  • item_value_start
  • item_value_length

Evidence:

  • examples/each.yaml
  • examples/bash-each.yaml
  • examples/split_and_stitch.yaml
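A hedged sketch of how the underscore aliases appear in task env values, combining the fields listed above (the list expression and exact alias semantics are inferred from the example names, not verified behavior):

```yaml
tasks:
  - name: loop over items
    each:
      list: '{{ fromJSON(inputs.items) }}'
      task:
        env:
          INDEX: '{{ item_index }}'
          VALUE: '{{ item_value }}'
        run: echo "$INDEX -> $VALUE"
```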

parallel

parallel contains a nested tasks list.

Evidence:

  • examples/parallel.yaml
  • examples/pokemon-benchmark.yaml
  • examples/twerk-massive-parallel.yaml
  • examples/subjob.yaml
  • examples/bash-subjob.yaml

subjob

subjob embeds another native-job-like task list and may include name and output.

Evidence:

  • examples/subjob.yaml
  • examples/bash-subjob.yaml

Retry and Timeout

Native task retry shape evidenced in examples:

retry:
  limit: <integer>

Timeouts are evidenced as duration strings:

  • 5s
  • 120s

Evidence:

  • examples/retry.yaml
  • examples/bash-retry.yaml
  • examples/timeout.yaml
  • examples/split_and_stitch.yaml

ASL-Style State Machines

Backed by crates/twerk-core/src/asl/.

Top-level ASL fields evidenced in shipped examples:

  • comment
  • startAt
  • states

Evidence:

  • examples/asl-hello.yaml
  • examples/asl-task-retry.yaml

ASL state forms currently evidenced by examples

  • type: pass
  • type: task
  • next
  • end: true
  • task-state retry list with:
    • errorEquals
    • intervalSeconds
    • maxAttempts
    • backoffRate
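Combining the evidenced fields, a minimal ASL-style machine might look like the following (the state name, error string, and numeric values are illustrative, and any task-state fields beyond those listed above are omitted):

```yaml
comment: minimal ASL-style sketch
startAt: hello
states:
  hello:
    type: task
    retry:
      - errorEquals: ["States.ALL"]
        intervalSeconds: 2
        maxAttempts: 3
        backoffRate: 2.0
    end: true
```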

Important Gaps and Ambiguities

run interpolation docs are inconsistent

website/src/examples.md says run is passed raw and is not evaluated. But shipped examples include run values containing {{ ... }} expressions. This spec does not claim runtime interpolation semantics beyond what the parser accepts: run is parsed as a string, and examples prove that strings containing template markers are accepted.

Mixed example families

examples/ contains both native job YAML and ASL YAML. Tooling and tests must parse them into the correct target type. Parsing every example as Job is not a valid contract.

Example-backed vs code-backed

This spec is intentionally conservative. If a shape is supported in code but not evidenced by shipped examples, call that out explicitly instead of bluffing.

  1. website/src/QUICKSTART_YAML.md for a quick tour
  2. this document for parser-backed shape constraints
  3. website/src/examples.md for usage-oriented examples

Jobs

A job is a collection of tasks executed in order.

Minimal Example

name: my job
tasks:
  - name: hello
    image: alpine:latest
    run: echo hello

Complete Job Reference

# ─── Identification ───────────────────────────────────────────────────────────
name: my job                           # Job name
description: Optional description       # Job description
tags: [tag1, tag2]                    # Metadata tags

# ─── Input & Secrets ─────────────────────────────────────────────────────────
inputs:                               # Non-sensitive inputs
  key: value
secrets:                              # Sensitive values (auto-redacted)
  api_key: secret123

# ─── Task Defaults ────────────────────────────────────────────────────────────
defaults:                             # Applied to all tasks
  retry:
    limit: 3                         # Max retry attempts
  limits:
    cpus: "1"                       # CPU limit
    memory: "512m"                  # Memory limit
  timeout: 10m                      # Task timeout
  queue: default                      # Queue name
  priority: 5                        # 0-9, lower values run first

# ─── Tasks ───────────────────────────────────────────────────────────────────
tasks:
  - name: first task
    image: alpine:latest
    run: echo hello

# ─── Scheduling ───────────────────────────────────────────────────────────────
schedule:
  cron: "0 2 * * *"                 # Cron expression

# ─── Notifications ───────────────────────────────────────────────────────────
webhooks:
  - url: https://example.com/hook
    event: job.StateChange           # or task.StateChange
    if: "{{ job.state == 'COMPLETED' }}"  # Conditional

# ─── Access Control ───────────────────────────────────────────────────────────
permissions:
  - role: admin                      # Or: user: username
  - user: someuser

# ─── Cleanup ──────────────────────────────────────────────────────────────────
autoDelete:
  after: 24h                         # Delete after completion

Job States

State        Description
PENDING      Created, not yet scheduled
SCHEDULED    Tasks queued
RUNNING      Executing
COMPLETED    All tasks finished
FAILED       Task failed
CANCELLED    Manually cancelled

Cron Syntax

┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of week (0-6)
│ │ │ │ │
* * * * *

Examples:

  • 0 * * * * — Every hour
  • 0 2 * * * — Daily at 2 AM
  • 0/5 * * * * — Every 5 minutes
  • 0 0 * * 0 — Weekly on Sunday

Next Steps

Tasks

Tasks are the unit of execution in Twerk.

Minimal Task

- name: my task
  image: alpine:latest
  run: echo hello

What Fields Support Expressions?

Expressions using {{ }} syntax are supported in these fields:

  • name — Task name
  • image — Container image
  • var — Output variable name
  • queue — Target queue
  • if — Conditional execution
  • env values — Environment variables
  • files keys/values — Files to create

Note: The run field is NOT evaluated — it’s passed as raw shell script.

Complete Task Reference

# ─── Identification ───────────────────────────────────────────────────────────
name: my task                        # Supports {{ }} expressions
description: Optional description    # Plain text only

# ─── Container ───────────────────────────────────────────────────────────────
image: ubuntu:mantic                 # Supports {{ }} expressions
cmd: ["/bin/sh", "-c"]             # Override entrypoint
entrypoint: ["/bin/sh", "-c"]       # Same as cmd
run: |
  echo hello                        # RAW shell script - NO expression evaluation

# ─── Environment ──────────────────────────────────────────────────────────────
env:                                 # Values support {{ }} expressions
  KEY: value
  TEMPLATE: '{{ inputs.key }}'      # ✓ Works

files:                               # Keys and values support {{ }}
  config.json: '{"key": "{{ inputs.value }}"}'  # ✓ Works

# ─── Output ─────────────────────────────────────────────────────────────────
var: output_key                      # Supports {{ }} - store task output
                                     # Access via {{ tasks.output_key }}

# ─── Conditions ──────────────────────────────────────────────────────────────
if: "{{ job.state == 'SCHEDULED' }}"  # ✓ Works in if field

# ─── Routing ────────────────────────────────────────────────────────────────
queue: default                       # Supports {{ }} expressions
priority: 5

# ─── Execution Control ───────────────────────────────────────────────────────
timeout: 5m
retry:
  limit: 3

# ─── Resources ────────────────────────────────────────────────────────────────
limits:
  cpus: "0.5"
  memory: "256m"

gpus: all
workdir: /app

# ─── Mounts ──────────────────────────────────────────────────────────────────
mounts:
  - type: volume
    target: /data

# ─── Pre/Post Tasks ─────────────────────────────────────────────────────────
pre:
  - name: setup
    image: alpine:latest
    run: echo setup

post:
  - name: cleanup
    image: alpine:latest
    run: echo cleanup

# ─── Parallel ────────────────────────────────────────────────────────────────
parallel:
  tasks:
    - name: a
      image: alpine:latest
      run: echo A
    - name: b
      image: alpine:latest
      run: echo B

# ─── Each (Loop) ────────────────────────────────────────────────────────────
each:
  list: '{{ fromJSON(inputs.items) }}'  # Expression for list
  concurrency: 2
  task:
    image: alpine:latest
    env:
      VALUE: '{{ item.value }}'        # ✓ Works in each tasks
      INDEX: '{{ item.index }}'
    run: echo $VALUE

Supported Expression Syntax

Input/Secret References

env:
  VALUE: '{{ inputs.my_input }}'      # Job input
  SECRET: '{{ secrets.my_secret }}'   # Job secret (auto-redacted)

Each Loop Variables

each:
  task:
    env:
      VALUE: '{{ item.value }}'        # Current item value
      INDEX: '{{ item.index }}'       # Current index (0-based)

Built-in Functions

env:
  JSON: '{{ fromJSON(inputs.json_string) }}'
  SEQ: '{{ sequence(1, 5) }}'         # [1, 2, 3, 4]
  LEN: '{{ len(tasks.results) }}'
  SPLIT: '{{ split("a,b,c", ",") }}'  # ["a", "b", "c"]

Conditional with if

if: "{{ job.state == 'SCHEDULED' }}"  # Job must be scheduled
if: "{{ job.state != 'FAILED' }}"      # Job not failed

Task States

State        Description
CREATED      Task created
PENDING      Queued for execution
SCHEDULED    Assigned to worker
RUNNING      Executing
COMPLETED    Finished successfully
FAILED       Failed
CANCELLED    Cancelled
STOPPED      Stopped
SKIPPED      Skipped (conditional)

Next Steps

Runtimes

Twerk supports multiple execution environments for tasks.

Docker (Default)

Tasks run in isolated Docker containers using the bollard crate.

[runtime]
type = "docker"

Or via environment:

TWERK_RUNTIME_TYPE=docker

Docker-specific options:

[runtime.docker]
config = ""              # Path to Docker config
privileged = false        # Privileged container mode
image.ttl = "24h"        # Image cache TTL

Podman

Daemonless Docker alternative:

[runtime]
type = "podman"

Or via environment:

TWERK_RUNTIME_TYPE=podman

Podman-specific options:

[runtime.podman]
privileged = false
host.network = false     # Use host network

Shell

Run directly on the host (for development/testing):

[runtime]
type = "shell"

Or via environment:

TWERK_RUNTIME_TYPE=shell

Warning: Shell runtime executes arbitrary code on the host. Use only in trusted environments.

Shell-specific options:

[runtime.shell]
cmd = ["bash", "-c"]     # Shell command
uid = "1000"             # Run as specific user
gid = "1000"            # Run with specific group

Environment Variables in Tasks

Variable         Description
TWERK_OUTPUT     Write task output here
TWERK_TASK_ID    Current task ID
TWERK_JOB_ID     Current job ID

Next Steps

Configuration

Twerk reads TOML configuration from files and environment variables.

Config File Locations

Twerk checks these locations in order:

  1. ./config.local.toml
  2. ./config.toml
  3. ./config.local.yaml
  4. ./config.yaml
  5. ./config.local.yml
  6. ./config.yml
  7. ~/twerk/config.toml
  8. ~/twerk/config.yaml
  9. /etc/twerk/config.toml
  10. /etc/twerk/config.yaml

.yaml and .yml filenames are legacy compatibility names only. Their contents must still be valid TOML.

Or specify a file directly:

TWERK_CONFIG=/path/to/config.toml twerk server start standalone

Environment Variables

Override any setting with:

TWERK_<SECTION>_<KEY>=value

Example: TWERK_LOGGING_LEVEL=debug

Local Standalone Example

[logging]
level = "info"
format = "pretty"

[broker]
type = "inmemory"

[datastore]
type = "inmemory"

[coordinator]
address = "localhost:8000"

[worker]
address = "localhost:8001"

[runtime]
type = "shell"

[runtime.shell]
cmd = ["bash", "-c"]
uid = ""
gid = ""

This is the same shape as the repo-root config.toml used for the primary local docs journey.

Distributed Example

[broker]
type = "rabbitmq"

[broker.rabbitmq]
url = "amqp://guest:guest@localhost:5672/"

[datastore]
type = "postgres"

[datastore.postgres]
dsn = "host=localhost user=twerk password=twerk dbname=twerk port=5432 sslmode=disable"

[runtime]
type = "docker"

Use docker or podman when your task definitions include container images.

For a fuller reference, see configs/sample.config.toml in the repository.

Environment Variable Reference

Config                    Environment Variable
logging.level             TWERK_LOGGING_LEVEL
logging.format            TWERK_LOGGING_FORMAT
broker.type               TWERK_BROKER_TYPE
broker.rabbitmq.url       TWERK_BROKER_RABBITMQ_URL
datastore.type            TWERK_DATASTORE_TYPE
datastore.postgres.dsn    TWERK_DATASTORE_POSTGRES_DSN
runtime.type              TWERK_RUNTIME_TYPE
coordinator.address       TWERK_COORDINATOR_ADDRESS
worker.address            TWERK_WORKER_ADDRESS

Next Steps

REST API

Base URL: http://localhost:8000

Health

GET /health
{ "status": "UP", "version": "0.1.0" }

Jobs

Method   Path                  Description
POST     /jobs                 Submit a job
POST     /jobs?wait=true       Submit and block until completion
GET      /jobs                 List jobs
GET      /jobs/{id}            Get job details
GET      /jobs/{id}/log        Fetch job logs
POST     /jobs/{id}/cancel     Cancel a job
PUT      /jobs/{id}/cancel     Cancel a job
PUT      /jobs/{id}/restart    Restart a job

Example request body:

name: hello shell
tasks:
  - name: hello
    run: echo "hello from twerk"

Tasks

Method   Path               Description
GET      /tasks/{id}        Get task details
GET      /tasks/{id}/log    Fetch task logs

Scheduled Jobs

Method   Path                          Description
POST     /scheduled-jobs               Create a scheduled job
GET      /scheduled-jobs               List scheduled jobs
GET      /scheduled-jobs/{id}          Get a scheduled job
PUT      /scheduled-jobs/{id}/pause    Pause a scheduled job
PUT      /scheduled-jobs/{id}/resume   Resume a scheduled job
DELETE   /scheduled-jobs/{id}          Delete a scheduled job

Queues

Method   Path              Description
GET      /queues           List queues
GET      /queues/{name}    Get queue details
DELETE   /queues/{name}    Delete a queue

System

Method   Path           Description
GET      /nodes         List nodes
GET      /nodes/{id}    Get node details
GET      /metrics       Fetch metrics
POST     /users         Create a user

Triggers

Method   Path                     Description
GET      /api/v1/triggers         List triggers
POST     /api/v1/triggers         Create a trigger
GET      /api/v1/triggers/{id}    Get a trigger
PUT      /api/v1/triggers/{id}    Update a trigger
DELETE   /api/v1/triggers/{id}    Delete a trigger

OpenAPI

GET /openapi.json

CLI Integration

# Submit and wait for completion
curl -X POST 'http://localhost:8000/jobs?wait=true' \
  -H "Content-Type: text/yaml" \
  --data-binary @examples/hello-shell.yaml

# Inspect the resulting job and logs
curl http://localhost:8000/jobs/$JOB_ID
curl http://localhost:8000/jobs/$JOB_ID/log

Next Steps

Examples

Real-world workflow examples.

Simple Job

name: hello world
tasks:
  - name: say hello
    image: ubuntu:mantic
    run: echo hello world

Using Inputs

Inputs can be used in env values and other fields (but NOT in run):

name: input example
inputs:
  message: hello world
  count: 5
tasks:
  - name: use inputs
    image: alpine:latest
    env:
      MESSAGE: '{{ inputs.message }}'
      COUNT: '{{ inputs.count }}'
    run: |
      for i in $(seq 1 $COUNT); do
        echo "$MESSAGE"
      done

Each Task (Loop)

Use each to run a task for each item in a list:

name: process items
inputs:
  items: '[1, 2, 3, 4, 5]'
tasks:
  - name: process each
    each:
      list: '{{ fromJSON(inputs.items) }}'
      concurrency: 2
      task:
        image: alpine:latest
        env:
          ITEM: '{{ item.value }}'
          INDEX: '{{ item.index }}'
        run: echo "Item $ITEM at index $INDEX"

Parallel Tasks

Run multiple tasks concurrently:

name: parallel work
tasks:
  - name: parallel parent
    parallel:
      tasks:
        - name: task a
          image: alpine:latest
          run: echo "A done"
        - name: task b
          image: alpine:latest
          run: sleep 2 && echo "B done"
        - name: task c
          image: alpine:latest
          run: echo "C done"

Conditional Execution

Use if to conditionally run tasks:

name: conditional workflow
inputs:
  environment: production
tasks:
  - name: deploy
    if: "{{ job.state == 'SCHEDULED' }}"
    image: alpine:latest
    run: echo "Deploying..."

Retry on Failure

name: with retry
tasks:
  - name: may fail
    retry:
      limit: 3
    image: alpine:latest
    run: ./might-fail.sh

Resource Limits

name: limited task
tasks:
  - name: constrained
    limits:
      cpus: "0.5"
      memory: "256m"
    image: alpine:latest
    run: echo hello

Mounts

Share data between pre/post tasks and main task:

name: with mounts
tasks:
  - name: process
    image: jrottenberg/ffmpeg:3.4-alpine
    run: ffmpeg -i /tmp/input.mov /tmp/output.mp4
    mounts:
      - type: volume
        target: /tmp
    pre:
      - name: download
        image: alpine:latest
        run: wget http://example.com/video.mov -O /tmp/input.mov

Scheduled Job

name: daily backup
schedule:
  cron: "0 2 * * *"
tasks:
  - name: backup
    image: postgres:15
    run: pg_dump -a mydb > /backups/dump.sql

Environment Variable Reference

Variable       Description
TWERK_OUTPUT   Write task output here

Supported Expressions

Works in env values, image, queue, name, var, if:

  • {{ inputs.key }} — Job inputs
  • {{ secrets.key }} — Job secrets
  • {{ item.value }} — Each loop item value
  • {{ item.index }} — Each loop item index

Built-in functions:

  • fromJSON(string) — Parse JSON string
  • toJSON(value) — Convert to JSON
  • sequence(start, stop) — Generate integer range
  • split(string, delimiter) — Split string
  • len(array) — Array length
  • contains(array, item) — Check membership
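toJSON and contains are listed above but not exemplified elsewhere on this page; a hedged sketch by analogy with the other built-ins (argument order and return values are assumptions):

```yaml
env:
  AS_JSON: '{{ toJSON(inputs) }}'
  HAS_THREE: '{{ contains(fromJSON(inputs.items), 3) }}'
```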

Works in if condition:

  • job.state — Current job state
  • job.id — Job ID
  • job.name — Job name
  • task.state — Current task state
  • task.id — Task ID

Note: The run field is NOT evaluated — it’s passed as raw shell script to the container.

See Also