Each job has a single build plan. When a build of the job is created, the plan determines what happens.

A new build of the job is scheduled whenever get steps with trigger: true have new versions available.

To visualize the job in the pipeline, resources that appear as get steps are drawn as inputs, and resources that appear as put steps are drawn as outputs.
A simple unit test job may look something like:
```yaml
name: banana-unit
plan:
- get: banana
  trigger: true
- task: unit
  file: banana/task.yml
```
When new versions of banana are detected, a new build of banana-unit will be scheduled, because we've set trigger: true.
Jobs can depend on resources that are produced by or pass through upstream jobs, by configuring passed: [job-a, job-b] on the get step.
Putting these pieces together, if we were to propagate
the above example into an integration suite with another
component (pretending we also defined its
apple-unit job), the
configuration for the integration job may look something like:
```yaml
name: fruit-basket-integration
plan:
- aggregate:
  - get: banana
    trigger: true
    passed: [banana-unit]
  - get: apple
    trigger: true
    passed: [apple-unit]
  - get: integration-suite
    trigger: true
- task: integration
  file: integration-suite/task.yml
```
Note the use of the
aggregate step to
collect multiple inputs at once.
With this example we've configured a tiny pipeline that will automatically run unit tests for two components, and continuously run integration tests against whichever versions pass both unit tests.
This can be further chained into later "stages" of your pipeline; for example, you may want to continuously deliver an artifact built from whichever components pass fruit-basket-integration.
To push artifacts, you would use a put step that targets the destination resource. For example:
```yaml
name: deliver-food
plan:
- aggregate:
  - get: banana
    trigger: true
    passed: [fruit-basket-integration]
  - get: apple
    trigger: true
    passed: [fruit-basket-integration]
  - get: baggy
    trigger: true
- task: shrink-wrap
  file: baggy/shrink-wrap.yml
- put: bagged-food
  params:
    bag: shrink-wrap/bagged.tgz
```
This presumes that there's a bagged-food resource defined, which understands that the bag parameter points to a file to ship up to the resource's location.
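For reference, the bagged-food resource would be declared alongside the pipeline's other resources. A minimal sketch of that declaration, where the resource type name and its source fields are hypothetical (the doc leaves the resource's implementation unspecified):

```yaml
resources:
- name: bagged-food
  type: food-shipper    # hypothetical resource type that accepts a `bag` param on put
  source:
    destination: ...    # wherever the shrink-wrapped artifact should be shipped
```

The put step in the job above then refers to this resource by name, passing bag as a param that the resource type interprets.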
Note that both banana and apple list the same job as an upstream dependency. This guarantees that
deliver-food will only
trigger when a version of both of these dependencies pass through the same
build of the integration job (and transitively, their individual unit jobs).
This prevents bad apples or bruised bananas from being delivered. (I'm sorry.)
For a reference on each type of step, read on.
get: fetch a resource
put: update a resource
task: execute a task
aggregate: run steps in parallel
do: run steps in series
on_success: perform a step when another succeeds
on_failure: perform a step when another fails
ensure: guarantee execution of a step
tags: perform a step on certain workers
try: permit failure of a step
timeout: enforce a time limit on a step
attempts: retry a step
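As a sketch of how the modifier steps above compose, several of them can wrap a single step in a plan. The job name, task files, and specific timeout/attempt values here are hypothetical:

```yaml
name: flaky-suite
plan:
- get: repo
  trigger: true
- task: flaky-tests
  file: repo/flaky-tests.yml  # hypothetical task file
  timeout: 30m                # fail the step if it runs longer than 30 minutes
  attempts: 3                 # retry the task up to 3 times before failing
  on_failure:                 # runs only if the task (after all attempts) fails
    put: failure-alert        # hypothetical notification resource
  ensure:                     # always runs, whether the task succeeds or fails
    task: cleanup
    file: repo/cleanup.yml
```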