Pipelines

Composing the three core concepts (tasks, resources, and jobs) together results in a pipeline.

A pipeline is the result of configuring jobs and connecting them to resources. Once configured, a pipeline takes on a life of its own, continuously detecting resource versions and scheduling jobs over time. The mechanics of this are described below.

Scheduling with Resources

Resources are at the core of Concourse and are used to model all external dependencies that enter or exit your pipeline. This lets Concourse ensure that builds are repeatable and recorded.

Resources are also the only way to convey the progression of artifacts through a pipeline. The order in which jobs execute is determined by their dependencies on resources that made it through upstream jobs. This means you never actually configure "run job A and then job B"; instead, you express the dependency between them via the data flowing through the pipeline as resources.
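For example, here is a minimal sketch of two jobs wired together through a shared resource (all names and the repository URI are placeholders, not a prescribed layout):

  resources:
  - name: my-repo
    type: git
    source:
      uri: https://github.com/example/my-repo.git
      branch: main

  jobs:
  - name: job-a
    plan:
    - get: my-repo
      trigger: true

  - name: job-b
    plan:
    - get: my-repo
      passed: [job-a]   # job-b only sees versions that made it through job-a
      trigger: true

Nothing here says "run job-a and then job-b"; the ordering falls out of job-b's passed constraint on my-repo.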

There are three core mechanics of a pipeline: detecting versions of its resources, determining the valid sets of versions for its jobs, and optionally auto-triggering jobs when new candidates appear.

Collecting Versions

Each pipeline automatically collects versions of each of its resources in chronological order. For example, with the Git resource, each commit to the repository is a version.

New resource versions are discovered by a pipeline in two ways:

  • A new version can be found in an external resource by its check component, which runs periodically for every resource in your pipeline and discovers new sets of versions.

  • A build can produce a new version of a resource by running a put step. This version is automatically detected and becomes available in later build steps and later pipeline jobs.

Both of these combined produce a stream of versions for a particular resource.
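As a hedged sketch of both mechanisms (names are placeholders, and my-repo is the resource from the earlier example): the check interval can be tuned per resource with check_every, while a put step both creates a new version and makes it available immediately downstream.

  resources:
  - name: my-release
    type: git
    source:
      uri: https://github.com/example/my-release.git
    check_every: 1m                   # how often the check component polls for new versions

  jobs:
  - name: ship
    plan:
    - get: my-repo
    - put: my-release                 # creates a new version, detected immediately
      params: {repository: my-repo}   # without waiting for a periodic check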

Filtering Versions

When writing a job, its dependencies are declared via get steps. The set of versions available to a get step is filtered down by three things:

  • A passed constraint on the step itself ensures that only versions that have passed a set of jobs will be considered as inputs. This means that they were either an input to a successful build of that job or a version that was created (via put) by a successful build of that job.

  • Versions of resources can be manually disabled if you know that a version is "bad" (i.e. backwards-incompatible) in a way that your pipeline itself did not discover.

  • The version to fetch may be pinned to a specific version or instructed to pick the next available one by configuring version on the step.
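For instance, a get step using these knobs might look like the following sketch (the upstream job name and the pinned ref are assumptions; disabling a version happens in the web UI rather than in configuration):

  plan:
  - get: my-repo
    passed: [unit-tests]   # assumption: an upstream job named unit-tests
    version: every         # consume every valid version in order ('latest' is the
                           # default; a specific pin looks like version: {ref: 5c9b2b5})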

Putting these constraints together, the following table shows how the versions for a job's get step are determined, filtering the full version stream down to the valid versions:

  Version Stream    Valid Versions
  8                 8
  7                 7
  6                 6
  5                 (skipped)
  4                 (skipped)
  3                 3
  2                 (skipped)
  1                 1

Note that with multiple get steps in a job's build plan, their passed constraints will be correlated, and only sets of versions that satisfy all constraints at once will be considered. The semantics here are nuanced but should quickly become pretty intuitive; see the docs on passed for more information.
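For example, in the sketch below (hypothetical resource and job names), only combinations of my-repo and my-image versions that went through a successful build of integration together will be considered as candidates:

  plan:
  - get: my-repo
    passed: [build, integration]
  - get: my-image
    passed: [integration]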

Triggering Jobs

Jobs can either be triggered manually, by clicking the "+" button on the build or job page, or automatically when their dependencies change.

By default, jobs won't automatically run when a new set of candidate versions becomes available. This behavior can be changed by adding trigger: true to any of the get steps in your plan. With it set, a build is scheduled automatically whenever a new set of candidates is discovered and the version for that get step is new. This is typically used throughout the jobs in a pipeline.
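A minimal sketch (the resource name is a placeholder):

  plan:
  - get: my-repo
    trigger: true   # automatically schedule a build when a new candidate version of my-repo appears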

Note that only one build will be automatically scheduled at a time, and the input set is not determined until as late as possible (just before the build starts). This keeps builds from piling up as jobs with varying duration run throughout your pipeline. This also means that if you manually trigger a job you can trust that a redundant build won't be enqueued later (barring the fundamental race of a human competing with the automatic trigger).

Applying the Brakes

Sometimes your house just starts to burn down. It's no one's fault, but you want it to stop. In Concourse, there are various levels of "panic buttons" that let you deal with emergent behavior such as backwards incompatibility or your pipeline becoming sentient.

Pausing Resources

A resource check can be paused on its resource page. While paused, the check component will not discover new versions of the resource. However, resource versions that were created within the pipeline (via put) will still propagate.

Pausing a resource and disabling versions (described next) are useful together to stop broken versions of upstream resources from entering your pipeline, or to remove them if they have already been ingested. Broken versions can be disabled, and the resource can be paused until the upstream product is fixed or until your project adapts to the new upstream dependency.

Disabling a Resource Version

Versions can be explicitly invalidated by disabling them on their resource page. Disabling a version will cause it to never be used as an input to a build. If the only otherwise-valid sets of input versions include a disabled version, a build will not run.

This feature is useful for removing broken versions from the set of valid inputs to your pipeline. For example, if an upstream team unintentionally broke an external interface to their service and then fixed it in the next version, you could simply disable the broken version.

Pausing Jobs

Jobs can be paused to stop them from running even if there are new resource versions that would otherwise cause them to start. If new versions are available, a build will be queued up, causing the job to pulse grey, but it will never start while the job is paused.

This can be useful to observe the results of a previously failed build that affected an external environment before the next run of the job starts and wipes out any evidence of the problem.

Pausing Pipelines

An entire pipeline can be paused, which has the same effect as pausing every resource and job in that pipeline: no resources will be checked and no new builds will start.

This should be used when you need to stop everything quickly. Normally this is when something bad is about to happen or when you don't care about that pipeline right now.