Composing the three core Concepts together results in a pipeline. A pipeline is the result of configuring jobs and resources together. When you configure a pipeline, it takes on a life of its own, continuously detecting resource versions and scheduling jobs over time. The mechanics of this are described below.
Resources are at the core of Concourse and are used to model all external dependencies that enter or exit your pipeline. This lets Concourse ensure that builds are repeatable and recorded.
Resources are also the only way to convey progression of artifacts through a pipeline. The order that jobs execute is determined by their dependencies on resources that made it through upstream jobs. This means you never actually configure "run job A and then job B" - instead you express the dependency between them via the data that's progressing through as a resource.
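As a sketch of this, the ordering between two jobs is expressed entirely through the resource that flows between them (the job and resource names below are hypothetical):

```yaml
resources:
- name: my-repo              # hypothetical git resource shared by both jobs
  type: git
  source:
    uri: https://example.com/my-repo.git

jobs:
- name: unit                 # consumes versions of my-repo directly
  plan:
  - get: my-repo

- name: integration          # runs "after" unit, purely via the passed constraint
  plan:
  - get: my-repo
    passed: [unit]           # only versions that made it through unit are valid here
```

Note that nothing says "run unit, then integration"; the dependency is implied by which versions of my-repo each job is willing to accept.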
There are three core mechanics of a pipeline: detecting versions of its resources, determining the valid sets of versions for its jobs, and optionally auto-triggering jobs when new candidates appear.
New resource versions are discovered by a pipeline in two ways:

First, each resource is checked periodically, and any new versions found become candidates for the pipeline's jobs.

Second, a build can produce a new version of a resource by running a put step. This version is automatically detected and becomes available in later build steps and later pipeline jobs.
Both of these combined produce a stream of versions for a particular resource.
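For the periodic-check side, the checking interval can be tuned per resource with check_every; the resource below is a hypothetical example:

```yaml
resources:
- name: upstream-release
  type: git
  source:
    uri: https://example.com/upstream.git
  check_every: 10m   # check for new versions every 10 minutes (the default is 1m)
```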
A passed constraint on the get step ensures that only versions that have passed a given set of jobs will be considered as inputs. This means they were either an input to a successful build of that job, or they were created (via put) by a successful build of that job.
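A get step can also require that a version passed several jobs at once, producing a fan-in point; the job names here are hypothetical:

```yaml
plan:
- get: my-repo
  passed: [unit, lint]   # only versions that succeeded in BOTH jobs are candidates
```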
Versions of resources can be manually disabled if you know that a version is "bad" (e.g. backwards-incompatible) in a way that your pipeline itself did not discover.
The version to fetch may be pinned to a specific version, or the step may be instructed to pick the next available one, by configuring version on the get step.
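As a sketch of the version field's options (the default behaviour, fetching the latest version, needs no configuration; the pinned ref value below is made up):

```yaml
# Walk through every available version in order, rather than just the latest:
plan:
- get: my-repo
  version: every

# Or pin the step to one specific version (this ref is hypothetical):
plan:
- get: my-repo
  version: {ref: abc1234}
```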
Putting these constraints together, the versions for a job's get step are determined by filtering the resource's full version stream down to the set of valid versions that satisfy the step's constraints.
Note that with multiple get steps in a job's build plan, their passed constraints will be correlated, and only sets of versions that satisfy all constraints at once will be considered. The semantics here are nuanced but should quickly become intuitive; see the docs on the passed constraint for more detail.
Jobs can be triggered either manually, by clicking the "+" sign on the build or job page, or automatically, when their dependencies change.
By default, jobs won't automatically run when a new set of candidate
versions becomes available. This behaviour can be changed by adding
trigger: true to any of the
get steps in your plan. With this set, when a new set of candidates is discovered and the version for the step is new, a build will be scheduled automatically. This is
typically used throughout the jobs in a pipeline.
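Automatic triggering is just a flag on the get step; for example (names hypothetical):

```yaml
jobs:
- name: integration
  plan:
  - get: my-repo
    trigger: true     # schedule a new build whenever a new candidate version appears
    passed: [unit]
```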
Note that only one build will be automatically scheduled at a time, and the input set is not determined until as late as possible (just before the build starts). This keeps builds from piling up as jobs with varying duration run throughout your pipeline. This also means that if you manually trigger a job you can trust that a redundant build won't be enqueued later (barring the fundamental race of a human competing with the automatic trigger).
Sometimes your house just starts to burn down. It's no one's fault, but you want it to stop. In Concourse, there are various levels of "panic buttons" that let you deal with emergent behavior such as backwards incompatibility or your pipeline becoming sentient.
A resource's checking can be paused from its resource page. While paused, the check component will not discover new versions of the resource; however, versions created within the pipeline (via put steps) will still propagate.
These two features are useful for keeping broken versions of upstream resources from entering your pipeline, or for removing them if they have already been ingested. Broken versions can be disabled, and the resource can be paused, until the upstream product is fixed or until your project adapts to the new upstream dependency.
Versions can be explicitly invalidated by disabling them on their resource page. A disabled version will never be used as an input to a build. If the only otherwise-valid input sets include a disabled version, then a build will not run.
This feature is useful to remove broken versions from being valid inputs to your pipeline. For example, if an upstream team unintentionally broke an external interface to their service and then fixed it in the next version then you could just disable the broken version.
Jobs can be paused to stop them from running even when new resource versions would otherwise cause them to start. If new versions become available, a build will be queued up, causing the job to pulse grey, but it will never start while the job is paused.
This can be useful to observe the results of a previously failed build that affected an external environment before the next run of the job starts and wipes out any evidence of the problem.
An entire pipeline can be paused which has the same effect as pausing every resource and job in that pipeline: no resources will be checked and no new builds will start.
This should be used when you need to stop everything quickly. Normally this is when something bad is about to happen or when you don't care about that pipeline right now.