There are countless CI options out there. Transitioning from one CI system to another can be a huge investment depending on the size of your project.

Part of this risk stems from projects being coupled to the intricacies of their existing CI software, and from having to learn the intricacies of the next one. Another part is simply the number of variables that can accidentally change when switching so many builds over by manually clicking around in the new system's wizard-like UI. To see how Concourse compares to other tools, check out Concourse vs..

Concourse's principles reduce the risk of switching to and from Concourse, by encouraging practices that decouple your project from your CI's little details, and keeping all configuration in declarative files that can be checked into version control. To get started, first learn the core concepts, and see how they tie together to define the mechanics of a pipeline.

Once you're ready, you can start installing and using Concourse in just a few minutes!

Concourse vs.

Concourse was dreamt up and built while working on the Cloud Foundry project and experiencing the interesting CI and CD challenges that it has. Cloud Foundry projects and their tests needed to run on different architectures, different platforms, and against varying underlying IaaSes, before being tested to ensure they could be deployed fresh while also remaining compatible with upgrades from previous versions. Finally, these projects needed to be packaged up in various ways to create multiple derivative end products.

We went through various different generations of our CI infrastructure and were never completely happy with one (or in some cases many) aspects of each attempt.


Jenkins

We started with Jenkins: the big and ubiquitous player in the CI world. Each team would maintain its own deployment and add builds as required. Standard.

After a while cracks began to show.

  • Jenkins servers become snowflakes. More and more important shell script gets pasted into those little textboxes, plugins get installed, and dependencies are installed on the agents. It's possible to try to store this state away, but Jenkins servers are extremely resistant to being declaratively configured.

    Concourse does not allow any configuration to enter the server that cannot be easily saved away in version control. Deleting your Concourse server and bringing it back somewhere else is not and should not be a big deal. Builds run every step in their own containers, with their own dependencies, so workers never need to be configured in the first place.

  • Jenkins has no first-class support for pipelines. There are plugins that add some of this functionality, but they never seemed to work perfectly. You can set jobs to run before or after others, but this becomes error-prone and confusing once you need more complex sequencing.

    Jenkins 2.0 tries to address this by promoting a Pipeline plugin (plus another plugin to visualize it), but it kind of misses the point. It, like the plugins before it, allows you to define a flow of scripts and plugins to execute, but the real heart of a pipeline in the Concourse world is resources, not the fact that it can run jobs one after another.

    Concourse is built on the assumption that you will be building pipelines. However, instead of making you explicitly say the order in which jobs should be run, Concourse just asks that you say which are valid inputs for each of the jobs in your pipeline and uses this to calculate the pipeline itself.

  • Everything you care about in Jenkins is accomplished by plugins. Concourse aims to never have a plugin architecture, and instead determine the generic features and user experience patterns that allow it to remain nimble, stable, and consistent.

    Rather than scouring the earth for a still-maintained plugin to accomplish your goals, Concourse encourages modeling what you need in terms of the few concepts that Concourse requires you to understand.

  • Trying to find the build output log for a failed build in Jenkins can be up to 3 clicks from the home page.

    The Concourse UI is built and designed around making that sort of information as quick to get to as possible. Getting to the most recent build log is a single click.

Travis CI and other Hosted Offerings

We began using Travis CI in order to open up our builds and their results to the open source community. Travis gets a lot of things right. They don't let you have any snowflake configuration outside of your .travis.yml file. Builds are run inside a container to avoid pollution. It also has great support for building your GitHub pull requests. This is something that Concourse cannot yet do but we have plans to allow the same workflow in the future.

Unfortunately it still doesn't have support for pipelines, and only very simple builds are possible. It also brings the frustration that if something doesn't pass in CI, you normally need to push up lots of little debugging commits to work out why it's behaving differently. Concourse lets you submit your local working directory up to the server (via fly execute) to be run in exactly the same manner as CI.


GoCD

We were excited for the release of GoCD. Finally, someone was bringing pipelines to CI as a first-class citizen! Cloud Foundry decided to switch and unify its disparate CI approaches into a single pipeline.

Unfortunately there were some problems with some of the GoCD design decisions that, in our opinion, make it very difficult to use.

  • Finding how to configure something is very difficult. There are two distinct sides of the application: watching jobs run and configuring them. You cannot be viewing a job run and easily get to the configuration of the same job. Everything is hidden in endless menus.

    As mentioned above, Concourse optimises for getting to the information you want quickly and keeps all configuration in a separate version-able file.

  • The execution hierarchy is deep and complex. Pipelines have stages which have jobs which have tasks. Some of these run in parallel and some of them run in serial. Artifacts from some are available in others. These concepts make sense eventually but ramping new developers up on how to use it was very difficult.

    What may be multiple pipelines in GoCD often conceptually maps to a single pipeline in Concourse. A Concourse pipeline just has isolated jobs that you can look at without needing to understand the entire pipeline. The flow of resources inside jobs in Concourse is explicit and user-defined. There are no surprises about when or where you'll have a resource available.

  • Another frustration with GoCD was that, by the time it was released publicly, it didn't bring anything new to the world of CI (first-class pipelines notwithstanding, which are great).

    If you'll indulge us tooting our own horn a second: we feel that Concourse brings some interesting new ideas to the table:

    • Pluggable resource interface

    • Running builds in containers natively

    • Zero snowflake-able configuration

    • Submitting builds from the local file system up to run in CI


Concepts

Concourse's end goal is to provide an expressive system with as few distinct moving parts as possible.

Concourse limits itself to three core concepts: tasks, resources, and the jobs that compose them. Interesting features like timed triggers and synchronizing usage of external environments are modeled in terms of these, rather than as layers on top.

With these primitives you can model any pipeline, from simple (unit → integration → deploy → ship) to complex (testing on multiple infrastructures, fanning out and in, etc.).

There are no more nooks and crannies of Concourse introduced as your pipeline becomes more involved.


Tasks

A task is the execution of a script in an isolated environment with dependent resources available to it. For example, running myrepo/scripts/test in a Docker container, with the working directory containing all of the task's dependencies.

If the script exits 0, the task succeeds. Otherwise, it fails.

A task can either be executed by a Job or executed manually with the Fly CLI. Both execute the same configuration, giving the guarantee that locally-executed tasks with Fly are running the same way they would in your pipeline.
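As a sketch, a task configuration is a small YAML file. All names and paths here are illustrative, not from the original text:

```yaml
# Hypothetical task config, e.g. stored at myrepo/scripts/test.yml.
platform: linux

image_resource:
  type: docker-image
  source: {repository: busybox}

inputs:
- name: myrepo   # dependency placed in the task's working directory

run:
  path: myrepo/scripts/test
```

You could then run it locally with something like fly -t ci execute -c myrepo/scripts/test.yml -i myrepo=., which runs the exact same configuration a job would.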

To learn more about configuring and running tasks, see Tasks.


Resources

A resource is any entity that can be checked for new versions, pulled down at a specific version, and/or pushed up to idempotently create new versions. A common example would be a git repository, but a resource can also represent more abstract things like time itself.

At its core, Concourse knows nothing about things like git. Instead, it consumes a generic interface implemented by resource types. This allows Concourse to be extended by configuring workers with resource type implementations.

This abstraction is immensely powerful, as it does not limit Concourse to whatever things its authors thought to integrate with. Instead, as a user of Concourse you can just reuse resource type implementations, or implement your own.

To use resources, configure them in your pipeline via the resources section, and use them in your Build Plans via the get and put steps.
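For instance, a pipeline might declare a git resource and then fetch and push it in a build plan. The names and URL below are hypothetical:

```yaml
resources:
- name: my-repo
  type: git
  source:
    uri: https://example.com/my-repo.git
    branch: master

jobs:
- name: bump-version
  plan:
  - get: my-repo        # fetch the latest version
  - put: my-repo        # push a new version (e.g. a commit) back up
    params: {repository: my-repo}
```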

See Resource Types to learn more about what types of resources Concourse supports.


Jobs

At a high level, a job describes some actions to perform when dependent resources change (or when manually triggered). For example, you may define a job that runs your unit tests whenever new code is pushed to a repository.

Jobs can be thought of as functions with inputs and outputs, that automatically run when new inputs are available. A job can depend on the outputs of upstream jobs, which is the root of pipeline functionality.

The definition of actions to perform is done via a Build Plan, which is a very powerful composition-based DSL that can express anything from running simple unit tests to running a matrix of tasks and aggregating the result.
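For example, a plan might fan out a matrix of tasks and aggregate their results. Everything below (resource, task names, file paths) is illustrative:

```yaml
plan:
- get: my-repo
- aggregate:        # run these steps in parallel; fail if any fails
  - task: unit-1.8
    file: my-repo/ci/unit-1.8.yml
  - task: unit-1.9
    file: my-repo/ci/unit-1.9.yml
- task: integration
  file: my-repo/ci/integration.yml
```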


Builds

An instance of execution of a job's plan is called a build. A build can either succeed or fail, or error if something unrelated to your code goes wrong (e.g. if one of your workers falls off the face of the earth).

When a build runs, the job's plan is realized. Each step described by the job's plan is executed, and so long as all Tasks succeed, the build succeeds. If a task fails, the build fails, and its resources do not propagate to the rest of the pipeline.

The containers running in a build can be accessed while they're running (and also shortly after they finish) via fly intercept, which can greatly help in debugging.

Mechanics of a Pipeline

Composing the three core Concepts together results in a pipeline.

A pipeline is the result of configuring jobs and resources. When you configure a pipeline, it takes on a life of its own, to continuously detect resource versions and schedule jobs over time. The mechanics of this are described below.

Scheduling with Resources

Resources are at the core of Concourse and are used to model all external dependencies that enter or exit your pipeline. This lets Concourse ensure that builds are repeatable and recorded.

Resources are also the only way to convey progression of artifacts through a pipeline. The order that jobs execute is determined by their dependencies on resources that made it through upstream jobs. This means you never actually configure "run job A and then job B" - instead you express the dependency between them via the data that's progressing through as a resource.
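So rather than saying "run job A and then job B", a pipeline expresses the dependency through a resource and a passed constraint. This is a hypothetical sketch; the resource and job names are made up:

```yaml
jobs:
- name: job-a
  plan:
  - get: my-repo
    trigger: true
  - task: unit
    file: my-repo/ci/unit.yml

- name: job-b
  plan:
  - get: my-repo
    passed: [job-a]   # only versions that made it through job-a
    trigger: true
  - task: integration
    file: my-repo/ci/integration.yml
```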

There are three core mechanics of a pipeline: detecting versions of its resources, determining the valid sets of versions for its jobs, and optionally auto-triggering jobs when new candidates appear.

Collecting Versions

Each pipeline automatically collects versions of each resource in chronological order. An example of versions of resources would be commits to a Git repository, via the Git resource.

New resource versions are discovered by a pipeline in two ways:

  • A new version can be found in an external resource by the check component of a resource finding a new set of versions when it runs. A check is run periodically for every resource in your pipeline.

  • A build can produce a new version of a resource by running a put step. This version is automatically detected and becomes available in later build steps and later pipeline jobs.

Both of these combined produce a stream of versions for a particular resource.

Filtering Versions

When writing a Job, its dependencies are declared via get steps. The set of versions available to a get step is filtered down by two things:

  • A passed constraint on the step itself will make sure that only versions that have passed a set of jobs will be considered for inputs. This means that they were either an input to a successful build of that job or they were a version that was created (via put) by a successful build of that job.

  • Versions of resources can be manually disabled if you know that a version is "bad" (i.e. backwards-incompatible) in a way that your pipeline itself did not discover.

  • The version to fetch may be pinned to a specific version or instructed to pick the next available one by configuring version on the step.
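All three filters above can appear on a single get step. This fragment is illustrative; the resource, job names, and ref are made up:

```yaml
- get: my-repo
  passed: [unit, lint]     # only versions that passed both jobs
  version: {ref: abc1234}  # or `version: latest` / `version: every`
```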

Putting these constraints together, the following illustration shows how the versions for a job's get step are determined:

  Version stream (newest first):    8  7  6  5  4  3  2  1
  Valid versions for the get step:  8  7  6  -  -  3  -  1

  (Versions 5, 4, and 2 are skipped by the filters described above.)

Note that with multiple get steps in a job's build plan, their passed constraints will be correlated, and only sets of versions that satisfy all constraints at once will be considered. The semantics here are nuanced but should quickly become pretty intuitive; see the docs on passed for more information.

Triggering Jobs

Jobs can either be manually triggered, by clicking the "+" sign on the build or job page, or automatically triggered when its dependencies change.

By default, jobs won't automatically run when a new set of candidate versions becomes available. This behaviour can be changed by adding trigger: true to any of the get steps in your plan. With that set, when a new set of candidates is discovered and the version for the get step is new, a build will be scheduled automatically. This is typically used throughout the jobs in a pipeline.

Note that only one build will be automatically scheduled at a time, and the input set is not determined until as late as possible (just before the build starts). This keeps builds from piling up as jobs with varying duration run throughout your pipeline. This also means that if you manually trigger a job you can trust that a redundant build won't be enqueued later (barring the fundamental race of a human competing with the automatic trigger).

Applying the Brakes

Sometimes your house just starts to burn down. It's no one's fault, but you want it to stop. In Concourse, there are various levels of "panic buttons" that let you deal with emergent behavior such as backwards incompatibility or your pipeline becoming sentient.

Pausing Resources

A resource check can be paused on its resource page. This has the effect that new versions of the resource will not be found by the check component of the resource. However, resource versions that were created in the pipeline will still propagate.

Pausing a resource and disabling versions (described next) are useful to stop broken versions of upstream resources from entering your pipeline, or to remove them if they have already been ingested. Broken versions can be disabled and the resource can be paused until the upstream product is fixed, or until your project adapts to the new upstream dependency.

Disabling a Resource Version

Versions can be explicitly invalidated by disabling them on their resource page. A disabled version will never be used as an input to a build. If every otherwise-valid set of input versions includes a disabled version, no build will run.

This feature is useful to remove broken versions from being valid inputs to your pipeline. For example, if an upstream team unintentionally broke an external interface to their service and then fixed it in the next version then you could just disable the broken version.

Pausing Jobs

Jobs can be paused to stop them running, even if there are new resource versions that would otherwise cause them to start. If new versions become available, a build will be queued up (which causes the job to pulse grey in the UI), but it will never start.

This can be useful to observe the results of a previously failed build that affected an external environment before the next run of the job starts and wipes out any evidence of the problem.

Pausing Pipelines

An entire pipeline can be paused which has the same effect as pausing every resource and job in that pipeline: no resources will be checked and no new builds will start.

This should be used when you need to stop everything quickly. Normally this is when something bad is about to happen or when you don't care about that pipeline right now.

Setting Up

How is Concourse formed? Well, first you'll need to install it. You may also want to skim the architecture to learn how the different components of Concourse interact, which may be important depending on how you're setting things up.

Beyond that you may want to learn more advanced topics like managing worker pools, restricting access, and configuring metrics, but if you're just kicking the tires you can probably skip right to Using Concourse immediately after you have things installed.


Installing

So you want to get yourself a Concourse.

There are a few supported ways to get going. All distributions of Concourse are equivalent in feature set, the only difference being operational concerns like scaling up your workers, and how quickly you can bootstrap.

For the quickest introduction, you may want to spin up a local VM with Vagrant. This should take only a few minutes, depending on how quickly you can download the image.

Once you're ready to install Concourse somewhere more proper, have a look at the standalone binary or the Docker repository.

Or, if you want to invest a bit in spinning up a fully automated cluster using a tool that meshes well with Concourse's ideals, buckle up and head over to Clusters with BOSH.

Local VM with Vagrant

The quickest way to spin up a fully functioning Concourse is with Vagrant.

Just run the following in any directory:

vagrant init concourse/lite # creates ./Vagrantfile
vagrant up                  # downloads the box and spins up the VM

The web server will be running at http://192.168.100.4:8080.

While this isn't exactly production ready, it may be enough depending on your project's needs. Given that Concourse is stateless, you can always hoist your pipeline onto a bigger installation when you're ready, so there's little risk in sticking with the Vagrant boxes while you figure things out.


Note that upgrading Concourse in Vagrant unfortunately requires re-creating the machine, which will wipe out all of your submitted pipelines, build history, etc. If Vagrant gives a warning of A newer version of the box 'concourse/lite' is available!, run the following:

vagrant box update --box concourse/lite # gets the newest Vagrant box
vagrant destroy                         # remove the old Vagrant box
vagrant up                              # re-create the machine with the newer box

Standalone Binary

At some point you may want to start putting Concourse on to real hardware. A binary distribution is available in the downloads section.

The binary is fairly self-contained, making it ideal for tossing onto a VM by hand or orchestrating it with Docker, Chef, or other ops tooling.


  • Grab the appropriate binary for your platform from the downloads section.

  • For Linux you'll need kernel v3.19 or later, with user namespace support enabled. Windows and Darwin don't really need anything special.

  • PostgreSQL 9.3+

Generating Keys

To run Concourse securely you'll need to generate 3 private keys (well, 2, plus 1 for each worker):

session_signing_key (currently must be RSA)

Used for signing user session tokens, and by the TSA to sign its own tokens in the requests it makes to the ATC.


tsa_host_key

Used for the TSA's SSH server. This is the key whose fingerprint the ssh command shows when it warns you about connecting to a host it hasn't seen before.

worker_key (one per worker)

Used for authorizing worker registration. There can actually be an arbitrary number of these keys; they are just listed to authorize worker SSH access.

To generate these keys, run:

ssh-keygen -t rsa -f tsa_host_key -N ''
ssh-keygen -t rsa -f worker_key -N ''
ssh-keygen -t rsa -f session_signing_key -N ''

...and we'll also start an authorized_keys file, currently listing this initial worker key:

cp worker_key.pub authorized_worker_keys

Starting the Web UI & Scheduler

The concourse binary embeds the ATC and TSA components, available as the web subcommand.

The ATC is the component responsible for scheduling builds, and also serves as the web UI and API.

The TSA provides an SSH interface for securely registering workers, even if they live in their own private network.

Single node, local Postgres

The following command will spin up the ATC, listening on port 8080, with some basic auth configured, and a TSA listening on port 2222.

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --external-url http://ci.example.com

This assumes you have a local Postgres server running on the default port (5432) with an atc database, accessible by the current user. If your database lives elsewhere, just specify the --postgres-data-source flag, which is also demonstrated below.

Be sure to replace the --external-url flag with the URI you expect to use to reach your Concourse server.

In the above example we've configured basic auth for the main team. For further configuration see Configuring Auth.

Cluster with remote Postgres

The ATC can be scaled up for high availability; multiple ATCs will also roughly share their scheduling workloads, using the database to synchronize.

The TSA can also be scaled up, and requires no database as there's no state to synchronize (it just talks to the ATC).

A typical configuration with multiple ATC+TSA nodes would have them sitting behind a load balancer, forwarding port 80 to 8080, 443 to 4443 (if you've enabled TLS), and 2222 to 2222.

To run multiple web nodes, you'll need to pass the following flags:

  • --postgres-data-source should all refer to the same database

  • --peer-url should be a URL used to reach the individual ATC, from other ATCs, i.e. a URL usable within their private network

  • --external-url should be the URL used to reach any ATC, i.e. the URL to your load balancer

For example:

Node 0:

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-data-source postgres://user:pass@10.0.32.10/concourse \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.10:8080

Node 1 (only difference is --peer-url):

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-data-source postgres://user:pass@10.0.32.10/concourse \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.11:8080

Starting Workers

Workers are Garden servers, continuously heartbeating their presence to the Concourse API. Workers have a statically configured platform and a set of tags, both of which determine where steps in a Build Plan are scheduled.

Linux workers come with a set of base resource types. If you plan to use them, you'll need at least one Linux worker.

You may want a few workers, depending on the resource usage of your pipeline. There should be one per machine; running multiple on one box doesn't really make sense, as each worker runs as many containers as Concourse requests of it.

To spin up a worker and register it with your Concourse cluster running locally, run:

sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key

Note that the worker must be run as root, as it orchestrates containers.

The --work-dir flag specifies where container data should be placed; make sure it has plenty of disk space available, as it's where all the disk usage across your builds and resources will end up.

The --tsa-host refers to wherever your TSA node is listening, by default on port 2222 (pass --tsa-port if you've configured it differently). This may be an address to a load balancer if you're running multiple web nodes, or just an IP, perhaps if you're running everything on one box.

The --tsa-public-key flag is used to ensure we're connecting to the TSA we should be connecting to, and is used like known_hosts with the ssh command. Refer to Generating Keys if you're not sure what this means.

The --tsa-worker-private-key flag specifies the key to use when authenticating to the TSA. Refer to Generating Keys if you're not sure what this means.

Docker Repository

The standalone binary is available in an official Docker repository: concourse/concourse. There are tags for each version, including release candidates if you want to live on the bleeding edge.

For a quick start with Docker Compose, create docker-compose.yml:


version: '3'

services:
  concourse-db:
    image: postgres:9.5
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse
      POSTGRES_PASSWORD: changeme
      PGDATA: /database

  concourse-web:
    image: concourse/concourse
    links: [concourse-db]
    command: web
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    volumes: ["./keys/web:/concourse-keys"]
    restart: unless-stopped # required so that it retries until concourse-db comes up
    environment:
      CONCOURSE_BASIC_AUTH_USERNAME: concourse
      CONCOURSE_BASIC_AUTH_PASSWORD: changeme
      CONCOURSE_EXTERNAL_URL: "${CONCOURSE_EXTERNAL_URL}"
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse
      CONCOURSE_POSTGRES_PASSWORD: changeme
      CONCOURSE_POSTGRES_DATABASE: concourse

  concourse-worker:
    image: concourse/concourse
    privileged: true
    links: [concourse-web]
    depends_on: [concourse-web]
    command: worker
    volumes: ["./keys/worker:/concourse-keys"]
    environment:
      CONCOURSE_TSA_HOST: concourse-web

Then, run the following to generate the necessary keys:

mkdir -p keys/web keys/worker

ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''

ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''

cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker

The next thing you'll need is an address that can be used to reach the web node from within your network. This is the $CONCOURSE_EXTERNAL_URL variable. It can't be 127.0.0.1 or localhost, as it also has to work from a separate network namespace, so that tasks run with fly execute can receive your uploaded bits from the web node.

If you're using docker-machine, set CONCOURSE_EXTERNAL_URL to whatever the machine's IP is, for example:
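A minimal sketch, assuming the machine's IP is 192.168.99.100 (use whatever docker-machine ip actually prints for your machine):

```shell
# Point Concourse's external URL at the docker-machine VM (IP is illustrative).
export CONCOURSE_EXTERNAL_URL=http://192.168.99.100:8080
echo "$CONCOURSE_EXTERNAL_URL"
```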


Then, spin everything up:

docker-compose up

Next, browse to your configured external URL (likely http://localhost:8080, or your docker-machine IP on port 8080) and log in with the username concourse and password changeme.

Given that the Docker repository simply wraps the binary, you'll want to reference the binary documentation or just run docker run concourse/concourse <command> --help if you want to learn how to do any further configuration.

Every flag can also be configured via environment variables in the form of CONCOURSE_(flag), where (flag) is the uppercased flag name with all hyphens converted to underscores. For example, --basic-auth-username would be CONCOURSE_BASIC_AUTH_USERNAME.
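The transformation is purely mechanical. As a sketch, deriving the variable name from a flag in shell (the helper itself is hypothetical, not part of Concourse):

```shell
# Derive the CONCOURSE_* env var name from a CLI flag (illustrative helper).
flag="--basic-auth-username"
env_var="CONCOURSE_$(printf '%s' "${flag#--}" | tr 'a-z-' 'A-Z_')"
echo "$env_var"   # CONCOURSE_BASIC_AUTH_USERNAME
```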

In the above example we've configured basic auth for the main team. For further configuration see Configuring Auth.

Clusters with BOSH

Deploying Concourse with BOSH provides a scalable cluster with health management and rolling upgrades.

If you're not yet familiar with BOSH, learning it will be a bit of an investment, but it should pay off in spades. There are a lot of parallels between the philosophy of BOSH and Concourse.

Prepping the Environment

To go from nothing to a BOSH-managed Concourse, you'll generally need to do the following:

  • Deploy a BOSH director.

  • Set up its Cloud Config, defining the VM types, networks, and disk types your deployment will reference.

  • Upload a stemcell to the director.

You can skip some of this if you already have a working BOSH director running BOSH v255.4 or later.

Deploying Concourse

Once you've got all that set up, download the releases listed for your version of Concourse from the downloads section, and upload them to the BOSH director.

Next you'll need a Concourse BOSH deployment manifest. An example manifest is below; you'll want to replace the REPLACE_ME bits with whatever values are appropriate.

Note that the VM types, VM extensions, persistent disk type, and network names must come from your Cloud Config. Consult Prepping the Environment if you haven't set it up yet. You can retrieve your Cloud Config by running bosh cloud-config.

name: concourse

releases:
- name: concourse
  version: latest
- name: garden-runc
  version: latest

stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: web
  instances: 1
  # replace with a VM type from your BOSH Director's cloud config
  vm_type: REPLACE_ME
  # replace with a VM extension from your BOSH Director's cloud config that will attach
  # this instance group to your ELB
  vm_extensions: [REPLACE_ME]
  stemcell: trusty
  azs: [z1]
  networks: [{name: private}]
  jobs:
  - name: atc
    release: concourse
    properties:
      # replace with your CI's externally reachable URL
      external_url: REPLACE_ME

      # replace with username/password, or configure GitHub auth
      basic_auth_username: REPLACE_ME
      basic_auth_password: REPLACE_ME

      # replace with your SSL cert and key
      tls_cert: REPLACE_ME
      tls_key: REPLACE_ME

      postgresql_database: &atc_db atc
  - name: tsa
    release: concourse
    properties: {}

- name: db
  instances: 1
  # replace with a VM type from your BOSH Director's cloud config
  vm_type: REPLACE_ME
  stemcell: trusty
  # replace with a disk type from your BOSH Director's cloud config
  persistent_disk_type: REPLACE_ME
  azs: [z1]
  networks: [{name: private}]
  jobs:
  - name: postgresql
    release: concourse
    properties:
      databases:
      - name: *atc_db
        # make up a role and password
        role: REPLACE_ME
        password: REPLACE_ME

- name: worker
  instances: 1
  # replace with a VM type from your BOSH Director's cloud config
  vm_type: REPLACE_ME
  # replace with a VM extension from your BOSH Director's cloud config that will attach
  # sufficient ephemeral storage to VMs in this instance group.
  vm_extensions: [REPLACE_ME]
  stemcell: trusty
  azs: [z1]
  networks: [{name: private}]
  jobs:
  - name: groundcrew
    release: concourse
    properties: {}
  - name: baggageclaim
    release: concourse
    properties: {}
  - name: garden
    release: garden-runc
    properties:
      garden:
        listen_network: tcp

update:
  canaries: 1
  max_in_flight: 1
  serial: false
  canary_watch_time: 1000-60000
  update_watch_time: 1000-60000

You may also want to consult the property descriptions for your version of the Concourse BOSH release to see what properties you can/should tweak.

You may also want to consult Configuring Auth for configuring something other than basic auth for the main team.

Once you've got a manifest, just deploy it!

Reaching the web UI

This really depends on your infrastructure. If you're deploying to AWS you may want to configure the web VM type to register with an ELB, mapping port 80 to 8080, 443 to 4443 (if you've configured TLS), and 2222 to 2222.

Otherwise you may want to configure static_ips for the web instance group and just reach the web UI directly.

Upgrading & maintaining Concourse

With BOSH, the deployment manifest is the source of truth. This is very similar to Concourse's own philosophy, where all pipeline configuration is defined in a single declarative document.

So, to add more workers or web nodes, just change the instances value for the instance group and re-run bosh deploy.
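For example, growing the worker pool is a one-value change in the manifest (a fragment, using the instance group name from the example manifest):

```yaml
- name: worker
  instances: 3   # previously 1; apply with `bosh deploy`
```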

To upgrade, just upload the new releases and re-run bosh deploy.

Supporting external workers

If you need workers that run outside of your BOSH managed deployment (e.g. for testing with iOS or in some special network), you'll need to make some tweaks to the default configuration of the tsa job.

The TSA is the entryway for workers to join the cluster. For every new worker key pair, the TSA will be told to authorize its public key, and the workers must also know the TSA's public key ahead of time, so they know who they're connecting to.

Configuring the TSA's host key

First you'll need to remove the "magic" from the default deployment. By default, Concourse generates a key pair for the TSA, and gives the public key to the workers so they can trust the connection. This key occasionally gets cycled, which is fine so long as things are in one deployment, but once you have external workers you would have to update them all manually, which is annoying.

To fix this, generate a passwordless key pair via ssh-keygen, and provide the following properties in your BOSH deployment manifest:
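A minimal sketch of generating such a key pair (the file name tsa_host_key is an arbitrary choice):

```shell
# Generate a passwordless RSA key pair for the TSA's host key.
ssh-keygen -t rsa -b 4096 -N '' -f tsa_host_key

# tsa_host_key     -> goes into the tsa job's host_key property
# tsa_host_key.pub -> goes into host_public_key, and is given to workers
```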

host_key on the tsa job

the contents of the private key for the TSA server

host_public_key on the tsa job

the public key of the TSA, for the workers to use to verify the connection

For example (note that this manifest omits a bunch of stuff):

- name: web
  jobs:
  - name: tsa
    release: concourse
    properties:
      host_key: |
        -----BEGIN RSA PRIVATE KEY-----
        <a bunch of stuff>
        -----END RSA PRIVATE KEY-----
      host_public_key: "ssh-rsa blahblahblah"
  - # ...

After setting these properties, be sure to run bosh deploy.

Authorizing worker keys

We've left the worker keys auto-generated so far, which is fine for workers deployed alongside the TSA, as it'll also automatically authorize them.

External workers however will need their own private keys, and so the TSA must be told to authorize them.

To do so, set the following properties:

authorized_keys on the tsa job

the array of public keys to authorize

tsa.private_key on the groundcrew job

the private key for the worker to use when accessing the TSA

tsa.host and tsa.host_public_key on the groundcrew job

if the worker is in a separate deployment, these must be configured so the worker can reach the TSA and verify its identity

garden.forward_address on the groundcrew job

if the worker is in a separate deployment, this must be the locally-reachable Garden address to forward through the TSA; e.g. 127.0.0.1:7777

baggageclaim.forward_address on the groundcrew job

if the worker is in a separate deployment, this must be the locally-reachable Baggageclaim address to forward through the TSA; e.g. 127.0.0.1:7788

Once again, after setting these properties run bosh deploy to make the changes take place.
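Putting these pieces together, a sketch of the relevant fragments of the two deployments (the key material, hostname, and addresses are placeholders, and both manifests omit everything else):

```yaml
# Main deployment: authorize the external worker's public key.
- name: web
  jobs:
  - name: tsa
    release: concourse
    properties:
      authorized_keys:
      - "ssh-rsa <worker public key>"

# Worker deployment: connect to the TSA and forward local servers through it.
- name: worker
  jobs:
  - name: groundcrew
    release: concourse
    properties:
      tsa:
        host: ci.example.com          # placeholder: your TSA's address
        private_key: |
          -----BEGIN RSA PRIVATE KEY-----
          <worker private key>
          -----END RSA PRIVATE KEY-----
        host_public_key: "ssh-rsa <TSA public key>"
      garden:
        forward_address: 127.0.0.1:7777
      baggageclaim:
        forward_address: 127.0.0.1:7788
```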

Making the TSA reachable

Typically the TSA and ATC will both be colocated in the same instance group. This way a single load balancer can be used with the following scheme:

  • expose port 443 to 8080 (ATC's HTTP port) via SSL

  • expose port 2222 to 2222 (TSA's SSH port) via TCP

Be sure to update any relevant security group rules (or equivalent in non-AWS environments) to permit both access from the outside world to port 2222 on your load balancer, and access from the load balancer to port 2222 on your TSA + ATC instances.

The BOSH deployment manifest would then colocate both jobs together, like so:

- name: web
  vm_type: web_lb
  jobs:
  - name: atc
    release: concourse
    # ...
  - name: tsa
    release: concourse
    # ...

In AWS, the web_lb VM type would then configure cloud_properties.elbs to auto-register instances of web with an ELB. See the AWS CPI docs for more information.

Concourse Architecture

This topic isn't crucial to understanding Concourse; if you're just getting started and have finished the Installing section, you may want to first move on to Using Concourse.

Concourse is a fairly simple distributed system built up from the following components. You'll see them referenced here and there throughout the documentation, so you may want to skim this page just to get an idea of what they are.

ATC: web UI & build scheduler

The ATC is the heart of Concourse. It runs the web UI and API and is responsible for all pipeline scheduling. It connects to PostgreSQL, which it uses to store pipeline data (including build logs).

Multiple ATCs can be running as one cluster; as long as they're all pointing to the same database, they'll synchronize using basic locking mechanisms and roughly spread work across the cluster.

The ATC by default listens on port 8080, and is usually colocated with the TSA and sitting behind a load balancer.

Note: for intercept to function, make sure your load balancer is configured to do TCP or SSL forwarding, not HTTP or HTTPS.

TSA: worker registration & forwarding

The TSA is a custom-built SSH server that is used solely for securely registering workers with the ATC.

The TSA only supports two commands: register-worker and forward-worker.

The register-worker command is used to register a worker directly with the ATC. This should be used if the worker is running in the same (private) network as the ATC.

The forward-worker command is used to reverse-tunnel a worker's addresses through the TSA and register the forwarded connections with the ATC. This allows workers running in arbitrary networks to register securely, so long as they can reach the TSA. This is much safer than opening the worker up to the outside world.

The TSA by default listens on port 2222, and is usually colocated with the ATC and sitting behind a load balancer.

Workers: container runtime & cache management

Workers are machines running Garden and Baggageclaim servers and registering themselves via the TSA.

Workers have no important state configured on their machines, as everything runs in a container and thus shouldn't care about what packages are installed on the host (well, except for those that allow it to be a worker in the first place). This is very different from workers in other non-containerized CI solutions, where the state of packages on the worker is crucial to whether your pipeline works or not.

Each worker registers itself with the Concourse cluster via the TSA.

Workers by default listen on port 7777 for Garden and port 7788 for Baggageclaim. If they are within a private network reachable by the ATC, they'll probably bind on all addresses (0.0.0.0) and register themselves directly. Otherwise they should bind on 127.0.0.1 and forward themselves through the TSA.

Worker Pools

This topic isn't crucial to understanding Concourse; if you're just getting started and have finished the Installing section, you may want to first move on to Using Concourse.

The ATC does all of its scheduling using the pool of workers registered with it. There is no way to configure the workers through the ATC itself; instead, each worker continuously heartbeats its presence to the ATC, along with basic statistics like how many containers it's running. To learn more about how this works, see Registering via the TSA.

Each worker has a platform and an optional set of tags, both of which determine the workloads that the ATC can run on the worker. A Task that configures its platform as linux can only run on workers advertising the same platform. Similarly, a step in a build plan that configures tags can only run on workers matching a superset of those tags.

A worker's platform is typically one of linux, windows, or darwin, but this is just convention; if you have some esoteric platform like solaris or windows-16bit or super-nintendo the same rules apply.

A worker's tags are entirely up to you: you would configure them if you know that the worker meets some special requirement, such as living within a particular network or having particular hardware capabilities. If a worker specifies tags, it is taken out of the "default" placement pool, and steps only run on the worker if they explicitly specify a common subset of the worker's tags, by setting tags.
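As a rough sketch (not Concourse code), the placement rule works like this: platforms must match exactly, a tagged step requires a worker advertising all of its tags, and an untagged step never lands on a tagged worker:

```shell
# Hypothetical sketch of the ATC's worker-placement rule.
step_fits_worker() {
  step_platform=$1; worker_platform=$2; step_tags=$3; worker_tags=$4

  # Platforms must match exactly.
  [ "$step_platform" = "$worker_platform" ] || return 1

  # A tagged worker leaves the default pool: untagged steps skip it.
  [ -n "$worker_tags" ] && [ -z "$step_tags" ] && return 1

  # Every step tag must be advertised by the worker.
  for tag in $step_tags; do
    case " $worker_tags " in
      *" $tag "*) ;;        # tag advertised by the worker
      *) return 1 ;;        # missing tag: worker doesn't qualify
    esac
  done
  return 0
}

step_fits_worker linux linux "hetzner" "hetzner private" && echo "step fits"
# prints: step fits
```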

Each worker also advertises which resource types it supports natively. This is just a mapping from resource type (e.g. git) to the location of the resource's image on the worker (e.g. /opt/resources/git). You likely won't have to configure this yourself as all official Concourse distributions have a default set of core resource types that they'll automatically advertise, and the rest should really be configured in your pipeline via resource_types rather than being added to your workers.

The /api/v1/workers API

The most direct way to see existing workers and register new ones is via the /api/v1/workers API endpoint provided by the ATC.

It supports the following methods:

GET /api/v1/workers

List the current worker pool.

POST /api/v1/workers[?ttl=30s]

Register a worker, with an optional TTL.

For example, to register a worker for 10 seconds:

curl -XPOST 'http://<atc-host>:8080/api/v1/workers?ttl=10s' -d '{
  "name": "image-building-box",
  "platform": "linux",
  "tags": ["hetzner"],
  "addr": "",
  "baggageclaim_url": "",
  "active_containers": 123,
  "resource_types": [
    {"type": "git", "image": "/opt/concourse/resources/git"}
  ]
}'

The ?ttl=10s means that the worker's registration will go away in 10 seconds. Workers should continuously heartbeat their presence, so that if/when they go away the TTL will expire. Note that there is currently no way to explicitly unregister a worker; instead you should just submit a low TTL (e.g. 1s).

The worker JSON object contains the following attributes:

name

Optional. A logical name for the worker. This should be something that stays the same throughout the worker's lifecycle, for example the hostname. If not specified, the addr is used.

platform

Required. The platform supported by the worker, e.g. linux, darwin, or windows.

tags

Optional. A set of arbitrary tags. Only steps specifying a matching subset of these tags will be placed on the worker.

addr

Required. The address of the Garden server. Note that this address must be reachable by the ATC, and has no authentication. For this reason it should always be an address only reachable from within Concourse's private network. To register external workers, see Registering via the TSA.

baggageclaim_url

Optional. The API endpoint for a Baggageclaim server to use for caching artifacts on the worker. If not specified, caching will not be utilized.

active_containers

Optional. The number of containers currently running on the worker.

resource_types

Optional. The set of resource types supported by the worker. If specified, the worker may be used for running resource containers of the given type, using the specified image URI.

Registering via the TSA

Using the /api/v1/workers API directly is a bit inconvenient. Your workers need credentials for the ATC, and must advertise an address that the ATC can reach. The API consumer would also have to continuously heartbeat so long as the server it's advertising is healthy.

A typical Concourse installation includes a component called the TSA. The TSA is used to securely register workers via SSH, and continuously health-check and heartbeat them to the ATC. These workers can come from any network that can reach the TSA, which is usually colocated with the ATC and possibly sitting behind the same load balancer (but on a different port, usually 2222).

This is slightly different from how most other CI systems work. Concourse inverts the dependency flow so that scaling up workers can be done easily without reconfiguring the ATC.

This flow also makes it easier to securely register remote workers living in private networks. Rather than making the worker VM publicly reachable (very dangerous!) or using a VPN (a bit sketchy depending on where the ATC lives, and often error prone), workers can listen on 127.0.0.1 and forward their connections through the TSA via a reverse SSH tunnel.

Standalone binary workers

If you're using the standalone binaries, the SSH aspect of worker registration is automated for you; you'll just need to generate a key pair for the worker and configure the workers with the expected TSA host key. See Starting Workers for more details.

If you're registering standalone binary-provisioned workers with a BOSH-deployed Concourse, you'll want to go through the Supporting external workers section first.

BOSH deployed workers

If you have some other environment separate from your primary Concourse deployment, but still manageable by BOSH, configuration is done via properties, pointing at the main deployment.

To BOSH deploy your workers, follow largely the same instructions as in Clusters with BOSH, but you may strip things down a bit, e.g. don't bother deploying tsa, atc, or postgresql for your workers.

If you're registering these workers with a BOSH-deployed Concourse, you'll want to go through the Supporting external workers section first.

You'll also need to configure the workers to register with your main cluster. The Concourse release includes a job called groundcrew, which can be configured to register a (usually colocated) worker with a TSA.

To configure Groundcrew, you may want to set some of the following properties:


tags

An array of tags to advertise for the worker. You'll probably want to specify this, so that only steps explicitly targeting the worker (via their own tags) run on it.

platform

The platform to advertise for the worker. Defaults to linux.

tsa.host

The address of the TSA server. This will probably be the address of your external Concourse cluster.

tsa.port

The port of the TSA server. Defaults to 2222.

tsa.private_key

The private key to use when authenticating with the TSA. This must be authorized by the TSA in advance.

For more information, see Supporting external workers for BOSH or Generating Keys for the standalone binaries.

tsa.host_public_key

The public key to expect when connecting to the TSA.

For more information, see Supporting external workers for BOSH or Generating Keys for the standalone binaries.

Note that most properties have sane defaults. To see a full set of properties and what a manifest may look like when specifying them, browse the release documentation.

With the above properties, we know what kind of worker we'll be advertising, but not how the ATC will reach it. There are two options: either directly advertise an address reachable by the ATC, or forward a local address via a reverse SSH tunnel to the TSA, which will then advertise the tunnelled address to the ATC.

Registering a worker directly

To directly advertise an address to the TSA, set the following properties on the groundcrew job:


garden.address

The address of the Garden server to advertise to the TSA. Note that this must be the external address. If omitted, Groundcrew will automatically determine this address, so you probably won't need to specify it.


baggageclaim.url

The address of the Baggageclaim server to advertise to the TSA. Note that this must be the external address. If omitted, Groundcrew will automatically determine this address, so you probably won't need to specify it.

You would do this if your worker is not reachable by the outside world, but is reachable by the ATC. For example, a separate deployment within the same VPC. (Note: making a Garden server publicly reachable is a very bad idea.)

Forwarding a local Garden server

To forward a local Garden server through the TSA, set the following properties on the groundcrew job:


garden.forward_address

The locally-reachable Garden address to forward through the TSA, e.g. 127.0.0.1:7777.


baggageclaim.forward_address

The locally-reachable Baggageclaim address to forward through the TSA, e.g. 127.0.0.1:7788.

You would do this if your worker lives in a private network (e.g. a local cluster), but your TSA is publicly reachable (which is much safer).


Metrics

This topic isn't crucial to understanding Concourse; if you're just getting started and have finished the Installing section, you may want to first move on to Using Concourse.

Metrics are essential in understanding how any large system is behaving and performing. Concourse can emit metrics about both the system health itself and about the builds that it is running. Operators can tap into these metrics in order to observe the health of the system.

In the spirit of openness, the metrics from our deployment are public. We consider it a bug to emit anything sensitive or secret into our metrics pipeline.

Concourse components and virtual machines emit metrics to Riemann. Riemann handles the processing of this event stream and the forwarding to various Time Series Databases (TSDBs). To fully understand how to take advantage of Concourse metrics you should familiarise yourself with the concepts in Riemann. I'll wait here while you read through that (don't worry, it's not very long!).

Riemann events can contain both tags and attributes, and we use them for different things in Concourse metrics. We use custom attributes to tag all metrics with context-specific information, such as the deployment, pipeline, or build that the metric relates to. We don't use the standard built-in tags much: they are normally used for conditionals in Riemann's stream processing, which we don't rely on, and they don't provide the key-value storage that we need. This is our own convention and isn't shared by everything in the Riemann community, but we find that it works well.

Deploying the Metrics Infrastructure

We've made a few BOSH releases that you can colocate and deploy to get a similar setup to ours. If you will be emitting to DataDog then you will only need the Riemann release.

The documentation for each of these lives in the releases themselves.

If you want to mirror our setup then you should set up Riemann to emit metrics to InfluxDB and then point Grafana at the same InfluxDB.

If you set your Concourse to log everything at or above the debug level then all metrics will be logged as well as emitted. This is useful if you haven't yet set up a Riemann server.

Concourse Metrics

This reference section lists all of the metrics that Concourse emits. We don't include the warning and critical levels as they will keep changing as we optimise the system. To find those, please refer to the source of truth: the code.

scheduling: full duration (ms)

This is the time taken (in milliseconds) to schedule an entire pipeline including the time taken to load the version information from the database and calculate the latest valid versions for each job.



pipeline

The pipeline which was being scheduled.

scheduling: loading versions duration (ms)

This is the time taken (in milliseconds) to load the version information from the database.



pipeline

The pipeline which was being scheduled.

scheduling: job duration (ms)

This is the time taken (in milliseconds) to calculate the set of valid input versions when scheduling a job. It is emitted once for each job per pipeline scheduling tick.



pipeline

The pipeline which was being scheduled.


job

The job which was being scheduled.

worker containers

The number of containers that are currently running on your workers.



worker

The name of the worker.

worker volumes

The number of volumes that are currently present on your workers.



worker

The name of the worker.

build started

This event is emitted when a build starts. Its value is the build ID of the build. However, it is most useful for annotating your metrics with the start and end of different jobs.



pipeline

The pipeline which contains the build being started.


job

The job which configured the build being started.


build_name

The name of the build being started. (Remember that build numbers in Concourse are actually names and are strings).


build_id

The ID of the build being started.

build finished

This event is emitted when a build ends. Its value is the duration of the build in milliseconds. You can use this metric in conjunction with build started to annotate your metrics with when builds started and stopped.



pipeline

The pipeline which contains the build that finished.


job

The job which configured the build that finished.


build_name

The name of the build that finished. (Remember that build numbers in Concourse are actually names and are strings).


build_id

The ID of the build that finished.


build_status

The resulting status of the build; one of "succeeded", "failed", "errored", or "aborted".

http response time

This metric is emitted for each HTTP request to an ATC (both API and web requests). It contains the duration (in milliseconds) for each request and is useful for finding slow requests.



route

The route which the HTTP request matched, e.g. /builds/:id.


path

The literal path of the HTTP request, e.g. /builds/1234.

Service and Infrastructure Metrics

You may have seen on our dashboard that we have other metrics in addition to those defined above. Riemann has existing tooling for gathering metrics from machines and other common services. These can be found in the riemann-tools gem. We use riemann-health, riemann-aws-rds-status, and riemann-net.


Encryption

This topic isn't crucial to understanding Concourse; if you're just getting started and have finished the Installing section, you may want to first move on to Using Concourse.

Automating everything means authorizing something to automate many things. This makes CI systems a high-risk target for security leaks.

Concourse pipelines are loaded with credentials: resources are configured with private keys, tasks are given credentials to servers they integrate against via params, etc. etc. etc. If someone gets their hands on your config, they have access to everything.

To mitigate this, Concourse supports encrypting sensitive information before it reaches the database. This way the plaintext credentials only exist in memory for as long as they need to, and if someone gains access to your database, they can't so easily gain the keys to the kingdom.

We strongly encourage anyone running Concourse to configure encryption. In the long run, Concourse shouldn't be storing these credentials in the first place; that's the long-term plan. But encryption will remain necessary even then, if only to store the keys to a credential manager once we support one.

What's encrypted?

The following values are expected to contain credentials, and so will be encrypted:

  • Resource sources, as they often contain private keys and other credentials for writing to (or simply granting access to) the resource.

  • Resource type sources, for the same reason as above, though this is probably a less common use case.

  • Task step params, as they are often used to configure access to e.g. an external deployment that the task is integrating with.

  • Put step params and get step params are also encrypted, even though they rarely should contain credentials (they're usually in source).

  • Team auth configurations, as they often contain things like GitHub or other oAuth client secrets.

Note that the actual implementation encrypts things in a more heavy-handed way than the above list implies. For example, pipeline configs are actually encrypted as one large blob.

Notably, the following things are NOT encrypted:

  • Build logs. If your jobs are outputting credentials, encryption won't help you. We have chosen not to tackle this initially as it would introduce a performance burden for what is not as much of an obvious win.

  • Resource versions. These should never contain credentials, and are often meaningless on their own.

  • Resource metadata. These are visible to anyone if your pipeline is exposed, and should never contain credentials.

  • Pipeline names, job names, etc. - anything else that is not a high-risk target for credential leakage, as opposed to regular information leaks.

    Resources and jobs in particular exist in their own tables, with their names in plaintext, and only their config encrypted. In this way, names are not protected, even though the pipeline config itself is also stored as one big encrypted blob.
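As an illustration, here is a hypothetical pipeline fragment annotated with what would be stored encrypted versus in plaintext:

```yaml
resources:
- name: my-repo        # resource name: plaintext
  type: git
  source:              # resource source: encrypted
    uri: git@github.com:example/repo.git
    private_key: <credential>

jobs:
- name: deploy         # job name: plaintext
  plan:
  - get: my-repo
  - task: push
    file: my-repo/push.yml
    params:            # task step params: encrypted
      API_TOKEN: <credential>
```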

Enabling Encryption

To enable encryption, you'll just need to come up with a 16- or 32-byte random character sequence and pass it via the --encryption-key flag to the web command. For BOSH, this is the encryption_key property.
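For example, one way (among many) to generate a suitable key, assuming openssl is available:

```shell
# 16 random bytes, hex-encoded, yields a 32-character key for --encryption-key.
ENCRYPTION_KEY="$(openssl rand -hex 16)"
echo "${#ENCRYPTION_KEY}"
# prints: 32
```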

On startup, the ATC will encrypt all existing plaintext data, and any new data being written will be encrypted before it's sent over the network to the database.

The initial bulk encryption shouldn't take too long, but it will scale linearly with the amount of data that you have, and if another ATC is running it'll suddenly not be able to read the data until it's also given the key. So, expect some downtime.

Rotating the Encryption Key

To swap out the encryption key, you'll need to pass the previous key as --old-encryption-key (or old_encryption_key), and the new key as --encryption-key (or encryption_key).

On startup, the ATC will decrypt all existing data and re-encrypt it with the new key, in one go. If it encounters a row which is already encrypted with the new key, it will continue on (as may be the case when restarting with the flags again, or if the ATC died in the middle of rotating).

If the ATC encounters a row which cannot be decrypted with either the old key or the new one, it will log loudly and fail to start, telling you which row it choked on. This data must be dealt with in some way, either by re-configuring the key the row was encrypted with as the old key, or manually performing database surgery to remove the offending row. Hopefully this doesn't happen to you!

Disabling Encryption

To opt out of encryption entirely (I'm sure you have your reasons), simply pass --old-encryption-key (or old_encryption_key) alone. With no new encryption key, the ATC will decrypt all existing data on start.

Credential Management

This topic isn't crucial to understanding Concourse; if you're just getting started and have finished the Installing section, you may want to first move on to Using Concourse.

Going beyond Encryption, explicit credential management will provide credentials to your builds for a brief amount of time, without being persisted anywhere. It also allows for credentials to be rotated and managed external to the pipeline or team, and prevents them from being revealed by get-pipeline.

Currently, the only supported credential manager is Vault, but more will come in the future.

Credential management works by replacing the credentials with ((parameters)) in your pipeline or task config. When the ATC is about to run the step or check that is configured with the parameters, it will resolve them by fetching the values from the credential manager. If the values are not present, the action will error.
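For example, a hypothetical resource whose private key is supplied at runtime rather than stored in the pipeline config:

```yaml
resources:
- name: my-repo
  type: git
  source:
    uri: git@github.com:example/repo.git
    # Resolved from the credential manager when the resource is used;
    # never written to the database or shown by get-pipeline.
    private_key: ((github_private_key))
```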

What can be parameterized?

Values in your pipeline and task configuration that are expected to contain credentials (such as resource sources and step params) can be parameterized with a credential manager.

Where these values are looked up and how the credential manager is configured depends on the backend. Consult the relevant section below for whichever backend you want to use.

Using Vault


The ATC is statically configured with a Vault server URL (plus any TLS config), and either a client token or an auth backend.

For example, to point the ATC at an internal Vault server with TLS signed by a local CA, using an AppRole auth backend, you may configure:

concourse atc ... \
  --vault-url https://vault.example.com:8200 \
  --vault-ca-cert /etc/my-ca.cert \
  --vault-auth-backend approle \
  --vault-auth-param role_id=db02de05-fa39-4855-059b-67221c5c2f63 \
  --vault-auth-param secret_id=6a174c20-f6de-a53c-74d2-6018fcceff64

You may instead want to use the TLS auth backend, in which case you would specify a client certificate and private key, like so:

concourse atc ... \
  --vault-url https://vault.example.com:8200 \
  --vault-ca-cert /etc/my-ca.cert \
  --vault-client-cert /etc/my-client.cert \
  --vault-client-key /etc/my-client.key \
  --vault-auth-backend cert

In this case no params are necessary, as the TLS auth backend will check the certificate against all roles if no name is specified.

Alternatively, if you've got a periodic token created, you can pass it directly as --vault-client-token. Make sure the period is long enough to account for any ATC downtime, however, including the time between generating it and getting the ATC running.

concourse atc ... \
  --vault-url https://vault.example.com:8200 \
  --vault-ca-cert /etc/my-ca.cert \
  --vault-client-token c2c2fbd5-2893-b385-6fa5-30050439f698

For all of these configurations, the ATC will periodically renew its token, ensuring it doesn't expire.

Credential Lookup Rules

When resolving a parameter such as ((foo_param)), it will look in the following paths, in order:

  • /concourse/TEAM_NAME/PIPELINE_NAME/foo_param

  • /concourse/TEAM_NAME/foo_param

The leading /concourse can be changed by specifying --vault-prefix.

If the action is being run in the context of a pipeline (e.g. a check or a step in a build of a job), the ATC will first look in the pipeline path. If it's not found there, it will look in the team path. This allows credentials to be scoped widely if they're common across many pipelines.

If an action is being run in a one-off build, the ATC will only look in the team path.
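The lookup order can be sketched as a small helper that prints the candidate paths the ATC would try for a parameter (illustrative only, not Concourse code):

```shell
# Print the Vault paths tried, in order, for a parameter in a given scope.
candidate_paths() {
  team=$1; pipeline=$2; param=$3
  prefix=${VAULT_PREFIX:-/concourse}   # changed via --vault-prefix
  # The pipeline-scoped path only applies in a pipeline context;
  # one-off builds skip straight to the team path.
  [ -n "$pipeline" ] && echo "$prefix/$team/$pipeline/$param"
  echo "$prefix/$team/$param"
}

candidate_paths main my-pipeline foo_param
# /concourse/main/my-pipeline/foo_param
# /concourse/main/foo_param
```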

Caveat: multiple fields of a rotating key

When a key has multiple fields and is re-generated on every fetch, for example when using the AWS secret backend, the ATC will currently fetch the credential once for each parameter, resulting in fields that don't match each other. This will be fixed in an upcoming release of Concourse.

Using Concourse

Now that you're all set up and have a Concourse running, it's time to see what this thing can do.

As per tradition, there is a pretty useless Hello, world! tutorial. This will at least show the basics of fly, which you can read more about in great detail at The Fly CLI, so it's still worth a read.

Hello, world!

Once you've got your Concourse all set up, you'll want to download the Fly CLI from your instance via the web UI. You can find the download links by browsing to the main page:

If you're on Linux or OS X, you will have to chmod +x the downloaded binary and put it in your $PATH. This can be done in one fell swoop with install path/to/fly /usr/local/bin/fly.

Throughout this page we'll assume you've spun up the concourse/lite Vagrant box, but if you've set up Concourse in some other way, everything still applies; just use your own URL instead.

Once you have fly, you'll want to target your local VirtualBox like so:

$ fly -t lite login -c http://192.168.100.4:8080

This will save the target as lite, which we'll use for all later commands that we want to run against the local box by using the same -t flag.

Out of the box, Concourse doesn't have any Pipelines configured. That's boring. Here, have this YAML!


jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}
      run:
        path: echo
        args: ["Hello, world!"]

Dump that into a file called hello.yml, and run the following:

$ fly -t lite set-pipeline -p hello-world -c hello.yml

If you refresh the main page, you should now see the world's tiniest pipeline!

Newly configured pipelines are paused by default (indicated by the blue bar along the top of the page), to prevent them from running freely when you're not sure what they do yet. To unpause it, you can either reveal the pipelines sidebar via the button in the top left and press the play button, or run:

$ fly -t lite unpause-pipeline -p hello-world

You can always fetch the current configuration of a pipeline by running:

$ fly -t lite get-pipeline -p hello-world

This pipeline configuration so far describes a single job, whose plan has a single task. There are no resources for this job to automatically trigger so we'll have to give it a helping hand. Click the job and then press the + button in the top right. The task boots up, prints Hello, world!, and completes successfully:

Let's spice things up a bit. A pipeline with a single job with no resources isn't very interesting, so let's add a simple resource in the form of a timed trigger, using the Time resource.


resources:
- name: every-1m
  type: time
  source: {interval: 1m}

jobs:
- name: navi
  plan:
  - get: every-1m
    trigger: true
  - task: annoy
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}
      run:
        path: echo
        args: ["Hey! Listen!"]

Run the following to reconfigure the existing pipeline:

$ fly -t lite set-pipeline -p hello-world -c navi-pipeline.yml

Your pipeline should now look like this:

Every minute, the every-1m resource will yield a new version. This resource is used in a get step in the navi job, which sets trigger to true, so as a result the job will run every minute.

Now may be a good time to start learning about the different types of resources or start looking at Tutorials.


Tutorials

It can be hard to learn a new tool without having something you want to accomplish. These tutorials will at least guide you through particular use cases to show you how you'd use Concourse in real-world scenarios.

You may also want to just dive right in to some living, breathing Concourse pipelines. These aren't accompanied by documentation explaining them, but it may be useful just to see some real-world use cases:

Flight School

Welcome, Cadet.

This walkthrough assumes you've already got a Concourse set up and ready to go. If you don't, the most important thing is that you don't panic: go through the installation guide and then come back here.

Getting Set Up

First, let's get you a copy of your training materials.

Fork this repository on GitHub into your own account. This will give you somewhere to keep your work.

Next, clone your fork down to your local machine by running this command:

$ git clone https://github.com/(your username)/flight-school

This is a simple Ruby project that will let you get a feel for flying. Let's check everything is working by running the tests locally.

$ cd flight-school
$ bundle install
$ bundle exec rspec

If everything is working then you should see a few green dots on the screen to let you know that the tests are passing. If something failed then make sure to fix it and get the tests green before continuing. For example, if you're missing the bundle executable then you'll need to run gem install bundler.

First Steps

Ok, we've got a project. We want a pipeline. It's important to build a pipeline up gradually rather than attempting to do the whole thing at once. Let's start at the start by running those unit tests we just ran in Concourse.

Download fly from your Concourse. You can find download links on your Concourse installation's main page: either at the bottom right or, if you don't have any pipelines yet, in the middle of the page.

$ mkdir -p $HOME/bin
$ install $HOME/Downloads/fly $HOME/bin

Make sure that $HOME/bin is in your path.
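If it isn't already, you can add it for your current shell session like so (add the same line to your shell profile to make it permanent):

```shell
# prepend $HOME/bin so the freshly installed fly binary is found first
export PATH="$HOME/bin:$PATH"
```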

You should now be able to run fly --help to see if everything is working.

Ok, let's target and log in to our Concourse.

$ fly -t ci login -c (your concourse URL)

The -t flag is the name we'll use to refer to this instance in the future. The -c flag is the concourse URL that we'd like to target.

Depending on the authentication setup of your Concourse it may prompt you for various credentials to prove you are who you say you are.

Right, let's try running the current project in Concourse.

$ fly -t ci execute
error: the required flag '-c, --config' was not specified

Huh. Well then, let's give it that flag with a file that it wants.

$ fly -t ci execute -c build.yml
error: invalid argument for flag '-c, --config' (expected flaghelpers.PathFlag): path 'build.yml' does not exist

Alright, so we need a file called build.yml. Let's create it under the flight-school directory.

$ touch build.yml
$ fly -t ci execute -c build.yml
error: invalid task configuration:
  missing 'platform'
  missing path to executable to run

Surprise, surprise - looks like we can't just give it an empty file. We need to write a task definition. A task definition describes a unit of work to Concourse so that it can execute it.

platform: linux

image_resource:
  type: docker-image
  source: {repository: busybox}

inputs:
- name: flight-school

run:
  path: ./flight-school/ci/

Let's go through this line by line:

  • The platform simply states that we would like this task to run on a Linux worker.

  • The image_resource section declares the image to use for the task's container. It is defined as a resource configuration.

  • The inputs section defines a set of things that we need in order for our task to run. In this case, we need the flight-school source code in order to run the tests on it.

  • The final run section describes how Concourse should run the task. By default Concourse will run your script with a current working directory containing all of your inputs as subdirectories.

Ok, let's save that in build.yml and run our command again.

$ fly -t ci execute -c build.yml
executing build 107401
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 40960    0 40960    0     0   662k      0 --:--:-- --:--:-- --:--:--  800k
running flight-school/ci/
exec failed: exec: "./flight-school/ci/": stat ./flight-school/ci/ no such file or directory

Ok, so what happened here? We started a build, uploaded the flight-school input, and then tried to run the flight-school/ci/ script, which isn't there. Oops! Let's write it.


#!/bin/bash

set -e -x

pushd flight-school
  bundle install
  bundle exec rspec
popd
This is basically the same set of commands we ran above to run the tests locally. The new bits at the start set a few things up. The #!/bin/bash is a shebang line that tells the operating system that when we execute this file, it should be run using the /bin/bash interpreter. The set -e -x line sets a couple of bash options: -e makes the entire script fail if any single command fails (by default, a script keeps executing after a failure), which is generally desirable in CI, and -x prints each command as it's run (also desirable in CI).
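To see what -e changes in isolation, here's a toy comparison (our own example, not part of the flight-school repo):

```shell
# without -e, bash keeps going after a failed command:
bash -c 'false; echo "still running"'
# → still running

# with -e, the first failure aborts the script before the echo:
bash -c 'set -e; false; echo "still running"' || echo "aborted early"
# → aborted early
```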

Let's give this new script a whirl.

$ fly -t ci execute -c build.yml
executing build 107401
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 40960    0 40960    0     0   662k      0 --:--:-- --:--:-- --:--:--  800k
running flight-school/ci/
exec failed: exec: "./flight-school/ci/": permission denied

This error message means that the script we told it to run is not executable. Let's fix that.

$ chmod +x ci/

Running again gives us:

$ fly -t ci execute -c build.yml
executing build 107401
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 40960    0 40960    0     0   662k      0 --:--:-- --:--:-- --:--:--  800k
running flight-school/ci/
exec failed: no such file or directory

This message is a little obscure. It's complaining that the shebang (/bin/bash) can't find the interpreter. In our task config, we specified the busybox Docker image, which is a tiny, un-opinionated operating system image that doesn't contain bash. This isn't very useful for running builds so let's pick one that is.

Docker maintains a collection of Docker images for common languages. Let's use the ruby image at version 2.4.1. We can specify that the task should run with this image by updating the image_resource block in our build.yml like so:

image_resource:
  type: docker-image
  source:
    repository: ruby
    tag: 2.4.1

Let's try running that.

$ fly -t ci execute -c build.yml
executing build 107418
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 40960    0 40960    0     0   7657      0 --:--:--  0:00:05 --:--:--  9884
Pulling ruby@sha256:f7cb2fda8271b272f9adb5f396937c09499113a2e5299e871efb04195dabfc96...
sha256:f7cb2fda8271b272f9adb5f396937c09499113a2e5299e871efb04195dabfc96: Pulling from library/ruby
10a267c67f42: Pulling fs layer
[... docker output ...]
b8b6e6204a6b: Pull complete
Digest: sha256:f7cb2fda8271b272f9adb5f396937c09499113a2e5299e871efb04195dabfc96
Status: Downloaded newer image for ruby@sha256:f7cb2fda8271b272f9adb5f396937c09499113a2e5299e871efb04195dabfc96

Successfully pulled ruby@sha256:f7cb2fda8271b272f9adb5f396937c09499113a2e5299e871efb04195dabfc96.

running ./flight-school/ci/
+ pushd flight-school
/tmp/build/e55deab7/flight-school /tmp/build/e55deab7
+ bundle install
Fetching gem metadata from
Fetching version metadata from
Installing public_suffix 2.0.5
Installing backports 3.7.0
Installing safe_yaml 1.0.4
Installing diff-lcs 1.3
Installing hashdiff 0.3.2
Installing multi_json 1.12.1
Installing rack 1.6.5
Installing rspec-support 3.5.0
Installing tilt 2.0.7
Using bundler 1.14.6
Installing addressable 2.5.1
Installing crack 0.4.3
Installing rack-protection 1.5.3
Installing rack-test 0.6.3
Installing rspec-core 3.5.4
Installing rspec-expectations 3.5.0
Installing rspec-mocks 3.5.0
Installing webmock 2.3.2
Installing sinatra 1.4.8
Installing rspec 3.5.0
Installing sinatra-contrib 1.4.7
Bundle complete! 5 Gemfile dependencies, 21 gems now installed.
Bundled gems are installed into /usr/local/bundle.
+ bundle exec rspec

Randomized with seed 17037

Finished in 1.01 seconds (files took 0.35998 seconds to load)
9 examples, 0 failures

Randomized with seed 17037

+ popd

Woohoo! We've run our unit tests inside Concourse. Now is a good time to commit and push.

In general, try to think in terms of small, reusable tasks that perform a simple action with the inputs they're given. If a task ends up having too many inputs, that may be a smell that it's doing too much, just as a function with too many parameters in a program would be. In fact, that's a good way to think about tasks: they're functions that take inputs as parameters. Keeping them small and simple lets you easily run them from your local machine as above.

Please excuse the long-winded iterative process we used to get to the final result. Once you've written a few tasks you'll be able to skip straight to the end; we felt it was important to walk through the hurdles you may encounter on your journey.

Starting a Pipeline

Ok, so. We have a task we can run. How about we run that every time the code changes so that we can check to see when anything breaks. Enter: pipelines.

Pipelines are built up from resources and jobs. Resources are external, versioned things such as Git repositories or S3 buckets, and jobs are groupings of resources and tasks that actually do the work in the system.

resources:
- name: flight-school
  type: git
  source:
    uri: https://github.com/(your username)/flight-school
    branch: master

jobs:
- name: test-app
  plan:
  - get: flight-school
  - task: tests
    file: flight-school/build.yml

Uploading that:

$ fly -t ci set-pipeline -p flight-school -c ci/pipeline.yml
pipeline created!
you can view your pipeline here: https://(your concourse url)/pipelines/flight-school

the pipeline is currently paused. to unpause, either:
  - run the unpause-pipeline command
  - click play next to the pipeline in the web ui

Follow the instructions to unpause the pipeline.

Click the job. Then click run.

It runs!

Running Continuously

Add trigger: true to the get. Any new versions will trigger the job.

- name: test-app
  plan:
  - get: flight-school
    trigger: true
  - task: tests
    file: flight-school/build.yml

Try pushing a commit to the repository. For extra credit push a commit that breaks the build and then another one that fixes it again.

Extending the Pipeline

What you've seen so far is the very essence of the way pipelines work in Concourse: a new version of a resource appears and then a job runs to do something with that new version. This is the very start of a pipeline. Let's keep adding to it and have it do more things for us.

Deploying your Application

We've shown ourselves that we have a working application. Let's show the rest of the world by deploying the application to a PaaS (Platform as a Service).

The author of this walkthrough is most familiar with Pivotal Web Services (PWS), so that's what this guide will use. Feel free to use whichever deployment environment you'd like (Heroku, etc.); the mechanics and placement in the pipeline will be similar for all of them.

Follow this guide to get yourself set up with a PWS account and push some sample applications. There's a 60 day free trial so don't worry about this costing you anything.

Now that you're familiar with how to deploy an application to PWS, let's have the pipeline do all the hard work for us. First, let's add a resource for the pipeline to interact with.

- name: staging-app
  type: cf
  source:
    api: https://api.run.pivotal.io
    username: ((your cf username))
    password: ((your cf password))
    organization: ((your cf organization))
    space: ((your cf space))

Then, in our job, we can push our application.

- name: test-app
  plan:
  - get: flight-school
    trigger: true
  - task: tests
    file: flight-school/build.yml
  - put: staging-app
    params:
      manifest: flight-school/manifest.yml

Now, if we run the job again, we'll see that after the tests pass, Concourse deploys our application for the world to see. Try pushing some commits to the repository and watching them be automatically tested and deployed. (For example, try adding or removing a U.S. airport code from the list.)

Adding and Integrating a Second Component

Coming soon!

Next Steps

By now you have a good grasp on the practical side of using Concourse. For a more in-depth explanation of how things work you should read the two documents on the Mechanics of a Pipeline and Anatomy of a running task.

The rest of the documentation on the site provides reference material for Concourse. If you'd like to find out more about what attributes you can attach to a step in a plan or all the options that you can pass to fly then the links on the left hand side are the best place to start.

And, as always, if you can't find what you're looking for in the documentation then we're around on Slack during work hours and should be able to help you if you have further questions.

Versioned S3 Artifacts

This document will guide you through a pipeline modeled on a fairly common real-world use case of pushing tested, built artifacts into S3 buckets.

The end-to-end scenario is to monitor a Git repository for commits, and when new commits are detected, run its unit tests.

If the unit tests pass, the pipeline will then create a new release candidate artifact with automated versioning, and place it in an S3 bucket.

From there, the pipeline will run integration tests against the release candidate, and if those pass, it will create a final artifact and "ship it" by putting it in a different S3 bucket.

The resulting pipeline will look like this:

First, we'll define our resources. These are the objects used in our pipeline. The resources configuration simply enumerates each of their locations.


Our first resource will be the location of our product's source code. Let's pretend it lives in a Git repo, and so we'll use the git resource type.

The git resource type requires two source parameters: uri and branch. We're using an SSH URI, so we'll also need to specify private_key.

To avoid embedding credentials in the pipeline config, we'll use a parameter.

- name: my-product
  type: git
  source:
    uri: git@github.com:your-org/my-product.git  # example SSH URI
    branch: master
    private_key: ((my-product-github-private-key))

We'll need a resource to represent the semantic version of our product, which we'll use to generate release candidates, and bump every time we ship. For this we'll use the semver resource type.

Currently, semver resources keep track of the version as a file in an S3 bucket, so we'll need to specify the credentials for the bucket, and a name for the file.

If your product already has a version number, you can specify it as initial_version. If not specified, the version will start as 0.0.0.

- name: version
  type: semver
  source:
    bucket: my-product-pipeline-artifacts
    key: current-version
    access_key_id: ((s3-access-key-id))
    secret_access_key: ((s3-secret-access-key))
    initial_version: 1.0.0

Let's define the resource for storing our product's release candidate artifacts generated by the pipeline. This is done with the s3 resource type.

The s3 resource type is minimally configured with a bucket name and a regexp, which will be used to match files in the bucket and order them by the version number extracted by the first capture group.

Since we'll be writing objects into this bucket, we'll need to configure it with AWS credentials.

- name: my-product-rc
  type: s3
  source:
    bucket: my-product-pipeline-artifacts
    regexp: my-product-(.*).tgz
    access_key_id: ((s3-access-key-id))
    secret_access_key: ((s3-secret-access-key))
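As a rough illustration of how that regexp treats object keys (the resource applies the pattern itself; this sed sketch just shows what the capture group extracts, with the dot escaped for strictness):

```shell
# a key like my-product-0.2.0.tgz matches, and the first capture
# group extracts the version used for ordering:
echo "my-product-0.2.0.tgz" | sed -E 's/^my-product-(.*)\.tgz$/\1/'
# → 0.2.0
```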

We'll need one more s3 resource to represent shipped artifacts.

- name: my-product-final
  type: s3
  source:
    bucket: my-product
    regexp: my-product-(.*).tgz
    access_key_id: ((s3-access-key-id))
    secret_access_key: ((s3-secret-access-key))

Now that we've got all our resources defined, let's move on to defining the functions to apply to them, represented as jobs.


Our first job will run the unit tests for our project. This job will fetch the source code via a get step with the my-product resource, and execute the task configuration file living in the repo under ci/unit.yml using a task step.

We set trigger: true on the get step so that it automatically triggers a new unit build whenever new commits are pushed to the my-product repository.

- name: unit
  plan:
  - get: my-product
    trigger: true
  - task: unit
    file: my-product/ci/unit.yml

Our pipeline now does something! But we're not quite delivering artifacts yet.

Let's consider anything making it past the unit tests to be a candidate for a new version to ship. We'll call the job that builds candidate artifacts build-rc.

Because this job makes modifications to our product version, we'll want to make sure it doesn't run concurrently with anything else doing the same thing. Otherwise we may generate versions or release candidates out of order.

This is done by specifying serial_groups, which is a list of arbitrary tags. We'll make sure to list the same tags in the other jobs which modify the version.

- name: build-rc
  serial_groups: [version]
  plan:

First, let's be sure to only grab versions of my-product that have passed unit tests. Let's have new occurrences of these versions also trigger new builds, while we're at it.

  - get: my-product
    passed: [unit]
    trigger: true

We'll also need a new release candidate version number. For this, the semver resource type can be used to generate versions by specifying params in the get step.

Specifying pre: rc makes it so that if the current version is e.g. 1.2.3-rc.3, we'll get 1.2.3-rc.4.

  - get: version
    params: {pre: rc}

Now, we'll execute our build-artifact task configuration, which we'll assume has two inputs (my-product and version) and produces a file named my-product-VERSION.tgz in an output called built-artifact when executed.

  - task: build-artifact
    file: my-product/ci/build-artifact.yml
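The build-artifact.yml config itself lives in the product repo and isn't shown in this document; under the assumptions above it might look roughly like this (the ubuntu image and the ci/build-artifact.sh script name are our own illustrative choices):

```yaml
platform: linux

image_resource:
  type: docker-image
  source: {repository: ubuntu}

inputs:
- name: my-product
- name: version

outputs:
- name: built-artifact

run:
  # hypothetical script that reads version/number and writes
  # built-artifact/my-product-VERSION.tgz
  path: my-product/ci/build-artifact.sh
```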

Now that we have a tarball built, let's put it up to the pipeline artifacts S3 bucket via the my-product-rc resource defined above.

Note that the file param refers to the .tgz within the built-artifact output produced by the task above.

  - put: my-product-rc
    params: {file: built-artifact/my-product-*.tgz}

We'll also need to push up the newly bumped version number, so that next time we bump it'll be based on this new one.

Note that the file param points at the bumped version fetched by the get: version step above.

  - put: version
    params: {file: version/number}

Now we're cooking with gas. But still, we haven't shipped any actual versions of the project yet: only candidates! Let's move on to the later stages in the pipeline.

Let's assume there's some more resource-intensive integration suite that uses our product as a black box. This will be the final set of checks and balances before shipping actual versions.

Let's assume this suite has to talk to some external environment, and so we'll configure the job with serial: true here to prevent concurrent builds from polluting each other.

- name: integration
  serial: true
  plan:

For the integration job, we'll need two things: the candidate artifact, and the repo that it came from, which contains all our CI scripts.

Note that this usage of passed guarantees that the versions of my-product and my-product-rc came out of the same build of build-rc. See get for more information.

  - get: my-product-rc
    trigger: true
    passed: [build-rc]
  - get: my-product
    passed: [build-rc]

We'll now run the actual integration task. Since it has to talk to some external environment, we'll use params to forward its credentials along to the task. See task for more information.

Again we'll use parameters in the config file to prevent hardcoding them.

  - task: integration
    file: my-product/ci/integration.yml
    params:
      API_ENDPOINT: ((integration-api-endpoint))
      ACCESS_KEY: ((integration-access-key))

At this point in the pipeline we have artifacts that we're ready to ship. So let's define a job that, when manually triggered, takes the latest candidate release artifact and publishes it to the S3 bucket containing our shipped product versions.

We'll call the job shipit. Since it'll also be modifying the version, we'll place it in the same serial group we specified for build-rc.

- name: shipit
  serial_groups: [version]
  plan:

Similar to the integration job, we'll once again need both our source code and the latest release candidate, this time having passed integration together.

Note that we have not specified trigger: true this time - this is because with a typical release-candidate pipeline, the shipping stage is only ever manually kicked off.

  - get: my-product-rc
    passed: [integration]
  - get: my-product
    passed: [integration]

Now we'll need to determine the final version number that we're about to ship. This is once again done by specifying params when fetching the version.

This time, we'll only specify bump as final. This means "take the version number and chop off the release candidate bit."

  - get: version
    params: {bump: final}
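To illustrate the semantics (this is just a shell sketch of the versioning rule, not how the resource implements it):

```shell
# bump: final takes a prerelease version and drops the suffix:
echo "1.2.3-rc.4" | cut -d- -f1
# → 1.2.3
```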

Next, we'll need to convert the release candidate artifact to a final version.

This step depends on the type of product you have; in the simplest case it's just a matter of renaming the file, but you may also have to rebuild it with the new version number, or push dependent files, etc.

For the purposes of this example, let's assume we have a magical task that does it all for us, and leaves us with a file called my-product-VERSION.tgz in a built-product output, just as with the build-rc job before.

  - task: promote-to-final
    file: my-product/ci/promote-to-final.yml

And now for the actual shipping!

  - put: my-product-final
    params: {file: built-product/my-product-*.tgz}
  - put: version
    params: {file: version/number}

This is all well and good, but you may have noticed it's only good for shipping one version. We only ever go from release candidates to final versions, and never do any actual version bumps!

Bumping your product's version is very much a human decision, so for this example we'll just assume the product manager will come in and decide what the next version should be at some point.

The simplest way to implement this is to have three jobs: one for doing a major bump, one for doing a minor bump, and one for doing a patch bump. These three jobs will do both the bump itself, and immediately bump to -rc.1 of the new version. This is done by specifying both bump and pre params.

The major job simply has a put for bumping the version in-place. Because it's modifying the version, we'll use serial_groups to ensure it doesn't run concurrently with build-rc, shipit, and the other bump jobs.

- name: major
  serial_groups: [version]
  plan:
  - put: version
    params: {bump: major, pre: rc}

The minor job is basically the same, but with bump: minor instead, unsurprisingly.

- name: minor
  serial_groups: [version]
  plan:
  - put: version
    params: {bump: minor, pre: rc}

The patch job will follow the same approach, but with one twist: we want to immediately bump to the next patchlevel release candidate after shipping.

This is so the pipeline can start generating candidates for a new version without requiring the product manager to decide the version to target next. We do a patch bump just because it's the most conservative bump we can make for the next release before knowing what'll be in it.

We'll have the patch job auto-trigger by having a dummy get action that depends on something having made it through the shipit job, with trigger: true. We'll use the version resource for this since it's the smallest thing coming out of the shipit job.

- name: patch
  serial_groups: [version]
  plan:
  - get: version
    passed: [shipit]
    trigger: true
  - put: version
    params: {bump: patch, pre: rc}

The Fly CLI

The fly tool is a command line interface to Concourse. It is used for a number of tasks from connecting to a shell in one of your build's containers to uploading new pipeline configuration into a running Concourse. Learning how to use fly will make using Concourse faster and more useful.

You can download fly from any Concourse installation. There are download links for common platforms in the bottom right hand corner of the main page.

Throughout this section we'll stick to the long-form name of every command and flag. Once you've learned what the commands do, you may want to consult fly -h to learn the short forms.

login: Authenticating with and saving Concourse targets

The first thing you'll want to do is authenticate with your target. This is done with the login command:

The login command serves double duty: it authenticates with a given endpoint, and saves it under a more convenient name. The name and token are stored in ~/.flyrc (though you shouldn't really edit the file manually).

Concourse deployments can be occupied by multiple teams. To specify the team to which to log in, specify the --team-name or -n flag. If not specified, this defaults to main.

So, to log in to a team my-team on an endpoint served at https://ci.example.com, saving it as the more convenient name example, you would run:

$ fly --target example login --team-name my-team \
    --concourse-url https://ci.example.com

The login command will see which authentication methods are available for the specified team and prompt you to choose one. For basic auth, it will ask for your username and password and use them to acquire a token. For OAuth, it will give you a link to click, and after you've gone through the OAuth flow it will print a token on the page that you can then copy and paste into the prompt.

Note that if no authentication methods are configured (e.g. when targeting a concourse/lite Vagrant box), fly will acquire a token without any prompting. You can then use the alias like normal.

In any case, a token is saved in your ~/.flyrc; the token expires after one day.

If your Concourse uses SSL but does not have a certificate signed by a trusted CA, you can use the --ca-cert flag so that fly can trust the connection, like so:

$ fly -t example login -c https://ci.example.com --ca-cert ./ca.crt

This will read the value out of the file ./ca.crt and save it into ~/.flyrc so you don't have to pass it on every login invocation.

After you've logged in you can use --target example (or -t example for short) to run a command against the saved target example. For example, fly -t example builds will list the last few builds on the example Concourse instance.

The -t flag is intentionally stateless and must be explicitly added to each command. This reduces the risk of accidentally running a command against the wrong environment when you have multiple targets defined.

logout: Remove authentication and delete Concourse targets

There are cases when you would like to remove all evidence of a particular target. This is achieved by the logout command. There are two variants of this command, one to get rid of a specific target, and another to remove all targets from the ~/.flyrc file.

To remove a specific target run:

$ fly -t example logout

To remove all targets run:

$ fly logout -a

Note: These two variations are mutually exclusive. If the target parameter -t and all parameter -a are both specified, an error will occur.

targets: List the current targets

To see what targets are currently known to fly, run:

$ fly targets

This will show each target's name, URL, and when its token expires.

sync: Update your local copy of fly

Occasionally we add features to fly or make changes to the communication between it and Concourse's API server. To make sure you're running the latest version of fly that works with the Concourse you are targeting, we provide a command called sync that updates your local fly. It can be used like so:

$ fly -t example sync

If you see anything odd, for example the output of builds not rendering or certain commands throwing errors, be sure to run this command before investigating any further; it's pretty likely that something has changed with the API, especially while we're pre-1.0.

teams: Listing configured Teams

To list all the teams, run:

$ fly -t example teams

This can be useful if you've forgotten your team name.

set-team: Creating and updating Teams

To create a new team or update an existing team's auth, first log in as the main team, and then run:

$ fly -t example set-team --team-name my-team \
    --basic-auth-username foo \
    --basic-auth-password bar

Teams can also update their own configuration with the same command.

There are many different ways to configure auth; see Configuring Auth for more information.

destroy-team: Removing Teams

To remove a team, including all of its pipelines and one-off builds, first log in as the main team, and then run:

$ fly -t example destroy-team --team-name my-team

Currently, if there were any workers assigned specifically to this team, they'll be orphaned, without having their containers or volumes cleaned up.

execute: Submitting Local Tasks

One of the most common use cases for fly is taking a local project on your computer and submitting it, along with a task configuration, to be run inside a container in Concourse. This is useful for building Linux projects on OS X, or for avoiding all of those debugging commits when something is configured differently between your local and remote setups.

You can execute a task like this:

$ fly -t example execute --config tests.yml

Your files will be uploaded and the task will be executed with them. By default, the base name of your working directory is used as the input name. If the task's input name doesn't match, you must specify -i name=. instead, where name is the input name from the task configuration.

Fly will automatically capture SIGINT and SIGTERM and abort the build when received. This allows it to be transparently composed with other toolchains.

If you have many extra files or large files in your current directory that would normally be ignored by your version control system, then you can use the -x or --exclude-ignored flags in order to limit the files that you send to Concourse to just those that are not ignored.

If your task needs to run as root then you can specify the -p or --privileged flag.

Providing multiple inputs

Tasks in Concourse can take multiple inputs. Up until now we've just been submitting a single input (our current working directory) that has the same name as the directory.

Tasks must specify the inputs that they require (for more information, refer to the configuring tasks documentation). For fly to upload these inputs you can use the -i or --input arguments with name and path pairs. For example:

$ fly -t example execute --input code=. --input stemcells=../stemcells

This would work together with a task.yml if its inputs: section was as follows:

inputs:
- name: code
- name: stemcells

If you specify an input then the default input will no longer be added automatically and you will need to explicitly list it (as with the code input above).

This feature can be used to mimic other resources and try out combinations of input that would normally not be possible in a pipeline.

Basing inputs on a job in your pipeline with --inputs-from

If the --inputs-from flag is given, the specified job will be looked up in the pipeline, and the one-off build will base its inputs on those currently configured for the job.

If any --input flags are given (see above), they will override the base set of inputs.

For example:

$ fly -t example execute --config task.yml --inputs-from main/integration --input foo=./foo

This will trigger a one-off-build using the task.yml task config, basing its inputs on the latest candidates for the integration job in the main pipeline, with the foo input overridden to specify local code to run.

This can be used to more closely replicate the state in CI when weeding out flakiness, or as a shortcut for local development so that you don't have to upload every single resource from your local machine.

Taking artifacts from the build with --output

If a task specifies outputs then you're able to extract them from the build back to your local system. For example:

$ fly -t example execute --input code=. --config build-stemcell.yml --output stemcell=/tmp/stemcell

This would work together with a task.yml if its outputs: section was as follows:

- name: stemcell

This feature is useful to farm work out to your Concourse server to build things in a repeatable manner.

Providing values for params

Any params listed in the task configuration can be specified by using environment variables.

So, if you have a task with the following params:

params:
  FOO: fizzbuzz
  BAR:

...and you run:

BAR=hello fly execute

The task would then run with BAR as "hello", and FOO as "fizzbuzz" (its default value).

Targeting a specific worker with --tag

If you want to execute a task on a worker that has a specific tag, you can do so by passing --tag:

fly -t example execute --config task.yml --tag bar

This will execute the task specified by task.yml on a worker that has been tagged bar.

pipelines: Listing configured pipelines

To list the currently-configured pipelines and their paused state, run:

$ fly -t example pipelines

rename-pipeline: Rename a pipeline

To rename a pipeline, run:

$ fly -t example rename-pipeline --old-name my-pipeline --new-name my-cool-pipeline

pause-pipeline: Preventing new pipeline activity

To pause a pipeline, run:

$ fly -t example pause-pipeline --pipeline my-pipeline

This will prevent jobs from being scheduled and stop the periodic checking for new versions of resources. Builds that are in-flight will still finish.

unpause-pipeline: Resuming pipeline activity

To unpause a pipeline, run:

$ fly -t example unpause-pipeline --pipeline my-pipeline

This will resume job scheduling and resource checking.

expose-pipeline: Making a pipeline publicly viewable

By default, newly configured pipelines are only visible to the pipeline's team. To make a pipeline viewable by other teams and unauthenticated users, run:

$ fly -t example expose-pipeline --pipeline my-pipeline

This feature is useful if you're using Concourse for an open source project and you'd like your community to be able to see into your build pipeline.

To undo this change, see hide-pipeline.

Exposing a pipeline reveals basically everything but the output of builds. The pipeline will be publicly viewable, as well as resource versions and metadata, which includes things like commit messages. Build output will remain hidden by default unless the job configures public: true.

hide-pipeline: Hiding a pipeline from the public

If you realize that you've made a terrible mistake in exposing your pipeline, you can run:

$ fly -t example hide-pipeline --pipeline my-pipeline

If you're panicking you can run the command's short form, hp, instead.

get-pipeline: Fetching a pipeline's configuration

Fly can be used to fetch and update the configuration for your pipelines via the get-pipeline and set-pipeline commands. For example, to fetch the current configuration of your my-pipeline Concourse pipeline and print it to STDOUT, run the following:

$ fly -t example get-pipeline --pipeline my-pipeline

To get JSON instead of YAML you can use the -j or --json argument. This can be useful when inspecting your config with jq.

set-pipeline: Configuring Pipelines

To submit a pipeline configuration to Concourse from a file on your local disk you can use the -c or --config flag, like so:

$ fly -t example set-pipeline --config pipeline.yml --pipeline my-pipeline

This will present a diff of the changes and ask you to confirm the changes. If you accept then Concourse's pipeline configuration will switch to the pipeline definition in the YAML file specified.


The pipeline configuration can contain template variables in the form of ((foo-bar)). They will be replaced with YAML values populated by repeated --var or --load-vars-from flags.

This allows for credentials to be extracted from a pipeline config, making it safe to check in to a public repository or pass around.

For example, if you have a pipeline.yml as follows:

resources:
- name: private-repo
  type: git
  source:
    uri: git@...
    branch: master
    private_key: ((private-repo-key))

...you could then configure this pipeline like so:

$ fly -t example set-pipeline --pipeline my-pipeline --config pipeline.yml --var "private-repo-key=$(cat id_rsa)"

Or, if you had a credentials.yml as follows:

private-repo-key: |
  -----BEGIN RSA PRIVATE KEY-----
  ...
  -----END RSA PRIVATE KEY-----

...you could configure it like so:

$ fly -t example set-pipeline --pipeline my-pipeline --config pipeline.yml --load-vars-from credentials.yml

If both --var and --load-vars-from are specified, the --var flags take precedence.

Values other than strings (e.g. bools, arrays) may also be specified via params. Concatenation is also supported in the form of foo-((bar)).
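As a sketch of both features (the var names environment and parallel-builds here are hypothetical, not part of any standard pipeline):

```yaml
resources:
- name: release-bucket
  type: s3
  source:
    # concatenation: with --var "environment=staging" this becomes releases-staging
    bucket: releases-((environment))

jobs:
- name: unit
  # non-string value: supplied with e.g. --var "parallel-builds=3"
  max_in_flight: ((parallel-builds))
```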

Combining with get-pipeline

You can use process substitution to quickly replicate a pipeline by composing get-pipeline and set-pipeline together like so:

$ fly -t new set-pipeline -c <(fly -t old get-pipeline -p foo) -p foo

While this replication method may be useful for migrating pipelines between Concourse instances, note the following caveats:

get-pipeline fetches the current pipeline configuration with variables already interpolated, so any previous parameterization is lost in the copy.

Replicating a pipeline, unmodified, risks resource contention. For example, you could end up with two pipelines pushing to the same S3 bucket.

If you're starting a new pipeline, we recommend you try to work through the above examples before using this one-step clone.

validate-pipeline: Validate a pipeline's configuration

To validate a local pipeline configuration without submitting it to Concourse, run validate-pipeline:

$ fly validate-pipeline --config pipeline.yml

By default, pipeline errors will cause validate-pipeline to fail, but warnings won't. To fail on both errors and warnings, pass the `--strict` flag.

destroy-pipeline: Removing Pipelines

Every now and then you just don't want a pipeline to be around anymore. Running fly destroy-pipeline will stop the pipeline activity and remove all data collected by the pipeline, including build history and collected versions.

For example, to destroy the my-pipeline pipeline, you would run:

$ fly -t example destroy-pipeline --pipeline my-pipeline

trigger-job: Triggering a new build of a job

To trigger a job, run:

$ fly -t example trigger-job --job my-pipeline/my-job

This will enqueue a new build of the my-job job in the my-pipeline pipeline.

To start watching the newly created build, append the --watch flag like so:

$ fly -t example trigger-job --job my-pipeline/my-job --watch

pause-job: Preventing new job activity

To pause a job, run:

$ fly -t example pause-job --job my-pipeline/my-job

This will prevent pending builds of the job from being scheduled, though builds that are in-flight will still run, and pending builds will still be created as normal.

unpause-job: Resuming job activity

To unpause a job, run:

$ fly -t example unpause-job --job my-pipeline/my-job

This will resume scheduling of builds queued for the job.

check-resource: Trigger discovery of new versions

To force immediate checking for new versions of a resource, rather than waiting for the periodic checking, run:

$ fly -t example check-resource --resource my-pipeline/my-resource

To check from a particular version, including the given version, append the --from flag like so:

$ fly -t example check-resource --resource my-pipeline/my-resource \
    --from ref:abcdef

This can be useful for collecting versions that are older than the current ones, given that a newly configured resource will only start from the latest version.

Note the ref: prefix is resource-dependent. For example, the bosh-io-release resource might use version:11.2 in place of ref:abcdef.

pause-resource: Prevent resource checks

To pause a resource, run:

$ fly -t example pause-resource --resource my-pipeline/my-resource

This will stop the resource from detecting new versions.

unpause-resource: Resume resource checks

To unpause a resource, run:

$ fly -t example unpause-resource --resource my-pipeline/my-resource

This will resume resource version detection.

builds: Showing build history

To list the most recent builds, run:

$ fly -t example builds

To list the builds of a job, run:

$ fly -t example builds -j pipeline-name/job-name

This can be useful for periodically monitoring the state of a job. The output also works well with tools like awk and grep.

By default the most recent 50 builds are shown. To see more builds, use the -c flag, like so:

$ fly -t example builds -c 100

abort-build: Aborting a running build of a job

To abort a build of a job, run:

$ fly -t example abort-build --job my-pipeline/my-job --build 3

This will cancel build 3 of the my-job job in the my-pipeline pipeline.

containers: Listing active containers

To list the active containers across all your workers, run:

$ fly -t example containers

This can be useful when discovering the containers available for intercepting.

intercept: Accessing a running or recent build's steps

Sometimes it's helpful to be on the same machine as your tasks so that you can profile or inspect them as they run, or see the state of the machine at the end of a run. Because Concourse runs tasks in containers on remote machines, this state would typically be hard to access.

To this end, there is a fly intercept command that will give you an interactive shell inside the specified container. Containers are identified by a few things, so you may need to specify a few flags to narrow down the results. If multiple containers match the flags, an interactive prompt will appear, allowing you to disambiguate.

The command fly hijack is an alias of fly intercept. Both can be used interchangeably.

For example, running the following will run a task and then enter the finished task's container:

$ fly -t example execute
$ fly -t example intercept --step build

Be warned, if more than one person is using a Concourse server for running one-off builds then you may end up in a build that you did not expect!

Containers are around for a short time after a build finishes in order to allow people to intercept them.

You can also intercept builds that were run in your pipeline. By using --job, --build, and --step you can intercept a specific step from a build of a job in your pipeline. These flags also have short forms, like so:

$ fly -t example intercept -j some-pipeline/some-job -b some-build -s some-step

Note that --build can be omitted, and will default to the most recent build of the job. One-off builds can be reached by passing in their build ID to --build which can be found on the build list page.

The --step flag can also be omitted; this will let you pick the step interactively if you don't know the exact name.

Resource checking containers can also be intercepted with --check or -c:

$ fly -t example intercept --check some-pipeline/some-resource

A specific command can also be given, e.g. fly intercept ps auxf or fly intercept htop. This allows for patterns such as watch fly intercept ps auxf, which will continuously show the process tree of the current build's task, even as the "current build" changes.

The working directory and any relevant environment variables (e.g. those having come from params) used by the original process will also be used for the process run by intercept.

watch: View logs of in-progress builds

Concourse emits streaming colored logs on the website, but it can be helpful to have the logs available on the command line (e.g. so that they can be processed by other commands).

The watch command can be used to do just this. You can also view builds that are running in your pipeline, or builds that have already finished.

Note that unlike execute, killing fly watch via SIGINT or SIGTERM will not abort the build.

To watch the most recent one-off build, just run fly watch with no arguments. To watch a specific build (one-off or not), pass --build with the ID of the build to watch. This ID is available at the start of execute's output or by browsing to the builds list in the web UI.

By using the --job and --build flags you can pick out a specific build of a job to watch. For example, the following command will either show the archived logs for an old build if it has finished running or it will stream the current logs if the build is still in progress.

$ fly -t example watch --job my-pipeline/tests --build 52

If the --job flag is specified and --build is omitted, the most recent build of the specified job will be selected.

workers: Listing registered workers

To list the currently registered workers, including additional metadata, run:

$ fly -t example workers

This can be useful for monitoring the status of your workers, if you suspect that one keeps dropping out of the pool or getting tasked with too many containers, etc.

prune-worker: Reap a non-running worker

To remove a stalled, landing, landed, or retiring worker, run:

$ fly -t example prune-worker --worker worker-name

This is for those cases where you know a worker is not coming back. Note that running workers cannot be pruned, since they'll just re-register themselves anyway.

volumes: Listing active volumes

To list the active volumes across all your workers, run:

$ fly -t example volumes

This can be useful to observe the caches warming across your cluster, and could be a good indicator of disk use.

checklist: Generate Checkman definition files

Checkman supports retrieving build status from Concourse. If you would like to set this up yourself then there is documentation in the Checkman README about the syntax required. However, fly can generate the Checkman files based on your pipelines using the command below.

$ fly -t example checklist --pipeline my-pipeline > ~/Checkman/my-pipeline


The smallest configurable unit in a Concourse pipeline is a single task. A task can be thought of as a function from inputs to outputs that can either succeed or fail.

Going a bit further, ideally tasks are pure functions: given the same set of inputs, a task should either always succeed with the same outputs or always fail. This is entirely up to your script's level of discipline, however. Flaky tests and dependencies on the internet are the most common sources of impurity.

Once you have a running Concourse deployment, you can start configuring your tasks and executing them interactively from your terminal with the Fly commandline tool.

Once you've figured out your task's configuration, you can reuse it for a Job in your Pipeline.

Configuring a Task

Conventionally a task's configuration is placed in the same repository as the code it's testing, possibly under some ci directory. For a simple Ruby app with unit tests it may be called ci/unit.yml, and looks something like:


platform: linux

image_resource:
  type: docker-image
  source:
    repository: ruby
    tag: '2.1'

inputs:
- name: my-app

run:
  path: my-app/scripts/test

This configuration specifies that the task must run with the ruby:2.1 Docker image with a my-app input, and when the task is executed it will run the scripts/test script in the same repo.

A task's configuration specifies the following:

platform: Required. The platform the task should run on. By convention, windows, linux, or darwin are specified. This determines the pool of workers that the task can run against. The base deployment provides Linux workers.

image_resource: Optional. The base image of the container. This style of specifying the base image has the same effect as rootfs_uri (below) but uses Concourse resources to download the image. The contents of this field should be the same as a resource configuration in your pipeline (without the name).

The following example configures the task to use the golang:1.6 Docker image:

image_resource:
  type: docker-image
  source: {repository: golang, tag: "1.6"}

...and the following example uses an insecure private Docker registry with a username and password:

image_resource:
  type: docker-image
  source:
    repository: my.local.registry:8080/my/image
    insecure_registries: ["my.local.registry:8080"]
    username: myuser
    password: mypass

You can use any resource that returns a filesystem in the correct format (a /rootfs directory and a metadata.json file in the top level) but normally this will be the Docker Image resource. If you'd like to make a resource of your own that supports this please use that as a reference implementation for now.

If you want to use an artifact source within the plan containing an image, you must set the image in the plan step instead.

rootfs_uri: Optional. A string specifying the rootfs URI of the container, as interpreted by your worker's Garden backend.

image_resource is the preferred way to specify the base image; rootfs_uri is not recommended. With rootfs_uri, image fetching is delegated to the backend, which does not guarantee image caching and may result in permission errors. You should only use it if you cannot use image_resource for some reason, and you know what you're doing.

inputs: Optional. The set of artifacts required by the task, determining which artifacts will be available in the current directory when the task runs.

These are satisfied by get steps or by outputs of a previous task in a job's build plan. They can also be provided via -i when using execute.

This determines which artifacts will propagate into the task, as the build plan executes. If any specified inputs are not present, the task will end with an error, without running.

Each input has the following attributes:

name: Required. The logical name of the input.

path: Optional. The path where the input will be placed. If not specified, the input's name is used.

Paths are relative to the working directory of the task. Absolute paths are not respected.
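As a sketch, an inputs: section using a custom path might look like the following (the input names here are hypothetical):

```yaml
inputs:
- name: my-app            # available at ./my-app
- name: golden-fixtures
  path: my-app/spec/data  # placed at ./my-app/spec/data instead of ./golden-fixtures
```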

outputs: Optional. The artifacts produced by the task.

Each output configures a directory to make available to later steps in the build plan. The directory will be automatically created before the task runs, and the task should place any artifacts it wants to export in the directory.

Each output has the following attributes:

name: Required. The logical name of the output. The contents under path will be made available to the rest of the plan under this name.

path: Optional. The path to a directory where the output will be taken from. If not specified, the output's name is used.

Paths are relative to the working directory of the task. Absolute paths are not respected.

Note that this value must not overlap with any other inputs or outputs. Each output results in a new empty directory that your task should place artifacts in; if the path overlaps it'll clobber whatever files used to be there.

For example, the following task and script would be used to propagate a built binary to later steps:

platform: linux

image_resource: # ...

inputs:
- name: project-src

outputs:
- name: built-project

run:
  path: project-src/ci/build

...assuming project-src/ci/build looks something like:


#!/bin/bash

set -e -u -x

export GOPATH=$PWD/project-src

go build -o built-project/my-project

...this task could then be used in a build plan like so:

- get: project-src
- task: build-bin
  file: project-src/ci/build.yml
- put: project-bin
  params:
    file: built-project/my-project

caches: Optional. The cached directories shared between task runs.

On the task's first run, all cache directories will be empty. It is the responsibility of the task to populate these directories with any artifacts to be cached. On subsequent runs, the cached directories will contain those artifacts.

Caches are scoped to the worker the task is run on, so you will not get a cache hit when subsequent builds run on different workers. This also means that caching is not intended to share state between workers, and your task should be able to run whether or not the cache is warmed.

Caches are also scoped to a particular task name inside of a pipeline's job. As a consequence, if the job name, step name or cache path are changed, the cache will not be used. This also means that caches do not exist for one-off builds.

Each cache has the following attributes:

path: Required. The path to a directory to be cached.

Paths are relative to the working directory of the task. Absolute paths are not respected.

Note that this value must not overlap with any other caches in the same task. Each cache results in a new empty directory that your task can place artifacts in; if the path overlaps it'll clobber whatever files used to be there.

For example, the following task and script define a node project that takes advantage of task caches for its node modules:

platform: linux

image_resource: # ...

inputs:
- name: project-src

caches:
- path: project-src/.node_modules

run:
  path: project-src/ci/build

...assuming project-src/ci/build looks something like:


#!/bin/bash

set -e -u -x

cd project-src
npm install

# ...

...this task would cache the contents of project-src/.node_modules between runs of this task on the same worker.

run: Required. The command to execute in the container.

path: Required. The command to execute, relative to the task's working directory. For a script living in a resource's repo, you must specify the full path including the input name, e.g. my-resource/scripts/test.

Paths are relative to the working directory of the task. Absolute paths are not respected.

args: Optional. Arguments to pass to the command. Note that when executed with Fly, any arguments passed to Fly are appended to this array.

dir: Optional. A directory, relative to the initial working directory, to set as the working directory when running the script.

user: Optional. Explicitly set the user to run as. If not specified, this defaults to the user configured by the task's image. If not specified there, it's up to the Garden backend, and may be e.g. root on Linux.

Note that this is not provided as a script blob, but explicit path and args values; this allows fly to forward arguments to the script, and forces your config .yml to stay fairly small.

params: Optional. A key-value mapping of values that are exposed to the task via environment variables.

Use this to provide things like credentials, not to set up the task's Bash environment (values are passed verbatim and do not support interpolation).
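A minimal sketch of a params section (the variable names are hypothetical):

```yaml
params:
  # credentials and configuration, exposed to the task as environment variables
  DEPLOY_TARGET: staging
  ACCESS_KEY: ((access-key))
  # note: values are passed verbatim; shell syntax like $HOME is NOT expanded
```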

Anatomy of a running task

A task runs in a new container every time, using the image provided by image_resource as its base filesystem (i.e. /).

The command specified by run will be executed in a working directory containing each of the inputs. If any inputs are missing the task will not run (and the container will not even be created).

The working directory will also contain empty directories for each of the outputs. The task must place artifacts in the output directories for them to be exported. This meshes well with build tools with configurable destination paths.

If your build tools don't support output paths you'll have to copy bits around. If it's a git repo that you're modifying you can do a local git clone ./input ./output, which is much more efficient than cp, and then work out of ./output.
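For example, a task script that modifies a repo input and exports the result via an output might follow this pattern (the repo and output names are illustrative):

```yaml
run:
  path: sh
  args:
  - -ec
  - |
    # clone the input repo into the output directory; a local clone shares
    # git objects on disk, so this is much cheaper than cp -r
    git clone ./repo ./modified-repo
    cd modified-repo
    # ...make changes, then commit them so a later put step can push them
    git commit -am "automated change"
```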

Any params configured will be set in the environment for the task's command, along with any environment variables provided by the task's image (i.e. ENV rules from your Dockerfile).

The user the command runs as is determined by the image. If you're using the Docker Image resource, this will be the user set by a USER rule in your Dockerfile, or root if not specified.

Another relevant bit of configuration is privileged, which determines whether the user the task runs as will have full privileges (primarily when running as root). This is intentionally not configurable by the task itself, to prevent privilege escalation by way of pull requests to repositories containing task configs.
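Since privileged is set on the task step in the pipeline rather than in the task config itself, a job's plan would enable it like so (the step and file names here are hypothetical):

```yaml
plan:
- get: src
- task: build-image
  privileged: true
  file: src/ci/build-image.yml
```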

Putting all this together, the following task config:

platform: linux

image_resource:
  type: docker-image
  source:
    repository: golang
    tag: '1.6'

params:
  SOME_PARAM: some-default-value

inputs:
- name: some-input
- name: some-input-with-custom-path
  path: some/custom/path

outputs:
- name: some-output

run:
  path: sh
  args:
  - -exc
  - |
    whoami
    env
    go version
    find .
    touch some-output/my-built-artifact

...will produce the following output:

+ whoami
+ env
+ go version
go version go1.6 linux/amd64
+ find .
+ touch some-output/my-built-artifact

...and propagate my-built-artifact to any later tasks or puts that reference the some-output artifact, in the same way that this task had some-input as an input.

Running tasks with fly

Fly is a command-line tool that can be used to execute a task configuration against a Concourse deployment. This provides a fast feedback loop for iterating on the task configuration and your code.

For more information, see execute.


Together, jobs and resources form a pipeline.

Here's an example of a fairly standard unit → integration → deploy pipeline:

Above, the black boxes are resources, and the colored boxes are jobs, whose color indicates the status of their most recent build. It is also possible to group different sections of a pipeline together into logical collections.

A pipeline is configured with two sections: resources and jobs. For example, the configuration resulting in the above pipeline is as follows:

resources:
  - name: controller
    type: git
    source:
      branch: master

  - name: worker
    type: git
    source:
      branch: master

  - name: integration-suite
    type: git
    source:
      branch: master

  - name: release
    type: git
    source:
      branch: master

  - name: final-release
    type: s3
    source:
      bucket: concourse-releases
      regex: release-(.*).tgz

jobs:
  - name: controller-mysql
    plan:
      - get: controller
      - task: unit
        file: controller/ci/mysql.yml

  - name: controller-postgres
    plan:
      - get: controller
      - task: unit
        file: controller/ci/postgres.yml

  - name: worker
    plan:
      - get: worker
      - task: unit
        file: worker/task.yml

  - name: integration
    plan:
      - aggregate:
          - get: integration-suite
          - get: controller
            passed: [controller-mysql, controller-postgres]
          - get: worker
            passed: [worker]
      - task: integration
        file: integration-suite/task.yml

  - name: deploy
    serial: true
    plan:
      - aggregate:
          - get: release
          - get: controller
            passed: [integration]
          - get: worker
            passed: [integration]
      - task: deploy
        file: release/ci/deploy.yml
      - put: final-release
        params:
          from: deploy/release/build/*.tgz
To learn what the heck that means, read on.

resources: Objects flowing through the pipeline

Resources are the objects that are going to be used for jobs in the pipeline. They are listed under the resources key in the pipeline configuration.

The following example defines a resource representing Concourse's BOSH release repository:

- name: concourse
  type: git
  source:
    uri: https://github.com/concourse/concourse
    branch: master

Any time commits are pushed, the resource will detect them and save new versions of the resource. Any jobs immediately downstream of the resource will then be triggered with the latest version, potentially starting a chain of builds as it makes its way through the pipeline's jobs.

Resources can also be updated via a put step in the pipeline's jobs.

Each configured resource consists of the following attributes:

name: Required. The name of the resource. This should be short and simple. This name will be referenced by build plans of jobs in the pipeline.

type: Required. The type of the resource. Each worker advertises a mapping of resource-type -> container-image; type corresponds to the key in the map.

To see what resource types your deployment supports, check the status of your worker pool via the /api/v1/workers API endpoint.

source: Optional. The location of the resource. This varies by resource type, and is a black box to Concourse; it is blindly passed to the resource at runtime.

To use git as an example, the source may contain the repo URI, the branch of the repo to track, and a private key to use when pushing/pulling.

By convention, documentation for each resource type's configuration is in each implementation's README.

You can find the source for the resource types provided with Concourse at the Concourse GitHub organization.

check_every: Optional. Default 1m. The interval on which to check for new versions of the resource. Acceptable interval options are defined by the time.ParseDuration function.
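For example, a resource that rarely changes could be checked less aggressively than the 1m default (the resource name here is hypothetical, and its source is elided):

```yaml
- name: slow-moving-repo
  type: git
  check_every: 30m
  source: # ...
```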

tags: Optional. Default []. A list of tags to determine which workers the checks will be performed on. You'll want to specify this if the source is internal to a worker's network, for example. See also tags on a step.

webhook_token: Optional. If specified, web hooks can be sent to trigger an immediate check of the resource, specifying this value as a primitive form of authentication via query params.

After configuring this value, you would then configure your hook sender with the following painfully long path appended to your external URL:

/api/v1/teams/TEAM_NAME/pipelines/PIPELINE_NAME/resources/RESOURCE_NAME/check/webhook?webhook_token=WEBHOOK_TOKEN
Note that the request payload sent to this API endpoint is entirely ignored. You should configure the resource as if you're not using web hooks, as the resource config is still the "source of truth."
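A resource configured for web hooks might look like the following sketch (the resource name and token value are illustrative, and the source is elided); with hooks in place, polling can be dialed way down:

```yaml
- name: my-repo
  type: git
  webhook_token: some-secret-token  # senders must pass this as a query param
  check_every: 24h                  # rely on hooks; poll only as a fallback
  source: # ...
```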

resource_types: Additional resource types used by your pipeline

Each resource in a pipeline has a type. The resource's type determines what versions are detected, the bits that are fetched when used for a get step, and the side effect that occurs when used for a put step.

Out of the box, Concourse comes with a few resource types to cover common CI use cases like dealing with Git repositories and S3 buckets.

Beyond these core types, each pipeline can configure its own custom types by specifying resource_types at the top level. Each custom resource type is itself defined as a resource that provides the container image for the custom resource type (see Implementing a Resource). You will almost always be using the docker-image resource type when doing this.

The following example extends a Concourse pipeline to support use of the pull-request resource type and then uses it within the pipeline:

resource_types:
- name: pull-request
  type: docker-image
  source:
    repository: jtarchie/pr

resources:
- name: atomy-pr
  type: pull-request
  source:
    repo: vito/atomy
    access_token: ((access-token))

jobs:
- name: atomy-pr-unit
  plan:
  - get: atomy-pr
  - put: atomy-pr
    params:
      path: atomy-pr
      status: pending
  - task: unit
    file: atomy-pr/ci/unit.yml
    on_success:
      put: atomy-pr
      params:
        path: atomy-pr
        status: success
    on_failure:
      put: atomy-pr
      params:
        path: atomy-pr
        status: failure

Custom resource types can override the core resource types, and can be defined in terms of each other. Also, a custom resource type can use the core type that it's overriding. This is useful if you want to e.g. provide your own custom docker-image resource, by overriding the core one (and using it one last time for the override itself), and then using it for all other custom resource types.

Similar to resources, each configured resource type consists of the following attributes:

name: Required. The name of the new resource type. This should be short and simple. This name will be referenced by resources defined within the same pipeline, and image_resources used by tasks running in the pipeline.

type: Required. The type of the resource used to provide the resource type's container image. Yes, this is a bit meta. Usually this will be docker-image, as the resource type must result in a container image, though there may be other image formats (possibly themselves defined as custom resource types!).

source: Optional. The location of the resource type's resource. This varies by resource type, and is a black box to Concourse; it is blindly passed to the resource at runtime.

To use docker-image as an example, the source would contain something like repository: username/reponame. See the Docker Image resource (or whatever resource type your resource type uses) for more information.

privileged: Optional. Default false. If set to true, the resource's containers will be run with full capabilities, as determined by the Garden backend the task runs on. For Linux-based backends it typically determines whether or not the container will run in a separate user namespace, and whether the root user is "actual" root (if set to true) or a user namespaced root (if set to false, the default).

This is a gaping security hole; only configure it if the resource type needs it (which should be called out in its documentation). This is not up to the resource type to decide dynamically, so as to prevent privilege escalation via third-party resource type exploits.

tags: Optional. Default []. A list of tags to determine which workers the checks will be performed on. You'll want to specify this if the source is internal to a worker's network, for example. See also tags on a step.

jobs: Plans to execute against resources

Jobs determine the actions of your pipeline, how resources progress through it, and how everything is visualized. They are listed under the jobs key in the pipeline configuration.

The following example defines a simple unit-level job that will trigger whenever new code arrives at the concourse resource:

- name: atc-unit
  - get: concourse
  - task: unit
    file: concourse/ci/atc.yml

Each configured job consists of the following attributes:

name: Required. The name of the job. This should be short; it will show up in URLs.

serial: Optional. Default false. If set to true, builds will queue up and execute one-by-one, rather than executing in parallel.
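For instance, a deployment job that must never run two builds at once could be serialized like so (the resource and job names here are hypothetical):

jobs:
- name: deploy
  serial: true
  plan:
  - get: release
    trigger: true
  - put: staging-env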

build_logs_to_retain: Optional. If configured, only the last specified number of builds will have their build logs persisted. This is useful if you have a job that runs periodically and whose older logs aren't worth keeping around.


jobs:
- name: smoke-tests
  build_logs_to_retain: 100
  plan:
  - get: 10m
  - task: smoke-tests
    # ...

serial_groups: Optional. Default []. When set to an array of arbitrary tag-like strings, builds of this job and of other jobs referencing the same tags will be serialized.

This can be used to ensure that certain jobs do not run at the same time, like so:

- name: job-a
  serial_groups: [some-tag]
- name: job-b
  serial_groups: [some-tag, some-other-tag]
- name: job-c
  serial_groups: [some-other-tag]

In this example, job-a and job-c can run concurrently, but neither job can run builds at the same time as job-b.

The builds are executed in their order of creation, across all jobs with common tags.

max_in_flight: Optional. If set, specifies a maximum number of builds to run at a time. If serial or serial_groups are set, they take precedence and force this value to be 1.
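For example, a job that tolerates some concurrency but must be throttled might cap itself at three builds (the names below are illustrative):

jobs:
- name: browser-tests
  max_in_flight: 3
  plan:
  - get: app
    trigger: true
  - task: run-suite
    file: app/ci/suite.yml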

public: Optional. Default false. If set to true, the build log of this job will be viewable by unauthenticated users. Unauthenticated users will always be able to see the inputs, outputs, and build status history of a job. This is useful if you would like to expose your pipeline publicly without showing sensitive information in the build log.

disable_manual_trigger: Optional. Default false. If set to true, manual triggering of the job (via the web UI or trigger-job) will be disabled.

interruptible: Optional. Default false. Normally, when a worker is shutting down it will wait for builds with containers running on that worker to finish before exiting. If this value is set to true, the worker will not wait on the builds of this job. You may want this if e.g. you have a self-deploying Concourse or long-running-but-low-importance jobs.

on_success: Optional. Step to execute when the job succeeds. Equivalent to the on_success step attribute.

on_failure: Optional. Step to execute when the job fails. Equivalent to the on_failure step attribute.

ensure: Optional. Step to execute regardless of whether the job succeeds, fails, or errors. Equivalent to the ensure step attribute.
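Putting the job-level hooks together, a job might notify on failure and always clean up, whatever the outcome. This is a sketch; the notify resource and cleanup task are assumptions:

jobs:
- name: tests
  plan:
  - get: repo
    trigger: true
  - task: unit
    file: repo/ci/unit.yml
  on_failure:
    put: notify
    params: {text: "unit tests failed"}
  ensure:
    task: cleanup
    file: repo/ci/cleanup.yml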

plan: Required. The Build Plan to execute.

groups: Splitting up your pipeline into sections

A pipeline may optionally contain a section called groups. As more resources and jobs are added to a pipeline it can become difficult to navigate. Pipeline groups allow you to group jobs together under a header and have them show on different tabs in the user interface. Groups have no functional effect on your pipeline.

A simple grouping for the pipeline above may look like:

groups:
- name: tests
  jobs:
  - controller-mysql
  - controller-postgres
  - worker
  - integration
- name: deploy
  jobs:
  - deploy

This would display two tabs at the top of the home page: "tests" and "deploy". Note that once you have added groups to your pipeline, every job must be in a group; jobs that are not in a group will not be visible.

For a real world example of how groups can be used to simplify navigation and provide logical grouping, see the groups used at the top of the page in the Concourse pipeline.

Each configured group consists of the following attributes:

name: Required. The name of the group. This should be short and simple as it will be used as the tab name for navigation.

jobs: Optional. A list of jobs that should appear in this group. A job may appear in multiple groups. Neighbours of jobs in the current group will also appear on the same page to give context for where the group sits in the pipeline.

resources: Optional. A list of resources that should appear in this group. Resources that are inputs or outputs of jobs in the group are automatically added; they do not have to be explicitly listed here.

Build Plans

Each Job has a single build plan. When a build of a job is created, the plan determines what happens.

A build plan is a sequence of steps to execute. These steps may fetch or update Resources, or execute Tasks.

A new build of the job is scheduled whenever get steps with trigger: true have new versions available.

To visualize the job in the pipeline, resources that appear as get steps are drawn as inputs, and resources that appear in put steps appear as outputs.

A simple unit test job may look something like:

name: banana-unit
plan:
- get: banana
  trigger: true
- task: unit
  file: banana/task.yml

This job says: get the banana resource, and run a task step called unit, using the configuration from the task.yml file fetched from the banana step.

When new versions of banana are detected, a new build of banana-unit will be scheduled, because we've set trigger: true.

Jobs can depend on resources that are produced by or pass through upstream jobs, by configuring passed: [job-a, job-b] on the get step.

Putting these pieces together, if we were to propagate banana from the above example into an integration suite with another apple component (pretending we also defined its apple-unit job), the configuration for the integration job may look something like:

name: fruit-basket-integration
plan:
- aggregate:
  - get: banana
    trigger: true
    passed: [banana-unit]
  - get: apple
    trigger: true
    passed: [apple-unit]
  - get: integration-suite
    trigger: true
- task: integration
  file: integration-suite/task.yml

Note the use of the aggregate step to collect multiple inputs at once.

With this example we've configured a tiny pipeline that will automatically run unit tests for two components, and continuously run integration tests against whichever versions pass both unit tests.

This can be further chained into later "stages" of your pipeline; for example, you may want to continuously deliver an artifact built from whichever components pass fruit-basket-integration.

To push artifacts, you would use a put step that targets the destination resource. For example:

name: deliver-food
plan:
- aggregate:
  - get: banana
    trigger: true
    passed: [fruit-basket-integration]
  - get: apple
    trigger: true
    passed: [fruit-basket-integration]
  - get: baggy
    trigger: true
- task: shrink-wrap
  file: baggy/shrink-wrap.yml
- put: bagged-food
  params:
    bag: shrink-wrap/bagged.tgz

This presumes that there's a bagged-food resource defined, which understands that the bag parameter points to a file to ship up to the resource's location.

Note that both banana and apple list the same job as an upstream dependency. This guarantees that deliver-food will only trigger when a version of both of these dependencies pass through the same build of the integration job (and transitively, their individual unit jobs). This prevents bad apples or bruised bananas from being delivered. (I'm sorry.)

For a reference on each type of step, read on.

get: fetch a resource

Fetches a resource, making it available to subsequent steps via the given name.

For example, the following plan fetches a version number via the semver resource, bumps it to the next release candidate, and puts it back.

- get: version
  params:
    bump: minor
    rc: true
- put: version
  params:
    version: version/number

get: Required. The name of the resource once it is fetched. This name satisfies logical inputs to a Task, and may be referenced within the plan itself (e.g. in the file attribute of a task step).

This is also the name of the resource to fetch, if resource is not set.

resource: Optional. Defaults to the value of get. The resource to fetch, as configured in resources.

Use this attribute to rename a resource from the overall pipeline context into the job-specific context.
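For example, a resource registered in the pipeline under a longer name could be fetched under a shorter job-local name (the names here are illustrative):

resources:
- name: concourse-git
  type: git
  source: {uri: https://github.com/concourse/concourse}

jobs:
- name: unit
  plan:
  - get: concourse           # name used by later steps in this plan
    resource: concourse-git  # actual resource in the pipeline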

version: Optional. Defaults to latest. The version of the resource to fetch.

If set to latest, scheduling will just find the latest available version of a resource and use it, allowing versions to be skipped. This is usually what you want, e.g. if someone pushes 100 git commits.

If set to every, builds will walk through all available versions of the resource. Note that if passed is also configured, it will only step through the versions satisfying the constraints.

If set to a specific version (e.g. {ref: abcdef123}), only that version will be used. Note that the version must be available and detected by the resource, otherwise the input will never be satisfied. You may want to use check-resource to force detection of resource versions, if you need to use an older one that was never detected (as all newly configured resources start from the latest version).
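To illustrate the three forms side by side (resource names are hypothetical):

- get: commits
  version: every             # walk through every available version

- get: release
  version: latest            # the default: skip ahead to the newest

- get: hotfix-commit
  version: {ref: abcdef123}  # pin to one specific detected version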

passed: Optional. When specified, only the versions of the resource that made it through the given list of jobs will be considered when triggering and fetching.

Note that if multiple gets are configured with passed constraints, all of the mentioned jobs are correlated. That is, with the following set of inputs:

- get: a
  passed: [a-unit, integration]
- get: b
  passed: [b-unit, integration]
- get: x
  passed: [integration]

This means "give me the versions of a, b, and x that have passed the same build of integration, with the same version of a passing a-unit and the same version of b passing b-unit."

This is crucial to being able to implement safe "fan-in" semantics as things progress through a pipeline.

params: Optional. A map of arbitrary configuration to forward to the resource. Refer to the resource type's documentation to see what it supports.

trigger: Optional. Default false. Set to true to auto-trigger new builds of the plan's job whenever this step has new versions available, as specified by the resource and any passed constraints.

Otherwise, if no get steps set this to true, the job can only be manually triggered.

put: update a resource

Pushes to the given Resource. All artifacts collected during the plan's execution will be available in the working directory.

For example, the following plan fetches a repo using get and pushes it to another repo (assuming repo-develop and repo-master are defined as git resources):

- get: repo-develop
- put: repo-master
  params:
    repository: repo-develop

When the put succeeds, the produced version of the resource will be immediately fetched via an implicit get step. This is so that later steps in your plan can use the artifact that was produced. The source will be available under whatever name the put specifies, just as with get.

So, if the logical name (whatever put specifies) differs from the concrete resource, you would specify resource as well, like so:

- put: resource-image
  resource: docker-image-resource

Additionally, you can control the settings of the implicit get step by setting get_params. For example, if you did not want a put step utilizing the docker-image resource type to download the image, you would implement your put step as such:

- put: docker-build
  params: {build: git-resource}
  get_params: {skip_download: true}

put: Required. The logical name of the resource being pushed. The pushed resource will be available under this name after the push succeeds.

resource: Optional. Defaults to the value of put. The resource to update, as configured in resources.

params: Optional. A map of arbitrary configuration to forward to the resource. Refer to the resource type's documentation to see what it supports.

get_params: Optional. A map of arbitrary configuration to forward to the resource during the implicit get step. Refer to the resource type's documentation to see what it supports.

task: execute a task

Executes a Task, either from a file fetched via the preceding steps, or with inlined configuration.

If any task in the build plan fails, the build will complete with failure. By default, any subsequent steps will not be performed. You can perform additional steps after failure by adding an on_failure or ensure hook.

For example, the following plan fetches a single repository and executes multiple tasks, using the aggregate step, in a build matrix style configuration:

- get: my-repo
- aggregate:
  - task: go-1.3
    file: my-repo/go-1.3.yml
  - task: go-1.4
    file: my-repo/ci/go-1.4.yml

Only if both tasks succeed will the build go green.

When a task completes, the files in its declared outputs will be made available to subsequent steps. This allows those subsequent steps to process the result of a task. For example, the following plan pulls down a repo, makes a commit to it, and pushes the commit to another repo (the task must have an output called repo-with-commit):

- get: my-repo
- task: commit
  file: my-repo/commit.yml
- put: other-repo
  params:
    repository: repo-with-commit

task: Required. A freeform name for the task that's being executed. Common examples would be unit or integration.

config or file: One required. The configuration for the task's running environment.

file points at a .yml file containing the task config, which allows this to be tracked with your resources.

The first segment in the path should refer to another source from the plan, and the rest of the path is relative to that source.

For example, if in your plan you have the following get step:

- get: something

And the something resource provided a unit.yml file, you would set file: something/unit.yml.

config can be defined to inline the task config statically.

privileged: Optional. Default false. If set to true, the task will run with full capabilities, as determined by the Garden backend the task runs on. For Linux-based backends it typically determines whether or not the container will run in a separate user namespace, and whether the root user is "actual" root (if set to true) or a user namespaced root (if set to false, the default).

This is a gaping security hole; use wisely and only if necessary. This is not part of the task configuration to prevent privilege escalation via pull requests.

params: Optional. A map of task parameters to set, overriding those configured in config or file. This is useful for passing in credentials or other configuration to the task from the pipeline.

For example:

- get: my-repo
- task: integration
  file: my-repo/ci/integration.yml
    USERNAME: my-user
    PASSWORD: my-pass

This is often used in combination with ((parameters)) in the pipeline.

image: Optional. Names an artifact source within the plan containing an image to use for the task. This overrides any image_resource configuration present in the task configuration.

This is very useful when part of your pipeline involves building an image, possibly with dependencies pre-baked. You can then propagate that image through the rest of your pipeline, guaranteeing that the correct version (and thus a consistent set of dependencies) is used throughout your pipeline.

For example, here's a pipeline building an image in one job and propagating it to the next:

resources:
- name: my-project
  type: git
  source: {uri:}

- name: my-task-image
  type: docker-image
  source: {repository: my-user/my-repo}

jobs:
- name: build-task-image
  plan:
  - get: my-project
  - put: my-task-image
    params: {build: my-project/ci/images/my-task}

- name: use-task-image
  plan:
  - get: my-task-image
    passed: [build-task-image]
  - get: my-project
    passed: [build-task-image]
  - task: use-task-image
    image: my-task-image
    file: my-project/ci/tasks/my-task.yml

This can also be used in the simpler case of explicitly keeping track of dependent images, in which case you just wouldn't have a job building it (build-task-image in the above example).
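In that simpler case, the image is just another resource that the task consumes directly. A sketch with hypothetical names:

resources:
- name: my-task-image
  type: docker-image
  source: {repository: my-user/my-task-image}

jobs:
- name: unit
  plan:
  - get: my-task-image
  - get: my-project
  - task: unit
    image: my-task-image
    file: my-project/ci/unit.yml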

input_mapping: Optional. A map from task input names to concrete names in the build plan. This allows a task with generic input names to be used multiple times in the same plan, mapping its inputs to specific resources within the plan.

For example:

- get: diego-release
- get: cf-release
- get: ci-scripts
- task: audit-diego-release
  file: ci-scripts/audit-release.yml
  input_mapping: {release-repo: diego-release}
- task: audit-cf-release
  file: ci-scripts/audit-release.yml
  input_mapping: {release-repo: cf-release}

output_mapping: Optional. A map from task output names to concrete names to register in the build plan. This allows a task with generic output names to be used multiple times in the same plan.

This is often used together with input_mapping. For example:

- get: diego-release
- get: cf-release
- get: ci-scripts
- task: create-diego-release
  file: ci-scripts/create-release.yml
  input_mapping: {release-repo: diego-release}
  output_mapping: {release-tarball: diego-release-tarball}
- task: create-cf-release
  file: ci-scripts/create-release.yml
  input_mapping: {release-repo: cf-release}
  output_mapping: {release-tarball: cf-release-tarball}

aggregate: run steps in parallel

Performs the given steps in parallel.

If any sub-steps in an aggregate result in an error, the aggregate step as a whole is considered to have errored.

Similarly, when aggregating task steps, if any fail, the aggregate step will fail. This is useful for build matrixes:

- get: some-repo
- aggregate:
  - task: unit-windows
    file: some-repo/ci/windows.yml
  - task: unit-linux
    file: some-repo/ci/linux.yml
  - task: unit-darwin
    file: some-repo/ci/darwin.yml

The aggregate step is also useful for performing arbitrary steps in parallel, for the sake of speeding up the build. It is often used to fetch all dependent resources together:

- aggregate:
  - get: component-a
  - get: component-b
  - get: integration-suite
- task: integration
  file: integration-suite/task.yml

do: run steps in series

Simply performs the given steps serially, with the same semantics as if they were at the top level step listing.

This can be used to perform multiple steps serially in the branch of an aggregate step:

- aggregate:
  - task: unit
  - do:
    - get: something-else
    - task: something-else-unit

on_success: perform a step when another succeeds

Any step can have on_success tacked onto it, whose value is a second step to execute only if the parent step succeeds.

The step to execute when the parent step succeeds. If the attached step fails, the outer step is considered to have failed.

The following will perform the second task only if the first one succeeds:

- get: foo
- task: unit
  file: foo/unit.yml
  on_success:
    task: alert
    file: foo/alert.yml

Note that this is semantically equivalent to the following:

- get: foo
- task: unit
  file: foo/unit.yml
- task: alert
  file: foo/alert.yml

...however it is provided mainly for cases where there is an equivalent on_failure, and having them next to each other is more clear.

on_failure: perform a step when another fails

Any step can have on_failure tacked onto it, whose value is a second step to execute only if the parent step fails.

The step to execute when the parent step fails. If the attached step succeeds, the overall step is still considered to have failed.

The following will perform the attached task only if the first one fails:

- get: foo
- task: unit
  file: foo/unit.yml
  on_failure:
    task: alert
    file: foo/alert.yml

ensure: guarantee execution of a step

Any step can have ensure tacked onto it, whose value is a second step to execute regardless of the result of the parent step.

The step to execute. Regardless of whether the parent step succeeds, fails, or errors, this step will be executed. The step will also be executed if the build was aborted, and its parent step was interrupted.

If the parent step succeeds and the ensured step fails, the parent step is considered to have failed.

The ensured step executes after any on_success or on_failure hooks.

For example, the following build plan acquires a lock, and then ensures that the lock is released.

- put: some-lock
  params: {acquire: true}
- task: integration
  file: foo/integration.yml
  ensure:
    put: some-lock
    params: {release: some-lock}

tags: perform a step on certain workers

Any step can be directed at a pool of workers for a given set of tags, by adding the tags attribute to it.

tags: Optional. Default []. The tags by which to match workers.

For example, if [a, b] is specified, only workers advertising both the a and b tags (possibly among others) will be used for running the step.
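For instance, steps that must run on workers inside a private network could be tagged accordingly; the tag name and file paths below are assumptions:

- get: internal-repo
  tags: [internal]
- task: build
  tags: [internal]
  file: internal-repo/ci/build.yml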

try: permit failure of a step

Performs the given step, swallowing any failure.

This can be used when you want to perform some side-effect, but you don't really want the whole build to fail if it doesn't work. For example, when emitting logs to S3 for analyzing later, if S3 flakes out it's not too critical.

- task: run-tests
  config: # ...
  on_failure:
    try:
      put: test-logs
      params:
        from: run-tests/*.log
- task: do-something-else
  config: # ...

timeout: enforce a time limit on a step

Any step can have a hard time limit enforced by attaching timeout and the number of seconds to limit it to.

timeout: The amount of time to limit the step's execution to, e.g. 30m for 30 minutes.

When exceeded, the step will be interrupted, with the same semantics as aborting the build (except the build will be failed, not aborted, to distinguish between human intervention and timeouts being enforced).

The following will run the task, and cancel it if it takes longer than 1 hour and 30 minutes:

- get: foo
- task: unit
  file: foo/unit.yml
  timeout: 1h30m

attempts: retry a step

Any step can set the number of times it should be attempted by attaching an attempts parameter with the number of times it should be tried.

attempts: The total number of times a step will be tried should it fail, e.g. 5 will try the step up to 5 times before giving up.

When the number of attempts is reached and the step has still not succeeded then the step will fail.

Attempts will retry on a Concourse error as well as build failure.

The following will run the task, and retry it 9 times (for a total of 10 attempts) if it fails:

- get: foo
- task: unit
  file: foo/unit.yml
  attempts: 10

Resource Types

The following is a list of both officially supported resource types that Concourse includes out of the box, and third-party resource types contributed by the community. At some point we'd like to put together some sort of registry. We're not there yet though, so a list will have to do.

Be sure to read the README file for a resource to know what it does before using it!

Resources Provided With Concourse

The following resource types come with Concourse out of the box, and are officially supported by the Concourse team:

  • The git resource can pull and push to git repositories.

  • The hg resource can pull and push to Mercurial repositories.

  • The time resource can start jobs on a schedule or timestamp outputs.

  • The s3 resource can fetch from and upload to S3 buckets.

  • The archive resource can fetch and extract .tar.gz archives.

  • The semver resource can set or bump version numbers.

  • The github-release resource can fetch and publish versioned GitHub releases.

  • The docker-image resource can fetch, build, and push Docker images.

  • The tracker resource can deliver stories and bugs on Pivotal Tracker.

  • The pool resource allows you to configure how to serialize use of an external system. This lets you prevent test interference or overwork on shared systems.

  • The cf resource can deploy an application to Cloud Foundry.

  • The bosh-io-release resource can track and fetch new BOSH releases from bosh.io.

  • The bosh-io-stemcell resource can track and fetch new BOSH stemcells from bosh.io.

  • The vagrant-cloud resource can fetch and publish Vagrant boxes to Atlas.

Deprecated Resources Provided With Concourse

The following resource types come with Concourse out of the box, but are deprecated:

Multiple Teams

Every pipeline and one-off build belongs to a team.

A team is simply a conceptual owner and a separate namespace for data in Concourse (pipelines, builds, etc.) with its own auth configuration. A team may map to GitHub oAuth, basic auth, UAA auth, or whatever other auth methods we add in the future.

Importantly, a team does not have "users" added to and removed from it; it's only a configuration of auth methods, which themselves may provide user info that we add semantics to in the future (i.e. which GitHub user you authorized as).

main: the magical mystery team

Out of the box, Concourse comes with a single team called main.

The main team is an admin team, meaning it can create and update other teams. Currently there is no way to promote a team to become an admin team, so main is a special case.

The main team is configured as part of the deployment itself as flags passed to the ATC. Changes to these flags take effect whenever the ATC starts up. This is so that you can't get locked out, and so that you can have Concourse sanely configured as soon as it comes up.

Consult web --help for a concourse binary deployment, or the corresponding deployment manifest properties for a BOSH deployment, to configure the main team.

Creating Teams

Once you've logged in as the main team with fly, you can run set-team to create or update other teams:

$ fly set-team -n my-team \
    --basic-auth-username ci \
    --basic-auth-password changeme

Once the team has been created, you can use login to log in:

$ fly login -n my-team

Any newly configured pipelines (via set-pipeline) and one-off builds (via execute) will be owned by the authorized team. Commands that list content will be scoped to the current team by default, such as pipelines and builds. The web UI should reflect the same state.

Newly configured pipelines are hidden by default, meaning other teams and unauthorized visitors cannot view them. To make them publicly viewable, see Pipeline and build visibility.

Configuring Auth

Continuous Integration servers often contain a considerable number of secrets to let them access source code and deploy applications. It is important that those secrets remain well guarded. Concourse provides options for both authentication and authorization to give you control over who can access your server and how much they can see.

If you access your Concourse server over the public internet then all of the options below are useless without using TLS to secure your connection.

The authentication methods for teams are determined by flags passed to set-team, except for the main team, which is configured as part of the deployment.

Any number of the following providers can be enabled at any one time. Users will be given a choice when logging in as to which one they would like to use.

Basic Authentication

HTTP Basic authentication is the simplest of the authentication mechanisms. It has good support in both browsers and command line tools. It provides a single set of credentials for all users of the system.

You can configure basic auth for a team via the --basic-auth-username and --basic-auth-password flags:

$ fly set-team -n my-team \
    --basic-auth-username foo \
    --basic-auth-password bar

GitHub oAuth

A Concourse server can authenticate against GitHub to take advantage of their permission model and other security improvements in their infrastructure. There are a number of steps you need to take to enable this.

Configuring the GitHub oAuth for our own team of engineers at Pivotal may look something like:

$ fly set-team -n concourse \
    --github-auth-client-id $CLIENT_ID \
    --github-auth-client-secret $CLIENT_SECRET \
    --github-auth-team concourse/Pivotal

Configuring the callback URL

First you need to create an OAuth application on GitHub.

The name, home page, and description don't matter but should be set to something that lets users trust their origin. The Authorization callback URL should be the external URL of your Concourse server with /auth/github/callback appended. For example, Concourse's own CI server's callback URL is

Configuring the Client

You will be given a Client ID and a Client Secret for your new application. These will then be passed to the following flags:

  • --github-auth-client-id=CLIENT_ID

  • --github-auth-client-secret=CLIENT_SECRET

If you're configuring GitHub Enterprise, you'll also need to set the following flags:

  • --github-auth-auth-url=

  • --github-auth-token-url=

  • --github-auth-api-url=

Make sure the API URL ends with a slash!

Authorizing GitHub users, teams, and organizations

You're now able to set the organizations, teams, and individual users who should have access to your server.

This is done by passing the following flags:


--github-auth-user=LOGIN

Authorize an individual user.

--github-auth-team=ORG/TEAM NAME

Authorize a team's members within an organization.

--github-auth-organization=ORG

Authorize an entire organization's members.

These flags can be specified multiple times to permit multiple levels of access.

UAA/CF oAuth

The --uaa-auth-* flags allow you to authorize members of a particular space in a CF deployment. In the future we'll make this generic to UAA and request scopes.

Configuring the UAA client

You'll first need to configure a client for Concourse with your UAA. The callback URL will be the external URL of your Concourse server with /auth/oauth/callback appended. For example, Concourse's own CI server's callback URL would be

The client should look something like this, under uaa.clients:

  id: my-client-id
  secret: my-client-secret
  authorities: cloud_controller.admin
  authorized-grant-types: "authorization_code,client_credentials,refresh_token"
  access-token-validity: 3600
  refresh-token-validity: 3600
  autoapprove: true
  override: true

Configuring the team

Configuring auth for my team's space in a CF installation may look something like:

$ fly set-team -n my-cf-team \
    --uaa-auth-client-id $CLIENT_ID \
    --uaa-auth-client-secret $CLIENT_SECRET \
    --uaa-auth-auth-url \
    --uaa-auth-token-url \
    --uaa-auth-cf-url \
    --uaa-auth-cf-ca-cert ./path/to/cf-ca-cert.crt \
    --uaa-auth-cf-space 31d89184-35b8-4660-a8ab-a7f72cc24435

The 31d89184-35b8-4660-a8ab-a7f72cc24435 value above is the GUID of the space whose members will be authorized.

Generic oAuth

The --generic-oauth-* flags configure a generic oAuth provider which performs no additional verification about the individual user signing in by default. It should only be used with internal auth systems in this way. If it were used to configure Google or Twitter oAuth, for example, it would permit just about every person on the internet to create pipelines. It'd be mighty generous. If you need verification, make sure you are using the --generic-oauth-scope flag.

Configuring the oAuth client

You'll first need to configure the client with your oAuth provider. The callback URL will be the external URL of your Concourse server with /auth/oauth/callback appended. For example, Concourse's own CI server's callback URL would be

Configuring the team

Configuring generic oAuth for X company's internal oAuth service may look something like:

$ fly set-team -n my-team-in-x \
    --generic-oauth-display-name 'X' \
    --generic-oauth-client-id $CLIENT_ID \
    --generic-oauth-client-secret $CLIENT_SECRET \
    --generic-oauth-auth-url https://oauth.example.com/authorize \
    --generic-oauth-token-url https://oauth.example.com/token

You can also pass arbitrary params, for example a scope to request, like so:

$ fly set-team -n my-team-in-x \
    --generic-oauth-display-name 'X' \
    --generic-oauth-client-id $CLIENT_ID \
    --generic-oauth-client-secret $CLIENT_SECRET \
    --generic-oauth-auth-url https://oauth.example.com/authorize \
    --generic-oauth-token-url https://oauth.example.com/token \
    --generic-oauth-auth-url-param scope:read

The --generic-oauth-auth-url-param flag can be specified multiple times to configure multiple params, in the form of KEY:VALUE.

If you need verification of users, you can choose to require users have a particular scope with --generic-oauth-scope:

$ fly set-team -n my-team-in-x \
    --generic-oauth-display-name 'X' \
    --generic-oauth-client-id $CLIENT_ID \
    --generic-oauth-client-secret $CLIENT_SECRET \
    --generic-oauth-auth-url https://oauth.example.com/authorize \
    --generic-oauth-token-url https://oauth.example.com/token \
    --generic-oauth-scope concourse.main


Given that this is a catch-all provider, there may be things that go wrong that we can't document in great detail. If things aren't working, you may need to experiment with the configuration until it does.

For example, some providers seem to expect values like client_id and client_secret to be specified as query params on the token URL. Our oAuth2 library sends these values as the request payload POSTed to the endpoint instead. In this case you could try tacking the query params on to the end of the token URL like so:

$ fly set-team -n my-team-in-x \
    --generic-oauth-display-name 'X' \
    --generic-oauth-client-id $CLIENT_ID \
    --generic-oauth-client-secret $CLIENT_SECRET \
    --generic-oauth-auth-url https://oauth.example.com/authorize \
    --generic-oauth-token-url \
      "https://oauth.example.com/token?client_id=$CLIENT_ID&client_secret=$CLIENT_SECRET" \
    --generic-oauth-auth-url-param scope:some-scope

Pretty janky but it works.

Future providers

We've written the authentication layer of Concourse to be easily extendable. Adding a new oAuth provider is as simple as adding a few interface implementations and feeding them any information they need from the command line.

We'd be interested in hearing about any providers with which you'd like to integrate.

Pipeline and build visibility

Every newly configured pipeline is hidden from everyone but the pipeline's team. To make a pipeline publicly viewable, both by other teams and unauthenticated users, see expose-pipeline.

Even with a pipeline exposed, all build logs are hidden by default. This is because CI jobs are prone to leaking credentials and other...unsavory information. After you've determined that a job's builds should be safe for public consumption, you can set public: true on the job in your pipeline.
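For example, a pipeline marking a job's builds as safe for public consumption might be configured like this (the job, resource, and task file names here are hypothetical):

```yaml
jobs:
- name: unit-tests
  public: true   # build logs for this job are viewable by anyone who can see the pipeline
  plan:
  - get: my-repo
    trigger: true
  - task: unit
    file: my-repo/ci/unit.yml
```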


At present, teams only provide trusted multi-tenancy. This means it should be used for cases where you know and trust who you're allowing access into your Concourse cluster.

There are a few reasons it'd be a bad idea to do otherwise:

  • Any team can run privileged builds. A bad actor in the mix could easily use this to harm your workers and your cluster.

    In the future, we'll probably have this as a flag on a team, indicating whether they're permitted to run privileged builds.

  • There are no networking restrictions in place, and traffic to and from the workers is currently unencrypted and unauthenticated. Anyone could run a task that does horrible things to your worker's containers, possibly stealing sensitive information.

    This can be remedied with configuration specified on Garden to restrict access to the internal network, but this is not detailed in our docs, and we'll probably want to find a better answer than configuration in the future.

Implementing a Resource

A resource type is implemented by a container image with three scripts:

  • /opt/resource/check for checking for new versions of the resource

  • /opt/resource/in for pulling a version of the resource down

  • /opt/resource/out for idempotently pushing a version up

Distributing resource types as containers allows them to package their own dependencies. For example, the Git resource comes with git installed.

All resources must implement all three actions, though the actions can just be no-ops (which still must be correctly implemented as detailed below).

Resources can emit logs to the user by writing to stderr. ANSI escape codes (coloring, cursor movement, etc.) will be interpreted properly by the web UI, so you should make your output pretty.
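A common idiom (a sketch; the helper name is our own) is to route all user-facing output to stderr so that stdout stays reserved for the script's JSON response:

```shell
# Sketch: user-facing log output goes to stderr, where the web UI shows it;
# stdout is reserved for the script's JSON response.
log() {
  echo "$@" >&2
}

log "fetching version..."            # goes to stderr, shown in the build log
echo '{"version":{"ref":"61cebf"}}'  # the actual response, on stdout
```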

check: Check for new versions.

A resource type's check script is invoked to detect new versions of the resource. It is given the configured source and current version on stdin, and must print the array of new versions, in chronological order, to stdout, including the requested version if it's still valid.

The request body will have the following fields:

  • source is an arbitrary JSON object which specifies the location of the resource, including any credentials. This is passed verbatim from the pipeline configuration.

    For git this would be the repo URI, which branch, and a private key if necessary.

  • version is a JSON object with string fields, used to uniquely identify an instance of the resource. For git this would be a commit SHA.

    This will be omitted from the first request, in which case the resource should return the current version (not every version since the resource's inception).

For example, here's what the input for a git resource may look like:

  "source": {
    "uri": "git://some-uri",
    "branch": "develop",
    "private_key": "..."
  "version": { "ref": "61cebf" }

Upon receiving this payload the git resource would probably do something like:

[ -d /tmp/repo ] || git clone git://some-uri /tmp/repo
cd /tmp/repo
git pull && git log 61cebf..HEAD

Note that it conditionally clones; the container for checking versions is reused between checks, so that it can efficiently pull rather than cloning every time.

And the output, assuming d74e01 is the commit immediately after 61cebf:

[
  { "ref": "61cebf" },
  { "ref": "d74e01" },
  { "ref": "7154fe" }
]

The list may be empty, if there are no versions available at the source. If the given version is already the latest, an array with that version as its sole entry should be returned.

If your resource is unable to determine which versions are newer than the given version (e.g. if it's a git commit that was overwritten by a force push), then the current version of your resource should be returned (i.e. the new HEAD).
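To make the contract concrete, here is a sketch of a trivial check for a hypothetical resource that only ever has one version; it satisfies the protocol (consume the request on stdin, emit a JSON array on stdout) without doing any real work:

```shell
# Hypothetical /opt/resource/check for a resource with a single, static
# version. A real implementation would parse the request (e.g. with jq)
# and compare against the configured source.
check() {
  cat > /dev/null            # consume {"source": ..., "version": ...}
  echo '[{"ref":"static"}]'  # the current (and only) version
}
```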

in: Fetch a given resource.

The in script is passed a destination directory as $1, and is given on stdin the configured source and a precise version of the resource to fetch.

The script must fetch the resource and place it in the given directory.

If the desired resource version is unavailable (for example, if it was deleted), the script must error.

The script must emit the fetched version, and may emit metadata as a list of key-value pairs. This data is intended for public consumption; it will be passed upstream and shown on the build's page.

The request will contain the following fields:

  • source is the same value as passed to check.

  • version is the same type of value passed to check, and specifies the version to fetch.

  • params is an arbitrary JSON object passed along verbatim from params on a get.

Example request, in this case for the git resource:

  "source": {
    "uri": "git://some-uri",
    "branch": "develop",
    "private_key": "..."
  "version": { "ref": "61cebf" }

Upon receiving this payload the git resource would probably do something like:

git clone --branch develop git://some-uri "$1"
cd "$1"
git checkout 61cebf

And output:

  "version": { "ref": "61cebf" },
  "metadata": [
    { "name": "commit", "value": "61cebf" },
    { "name": "author", "value": "Hulk Hogan" }

out: Update a resource.

The out script is called with a path to the directory containing the build's full set of sources as the first argument, and is given on stdin the configured params and the resource's source configuration.

The script must emit the resulting version of the resource. For example, the git resource emits the sha of the commit that it just pushed.

Additionally, the script may emit metadata as a list of key-value pairs. This data is intended for public consumption; it will be passed upstream and shown on the build's page.

The request will contain the following fields:

  • source is the same value as passed to check.

  • params is an arbitrary JSON object passed along verbatim from params on a put.

Example request, in this case for the git resource:

  "params": {
    "branch": "develop",
    "repo": "some-repo"
  "source": {
    "uri": "git@...",
    "private_key": "..."

Upon receiving this payload the git resource would probably do something like:

cd "$1/some-repo"
git push origin develop

And output:

  "version": { "ref": "61cebf" },
  "metadata": [
    { "name": "commit", "value": "61cebf" },
    { "name": "author", "value": "Mick Foley" }


When used in a get or a put, metadata about the running build is made available via the following environment variables:


The internal identifier for the build. Right now this is numeric but it may become a guid in the future. Treat it as an absolute reference to the build.


The build number within the build's job.


The name of the build's job.


The pipeline that the build's job lives in.


The team that the build belongs to.


The public URL for your ATC; useful for debugging.

If the build is a one-off, $BUILD_NAME, $BUILD_JOB_NAME, and $BUILD_PIPELINE_NAME will not be set.

None of these variables are available to /check.

These variables should be used solely for annotating things with metadata for traceability, i.e. for linking to the build in an alert or annotating an automated commit so its origin can be discovered.

They should not be used to emulate versioning (e.g. by using the increasing build number). They are not provided to task steps to avoid this anti-pattern.
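For instance, a script running in a put step could construct a link back to the build for a commit annotation or alert (a sketch; the URL layout matches Concourse's web UI, and the variables are set by Concourse at runtime):

```shell
# Build a traceability link from the build metadata environment variables.
# Useful in commit messages or alerts; never use these for versioning.
build_url() {
  echo "$ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME"
}
```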