Standalone Binary

At some point you may want to start putting Concourse onto real hardware. A binary distribution is available in the downloads section.

The binary is fairly self-contained, making it ideal for tossing onto a VM by hand or orchestrating it with Docker, Chef, or other ops tooling.

Prerequisites

  • Grab the appropriate binary for your platform from the downloads section.

  • For Linux you'll need kernel v3.19 or later, with user namespace support enabled (a quick way to check is sketched after this list). Windows and Darwin don't really need anything special.

  • PostgreSQL 9.3+
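
If you're not sure whether a Linux machine meets the kernel requirement, something like the following is one quick check. It assumes your distro ships its kernel config at /boot/config-<version>; some distros expose this elsewhere, so treat it as a sketch rather than a universal recipe:

uname -r
grep CONFIG_USER_NS /boot/config-"$(uname -r)"

A kernel at or above 3.19 and a CONFIG_USER_NS=y line are what you're looking for.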

Generating Keys

To run Concourse securely you'll need to generate 3 private keys (well, 2, plus 1 for each worker):

session_signing_key (currently must be RSA)

Used for signing user session tokens, and by the TSA to sign its own tokens in the requests it makes to the ATC.

tsa_host_key

Used for the TSA's SSH server. This is the key whose fingerprint the ssh command shows when it warns you about connecting to a host it hasn't seen before.

worker_key (one per worker)

Used for authorizing worker registration. There can actually be an arbitrary number of these keys; they are just listed to authorize worker SSH access.

To generate these keys, run:

ssh-keygen -t rsa -f tsa_host_key -N ''
ssh-keygen -t rsa -f worker_key -N ''
ssh-keygen -t rsa -f session_signing_key -N ''

...and we'll also start on an authorized_keys file, currently listing this initial worker key:

cp worker_key.pub authorized_worker_keys
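
If you later generate keys for additional workers, just append their public halves to this same file. Here another_worker_key.pub is a hypothetical second key, generated the same way as above:

cat another_worker_key.pub >> authorized_worker_keys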

Starting the Web UI & Scheduler

The concourse binary embeds the ATC and TSA components, available as the web subcommand.

The ATC is the component responsible for scheduling builds, and also serves as the web UI and API.

The TSA provides an SSH interface for securely registering workers, even if they live in their own private network.

Single node, local Postgres

The following command will spin up the ATC, listening on port 8080, with some basic auth configured, and a TSA listening on port 2222.

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --external-url http://my-ci.example.com

This assumes you have a local Postgres server running on the default port (5432) with an atc database, accessible by the current user. If your database lives elsewhere, just specify the --postgres-data-source flag, which is also demonstrated below.
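
If you haven't created that database yet, something like the following should do it on a typical local install (this assumes your current user is allowed to create databases; otherwise run it as the postgres user):

createdb atc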

Be sure to replace the --external-url flag with the URI you expect to use to reach your Concourse server.

In the above example we've configured basic auth for the main team. For further configuration see Configuring Auth.

Cluster with remote Postgres

The ATC can be scaled up for high availability; the instances will also roughly share their scheduling workloads, using the database to synchronize.

The TSA can also be scaled up, and requires no database as there's no state to synchronize (it just talks to the ATC).

A typical configuration with multiple ATC+TSA nodes would have them sitting behind a load balancer, forwarding port 80 to 8080, 443 to 4443 (if you've enabled TLS), and 2222 to 2222.

To run multiple web nodes, you'll need to pass the following flags:

  • --postgres-data-source should all refer to the same database

  • --peer-url should be a URL used to reach the individual ATC, from other ATCs, i.e. a URL usable within their private network

  • --external-url should be the URL used to reach any ATC, i.e. the URL to your load balancer

For example:

Node 0:

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-data-source postgres://user:pass@10.0.32.0/concourse \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.10:8080

Node 1 (only difference is --peer-url):

concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key session_signing_key \
  --tsa-host-key tsa_host_key \
  --tsa-authorized-keys authorized_worker_keys \
  --postgres-data-source postgres://user:pass@10.0.32.0/concourse \
  --external-url https://ci.example.com \
  --peer-url http://10.0.16.11:8080
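
As a rough sanity check that each node is up, you can hit the ATC's info endpoint (which reports the Concourse version) on each peer URL and through the load balancer. The addresses below are just the example addresses used above:

curl http://10.0.16.10:8080/api/v1/info
curl http://10.0.16.11:8080/api/v1/info
curl https://ci.example.com/api/v1/info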

Starting Workers

Workers are Garden servers, continuously heartbeating their presence to the Concourse API. Workers have a statically configured platform and a set of tags, both of which determine where steps in a Build Plan are scheduled.

You may want a few workers, depending on the resource usage of your pipeline. There should be one per machine; running multiple on one box doesn't really make sense, as each worker runs as many containers as Concourse requests of it.

To spin up a worker and register it with your Concourse cluster running locally, run:

sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1 \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key

Note that the worker must be run as root, as it orchestrates containers.

The --work-dir flag specifies where container data should be placed; make sure it has plenty of disk space available, as it's where all the disk usage across your builds and resources will end up.

The --tsa-host refers to wherever your TSA node is listening, by default on port 2222 (pass --tsa-port if you've configured it differently). This may be an address to a load balancer if you're running multiple web nodes, or just an IP, perhaps 127.0.0.1 if you're running everything on one box.
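For example, a worker joining the cluster from the earlier example would point at the load balancer rather than 127.0.0.1; add --tsa-port only if you've moved the TSA off 2222. The hostname below is just the example address from above:

sudo concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host ci.example.com \
  --tsa-public-key tsa_host_key.pub \
  --tsa-worker-private-key worker_key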

The --tsa-public-key flag is used to ensure we're connecting to the TSA we should be connecting to, and is used like known_hosts with the ssh command. Refer to Generating Keys if you're not sure what this means.

The --tsa-worker-private-key flag specifies the key to use when authenticating to the TSA. Refer to Generating Keys if you're not sure what this means.
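
Once the worker is running, one way to confirm it has registered is via fly, assuming you've already downloaded it; the ci target name here is arbitrary:

fly -t ci login -c https://ci.example.com
fly -t ci workers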