Concourse Helm Chart

Concourse is a simple and scalable CI system.

TL;DR;

$ helm repo add concourse https://concourse-charts.storage.googleapis.com/
$ helm install my-release concourse/concourse

Introduction

This chart bootstraps a Concourse deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites Details

  • Kubernetes 1.6 (for pod affinity support)
  • PersistentVolume support on underlying infrastructure (if persistence is required)
  • Helm v3.x

Installing the Chart

To install the chart with the release name my-release:

$ helm install my-release concourse/concourse

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes nearly all the Kubernetes components associated with the chart and deletes the release.

PS: By default, a namespace named ${RELEASE}-main is created for the main team and is left untouched by helm delete. See the Configuration section for how to control this behavior.
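
If you want that namespace gone as well, delete it by hand; a minimal sketch, assuming the release name my-release:

$ kubectl delete namespace my-release-main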

Cleanup orphaned Persistent Volumes

This chart uses StatefulSets for Concourse Workers. Deleting a StatefulSet does not delete associated PersistentVolumes.

Do the following after deleting the chart release to clean up orphaned Persistent Volumes.

$ kubectl delete pvc -l app=${RELEASE-NAME}-worker

Restarting workers

If a Worker isn't taking on work, you can recreate it with kubectl delete pod. This initiates a graceful shutdown by "retiring" the worker, to ensure Concourse doesn't try looking for old volumes on the new worker.

The value worker.terminationGracePeriodSeconds can be used to set an upper limit on graceful shutdown time before the container is forcefully terminated.

Check the output of fly workers, and if a worker is stalled, you'll also need to run fly prune-worker to allow the new incarnation of the worker to start.

TIP: you can download fly either from https://concourse-ci.org/download.html or the home page of your Concourse installation.
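
Putting those steps together, a minimal sketch of recreating a stuck worker, assuming the release name my-release and a fly target named main:

# Gracefully retire the worker; the StatefulSet recreates the pod
kubectl delete pod my-release-worker-0

# Check the worker list; prune the old incarnation if it shows as stalled
fly -t main workers
fly -t main prune-worker -w my-release-worker-0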

When using ephemeral workers with worker.kind: Deployment and spawning a lot of (new) workers, you might run into issue 3091. As a workaround, you could add a worker.extraInitContainers entry to clean up unused loopback devices.

Worker Liveness Probe

By default, the worker's LivenessProbe triggers a restart of the worker container if it detects errors when reaching the worker's healthcheck endpoint, which verifies that the worker's components can properly serve their purpose.

See Configuration and values.yaml for the configuration of both the livenessProbe (worker.livenessProbe) and the default healthchecking timeout (concourse.worker.healthcheckTimeout).
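
For instance, a values.yaml sketch that relaxes both knobs (the numbers are illustrative, not recommendations):

worker:
  livenessProbe:
    periodSeconds: 30
    timeoutSeconds: 10

concourse:
  worker:
    healthcheckTimeout: 40s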

Configuration

The following table lists the configurable parameters of the Concourse chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| fullnameOverride | Provide a name to substitute for the full names of resources | nil |
| imageDigest | Specific image digest to use in place of a tag | nil |
| imagePullPolicy | Concourse image pull policy | IfNotPresent |
| imagePullSecrets | Array of imagePullSecrets in the namespace for pulling images | [] |
| imageTag | Concourse image version | 7.14.2 |
| image | Concourse image | concourse/concourse |
| nameOverride | Provide a name in place of concourse for app: labels | nil |
| persistence.enabled | Enable Concourse persistence using Persistent Volume Claims | true |
| persistence.worker.accessMode | Concourse Worker Persistent Volume Access Mode | ReadWriteOnce |
| persistence.worker.size | Concourse Worker Persistent Volume Storage Size | 20Gi |
| persistence.worker.storageClass | Concourse Worker Persistent Volume Storage Class | generic |
| persistence.worker.labels | Concourse Worker Persistent Volume Labels | {} |
| postgresql.enabled | Enable PostgreSQL as a chart dependency | true |
| postgresql.fullnameOverride | Provide a name to substitute for the full name of postgresql resources | nil |
| postgresql.labels | Add additional labels to the postgresql statefulSet | {} |
| postgresql.service.enabled | Enable postgresql service | true |
| postgresql.service.type | Service type | ClusterIP |
| postgresql.service.clusterIPs | Hardcode service IPs | [] |
| postgresql.service.extraSpec | Add extra spec attributes to the postgresql service | {} |
| postgresql.image | Set the image repository | postgres |
| postgresql.imageTag | Set the image tag, exclusive with imageDigest | 17 |
| postgresql.imageDigest | Set the image digest, exclusive with imageTag | "" |
| postgresql.version | Set the postgresql major version; must match the one of your image | 17 |
| postgresql.customPgData | Customize the PG_DATA path; defaults to /var/lib/postgres/{{postgresql.version}}/docker. Adjust dataVolumeMountPath to match the new PG_DATA, e.g. /opt/postgresql/data | "17" |
| postgresql.dataVolumeMountPath | The mountPath of the volume that will contain the PG_DATA, e.g. /opt/postgresql | nil |
| postgresql.securityContext | Add securityContext attributes to the statefulSet | nil |
| postgresql.annotations | Add annotations to the postgresql statefulSet | nil |
| postgresql.secretAnnotations | Add annotations to the secret | nil |
| postgresql.configMapAnnotations | Add annotations to the environment configmap | nil |
| postgresql.configOverride | Override the default postgresql config file | nil |
| postgresql.resources | Set the resources for the statefulSet | {"requests":{"cpu":"250m","ephemeral-storage":"50Mi","memory":"256Mi"},"limits":{"cpu":"500m","ephemeral-storage":"2Gi","memory":"512Mi"}} |
| postgresql.auth.user | Set the postgres user | concourse |
| postgresql.auth.password | Set the postgres password | concourse |
| postgresql.auth.database | Set the postgres database name | concourse |
| postgresql.extraEnvironment | Add extra environment variables | {} |
| postgresql.extraArgs | Add extra arguments to the postgresql command | {} |
| postgresql.commandOverride | Override the command of postgres | [] |
| postgresql.argsOverride | Override the args of postgres | [] |
| postgresql.sensitiveEnvironment | Add extra sensitive env vars (injected via a secret) | {} |
| postgresql.lifecycle | Add a lifecycle attribute to the postgresql container, see the k8s docs | nil |
| postgresql.persistence.enabled | Enable PostgreSQL persistence using Persistent Volume Claims | true |
| postgresql.persistence.pvcNameOverride | Override the name of the pvc template in the postgresql statefulSet. Useful to re-use an existing pvc | "" |
| postgresql.persistence.storageClass | Concourse data Persistent Volume Storage Class | nil |
| postgresql.persistence.accessModes | Persistent Volume Access Mode | ["ReadWriteOnce"] |
| postgresql.persistence.resources | Set storage requests and limits | { "requests": { "storage": "8Gi" } } |
| persistence.worker.selector | Concourse Worker Persistent Volume selector | nil |
| rbac.apiVersion | RBAC version | v1beta1 |
| rbac.create | Enables creation of RBAC resources | true |
| rbac.webServiceAccountName | Name of the service account to use for web pods if rbac.create is false | default |
| rbac.webServiceAccountAnnotations | Any annotations to be attached to the web service account | {} |
| rbac.workerServiceAccountName | Name of the service account to use for workers if rbac.create is false | default |
| rbac.workerServiceAccountAnnotations | Any annotations to be attached to the worker service account | {} |
| podSecurityPolicy.create | Enables creation of podSecurityPolicy resources | false |
| podSecurityPolicy.allowedWorkerVolumes | List of volumes allowed by the podSecurityPolicy for the worker pods | See values.yaml |
| podSecurityPolicy.allowedWebVolumes | List of volumes allowed by the podSecurityPolicy for the web pods | See values.yaml |
| secrets.annotations | Annotations to be added to the secrets | {} |
| secrets.awsSecretsmanagerAccessKey | AWS Access Key ID for Secrets Manager access | nil |
| secrets.awsSecretsmanagerSecretKey | AWS Secret Access Key for Secrets Manager access | nil |
| secrets.awsSecretsmanagerSessionToken | AWS Session Token for Secrets Manager access | nil |
| secrets.awsSsmAccessKey | AWS Access Key ID for SSM access | nil |
| secrets.awsSsmSecretKey | AWS Secret Access Key for SSM access | nil |
| secrets.awsSsmSessionToken | AWS Session Token for SSM access | nil |
| secrets.bitbucketCloudClientId | Client ID for the BitbucketCloud OAuth | nil |
| secrets.bitbucketCloudClientSecret | Client Secret for the BitbucketCloud OAuth | nil |
| secrets.cfCaCert | CA certificate for cf auth provider | nil |
| secrets.cfClientId | Client ID for cf auth provider | nil |
| secrets.cfClientSecret | Client secret for cf auth provider | nil |
| secrets.conjurAccount | Account for Conjur auth provider | nil |
| secrets.conjurAuthnLogin | Host username for Conjur auth provider | nil |
| secrets.conjurAuthnApiKey | API key for host used for Conjur auth provider. Either API key or token file can be used, but not both | nil |
| secrets.conjurAuthnTokenFile | Token file used for Conjur auth provider if running in Kubernetes or IAM. Either token file or API key can be used, but not both | nil |
| secrets.conjurCACert | CA cert used if the Conjur instance is deployed with a self-signed certificate | nil |
| secrets.create | Create the secret resource from the following values. See Secrets | true |
| secrets.credhubCaCert | Value of PEM-encoded CA cert file to use to verify the CredHub server SSL cert | nil |
| secrets.credhubClientId | Client ID for CredHub authorization | nil |
| secrets.credhubClientSecret | Client secret for CredHub authorization | nil |
| secrets.credhubClientKey | Client key for CredHub authorization | nil |
| secrets.credhubClientCert | Client cert for CredHub authorization | nil |
| secrets.encryptionKey | Current encryption key | nil |
| secrets.githubCaCert | CA certificate for GitHub Enterprise OAuth | nil |
| secrets.githubClientId | Application client ID for GitHub OAuth | nil |
| secrets.githubClientSecret | Application client secret for GitHub OAuth | nil |
| secrets.gitlabClientId | Application client ID for GitLab OAuth | nil |
| secrets.gitlabClientSecret | Application client secret for GitLab OAuth | nil |
| secrets.hostKeyPub | Concourse Host Public Key | See values.yaml |
| secrets.hostKey | Concourse Host Private Key | See values.yaml |
| secrets.influxdbPassword | Password used to authenticate with influxdb | nil |
| secrets.ldapCaCert | CA Certificate for LDAP | nil |
| secrets.localUsers | Create concourse local users. Default username and password are test:test | See values.yaml |
| secrets.microsoftClientId | Client ID for Microsoft authorization | nil |
| secrets.microsoftClientSecret | Client secret for Microsoft authorization | nil |
| secrets.oauthCaCert | CA certificate for Generic OAuth | nil |
| secrets.oauthClientId | Application client ID for Generic OAuth | nil |
| secrets.oauthClientSecret | Application client secret for Generic OAuth | nil |
| secrets.oidcCaCert | CA certificate for OIDC OAuth | nil |
| secrets.oidcClientId | Application client ID for OIDC OAuth | nil |
| secrets.oidcClientSecret | Application client secret for OIDC OAuth | nil |
| secrets.oldEncryptionKey | Old encryption key, used for key rotation | nil |
| secrets.postgresCaCert | PostgreSQL CA certificate | nil |
| secrets.postgresClientCert | PostgreSQL Client certificate | nil |
| secrets.postgresClientKey | PostgreSQL Client key | nil |
| secrets.postgresPassword | PostgreSQL User Password | nil |
| secrets.postgresUser | PostgreSQL User Name | nil |
| secrets.samlCaCert | CA Certificate for SAML | nil |
| secrets.sessionSigningKey | Concourse Session Signing Private Key | See values.yaml |
| secrets.syslogCaCert | SSL certificate to verify Syslog server | nil |
| secrets.teamAuthorizedKeys | Array of team names and worker public keys for external workers | nil |
| secrets.vaultAuthParam | Parameter to pass when logging in via the backend | nil |
| secrets.vaultCaCert | CA certificate used to verify the vault server SSL cert | nil |
| secrets.vaultClientCert | Vault Client Certificate | nil |
| secrets.vaultClientKey | Vault Client Key | nil |
| secrets.vaultClientToken | Vault periodic client token | nil |
| secrets.webTlsCert | TLS certificate for the web component to terminate TLS connections | nil |
| secrets.webTlsKey | An RSA private key, used to encrypt HTTPS traffic | nil |
| secrets.webTlsCaCert | TLS CA certificate for the web component to terminate TLS connections | nil |
| secrets.workerKeyPub | Concourse Worker Public Key | See values.yaml |
| secrets.workerKey | Concourse Worker Private Key | See values.yaml |
| secrets.workerAdditionalCerts | Concourse Worker Additional Certificates | See values.yaml |
| web.additionalAffinities | Additional affinities to apply to web pods, e.g. node affinity | {} |
| web.additionalVolumeMounts | VolumeMounts to be added to the web pods | nil |
| web.additionalVolumes | Volumes to be added to the web pods | nil |
| web.annotations | Annotations to be added to the web pods | {} |
| web.authSecretsPath | Specify the mount directory of the web auth secrets | /concourse-auth |
| web.credhubSecretsPath | Specify the mount directory of the web credhub secrets | /concourse-credhub |
| web.datadog.agentHostUseHostIP | Use IP of Pod's node; overrides agentHost | false |
| web.datadog.agentHost | Datadog Agent host | 127.0.0.1 |
| web.datadog.agentPort | Datadog Agent port | 8125 |
| web.datadog.agentUdsFilepath | Datadog agent unix domain socket (uds) filepath to expose dogstatsd metrics (e.g. /tmp/datadog.socket) | nil |
| web.datadog.enabled | Enable or disable Datadog metrics | false |
| web.datadog.prefix | Prefix for emitted metrics | "concourse.ci" |
| web.enabled | Enable or disable the web component | true |
| web.env | Configure additional environment variables for the web containers | [] |
| web.command | Override the docker image command | nil |
| web.args | Docker image command arguments | ["web"] |
| web.ingress.annotations | Concourse Web Ingress annotations | {} |
| web.ingress.enabled | Enable Concourse Web Ingress | false |
| web.ingress.hosts | Concourse Web Ingress Hostnames | [] |
| web.ingress.ingressClassName | IngressClass to register to | nil |
| web.ingress.rulesOverride | Concourse Web Ingress rules (override; alternative to web.ingress.hosts) | [] |
| web.ingress.tls | Concourse Web Ingress TLS configuration | [] |
| web.route.annotations | Concourse Web HTTPRoute annotations | {} |
| web.route.enabled | Enable Concourse Web HTTPRoute | false |
| web.route.hostnames | Concourse Web HTTPRoute Hostnames | [] |
| web.route.parentRefs | Concourse Web HTTPRoute parentRefs (gateways) | [] |
| web.route.labels | Concourse Web HTTPRoute labels | [] |
| web.keySecretsPath | Specify the mount directory of the web keys secrets | /concourse-keys |
| web.labels | Additional labels to be added to the web deployment metadata.labels | {} |
| web.deploymentAnnotations | Additional annotations to be added to the web deployment metadata.annotations | {} |
| web.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
| web.livenessProbe.httpGet.path | Path to access on the HTTP server when performing the healthcheck | /api/v1/info |
| web.livenessProbe.httpGet.port | Name or number of the port to access on the container | atc |
| web.livenessProbe.initialDelaySeconds | Number of seconds after the container has started before liveness probes are initiated | 10 |
| web.livenessProbe.periodSeconds | How often (in seconds) to perform the probe | 15 |
| web.livenessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 |
| web.nameOverride | Override the Concourse Web components name | nil |
| web.nodeSelector | Node selector for web nodes | {} |
| web.podLabels | Additional labels to be added to the web deployment spec.template.metadata.labels, setting pods metadata.labels | {} |
| web.postgresqlSecretsPath | Specify the mount directory of the web postgresql secrets | /concourse-postgresql |
| web.prometheus.enabled | Enable the Prometheus metrics endpoint | false |
| web.prometheus.bindIp | IP to listen on to expose Prometheus metrics | 0.0.0.0 |
| web.prometheus.bindPort | Port to listen on to expose Prometheus metrics | 9391 |
| web.prometheus.ServiceMonitor.enabled | Enable the creation of a ServiceMonitor object for the Prometheus operator | false |
| web.prometheus.ServiceMonitor.interval | The interval at which the Prometheus endpoint is scraped | 30s |
| web.prometheus.ServiceMonitor.namespace | The namespace where the ServiceMonitor object has to be created | nil |
| web.prometheus.ServiceMonitor.labels | Additional labels for the ServiceMonitor object | nil |
| web.prometheus.ServiceMonitor.metricRelabelings | Relabel metrics as defined here | nil |
| web.readinessProbe.httpGet.path | Path to access on the HTTP server when performing the healthcheck | /api/v1/info |
| web.readinessProbe.httpGet.port | Name or number of the port to access on the container | atc |
| web.replicas | Number of Concourse Web replicas | 1 |
| web.resources.requests.cpu | Minimum amount of cpu resources requested | 100m |
| web.resources.requests.memory | Minimum amount of memory resources requested | 128Mi |
| web.service.api.annotations | Concourse Web API Service annotations | nil |
| web.service.api.NodePort | Sets the nodePort for api when using NodePort | nil |
| web.service.api.labels | Additional concourse web api service labels | nil |
| web.service.api.loadBalancerIP | The IP to use when web.service.api.type is LoadBalancer | nil |
| web.service.api.clusterIP | The IP to use when web.service.api.type is ClusterIP | nil |
| web.service.api.loadBalancerSourceRanges | Concourse Web API Service Load Balancer Source IP ranges | nil |
| web.service.api.tlsNodePort | Sets the nodePort for api tls when using NodePort | nil |
| web.service.api.type | Concourse Web API service type | ClusterIP |
| web.service.api.port.name | Sets the port name for web service with targetPort atc | atc |
| web.service.api.tlsPort.name | Sets the port name for web service with targetPort atc-tls | atc-tls |
| web.service.workerGateway.annotations | Concourse Web workerGateway Service annotations | nil |
| web.service.workerGateway.labels | Additional concourse web workerGateway service labels | nil |
| web.service.workerGateway.loadBalancerIP | The IP to use when web.service.workerGateway.type is LoadBalancer | nil |
| web.service.workerGateway.clusterIP | The IP to use when web.service.workerGateway.type is ClusterIP | None |
| web.service.workerGateway.loadBalancerSourceRanges | Concourse Web workerGateway Service Load Balancer Source IP ranges | nil |
| web.service.workerGateway.NodePort | Sets the nodePort for workerGateway when using NodePort | nil |
| web.service.workerGateway.type | Concourse Web workerGateway service type | ClusterIP |
| web.service.prometheus.annotations | Concourse Web Prometheus Service annotations | nil |
| web.service.prometheus.labels | Additional concourse web prometheus service labels | nil |
| web.shareProcessNamespace | Enable or disable process namespace sharing for the web nodes | false |
| web.priorityClassName | Sets a PriorityClass for the web pods | nil |
| web.sidecarContainers | Array of extra containers to run alongside the Concourse web container | nil |
| web.databaseInitContainers | Array of database init containers to run before the Concourse database migrations are applied | nil |
| web.extraInitContainers | Array of extra init containers to run before the Concourse web container | nil |
| web.strategy | Strategy for updates to the deployment | {} |
| web.syslogSecretsPath | Specify the mount directory of the web syslog secrets | /concourse-syslog |
| web.tlsSecretsPath | Where in the container the web TLS secrets should be mounted | /concourse-web-tls |
| web.tolerations | Tolerations for the web nodes | [] |
| web.vaultSecretsPath | Specify the mount directory of the web vault secrets | /concourse-vault |
| web.vault.tokenPath | Specify the path to a file containing a vault client authentication token | nil |
| worker.additionalAffinities | Additional affinities to apply to worker pods, e.g. node affinity | {} |
| worker.additionalVolumeMounts | VolumeMounts to be added to the worker pods | nil |
| worker.additionalPorts | Additional ports to be added to worker pods | [] |
| worker.additionalVolumes | Volumes to be added to the worker pods | nil |
| worker.annotations | Annotations to be added to the worker pods | {} |
| worker.autoscaling | Enable and configure pod autoscaling | {} |
| worker.cleanUpWorkDirOnStart | Removes any previous state created in concourse.worker.workDir | true |
| worker.emptyDirSize | When persistence is disabled this value will be used to limit the emptyDir volume size | nil |
| worker.enabled | Enable or disable the worker component. If disabled, you should also set postgresql.enabled=false to avoid deploying an unnecessary PostgreSQL chart | true |
| worker.env | Configure additional environment variables for the worker container(s) | [] |
| worker.hardAntiAffinity | Should the workers be forced (as opposed to preferred) to be on different nodes? | false |
| worker.hardAntiAffinityLabels | Set of labels used for the hard anti affinity rule | {} |
| worker.keySecretsPath | Specify the mount directory of the worker keys secrets | /concourse-keys |
| worker.deploymentAnnotations | Additional annotations to be added to the worker deployment metadata.annotations | {} |
| worker.certsPath | Specify the path for additional worker certificates | /etc/ssl/certs |
| worker.kind | Choose between StatefulSet to preserve state or Deployment for ephemeral workers | StatefulSet |
| worker.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
| worker.livenessProbe.httpGet.path | Path to access on the HTTP server when performing the healthcheck | / |
| worker.livenessProbe.httpGet.port | Name or number of the port to access on the container | worker-hc |
| worker.livenessProbe.initialDelaySeconds | Number of seconds after the container has started before liveness probes are initiated | 10 |
| worker.livenessProbe.periodSeconds | How often (in seconds) to perform the probe | 15 |
| worker.livenessProbe.timeoutSeconds | Number of seconds after which the probe times out | 3 |
| worker.minAvailable | Minimum number of workers available after an eviction | 1 |
| worker.nameOverride | Override the Concourse Worker components name | nil |
| worker.nodeSelector | Node selector for worker nodes | {} |
| worker.podManagementPolicy | OrderedReady or Parallel (requires Kubernetes >= 1.7) | Parallel |
| worker.readinessProbe | Periodic probe of container service readiness | {} |
| worker.replicas | Number of Concourse Worker replicas | 2 |
| worker.resources.requests.cpu | Minimum amount of cpu resources requested | 100m |
| worker.resources.requests.memory | Minimum amount of memory resources requested | 512Mi |
| worker.sidecarContainers | Array of extra containers to run alongside the Concourse worker container | nil |
| worker.extraInitContainers | Array of extra init containers to run before the Concourse worker container | nil |
| worker.priorityClassName | Sets a PriorityClass for the worker pods | nil |
| worker.terminationGracePeriodSeconds | Upper bound for graceful shutdown to allow the worker to drain its tasks | 60 |
| worker.tolerations | Tolerations for the worker nodes | [] |
| worker.persistentVolumeClaimRetentionPolicy | Retain or Delete (requires Kubernetes >= 1.32) | Retain |
| worker.updateStrategy | OnDelete or RollingUpdate (requires Kubernetes >= 1.7) | RollingUpdate |

For configurable Concourse parameters, refer to the concourse section of values.yaml. All parameters under this section are strictly mapped from the concourse binary's commands.

For example, to configure the Concourse external URL, set the param concourse -> web -> externalUrl, which is equivalent to running the concourse binary as concourse web --external-url.
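
In values.yaml form, that example looks as follows (the URL is a placeholder):

concourse:
  web:
    externalUrl: https://concourse.example.com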

For sub-sections that have an enabled flag, enabled must be set to true for the other params in that section to take effect.

Specify each parameter using the --set key=value[,key=value] argument to helm install.
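
For example, to scale out the workers at install time:

$ helm install my-release concourse/concourse --set worker.replicas=3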

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install my-release -f values.yaml concourse/concourse

Tip: You can use the default values.yaml

Secrets

For your convenience, this chart provides some default values for secrets, but it is recommended that you generate and manage these secrets outside the Helm chart.

To do that, set secrets.create to false, create files for each secret value, and turn it all into a Kubernetes Secret.

Be careful with introducing trailing newline characters; following the steps below ensures none end up in your secrets. First, perform the following to create the mandatory secret values:

# Create a directory to host the set of secrets that are
# required for a working Concourse installation and get
# into it.
#
mkdir concourse-secrets
cd concourse-secrets

Concourse needs three sets of key-pairs in order to work:

  • web key pair,
  • worker key pair, and
  • the session signing token.

You can generate all three key-pairs by following either of these two methods:

Concourse Binary

docker run -v $PWD:/keys --rm -it concourse/concourse generate-key -t rsa -f /keys/session-signing-key
docker run -v $PWD:/keys --rm -it concourse/concourse generate-key -t ssh -f /keys/worker-key
docker run -v $PWD:/keys --rm -it concourse/concourse generate-key -t ssh -f /keys/host-key
rm session-signing-key.pub

ssh-keygen

ssh-keygen -t rsa -f host-key -N '' -m PEM
ssh-keygen -t rsa -f worker-key -N '' -m PEM
ssh-keygen -t rsa -f session-signing-key -N '' -m PEM
rm session-signing-key.pub

Optional Features

You'll also need to create/copy secret values for optional features. See templates/web-secrets.yaml and templates/worker-secrets.yaml for possible values.

In the example below, we are not using the PostgreSQL chart dependency, and so we must set postgresql-user and postgresql-password secrets.

# Still within the directory where our secrets exist,
# copy a postgres user to clipboard and paste it to file.
#
printf "%s" "$(pbpaste)" > postgresql-user

# Copy a postgres password to clipboard and paste it to file
#
printf "%s" "$(pbpaste)" > postgresql-password

# Copy Github client id and secrets to clipboard and paste to files
#
printf "%s" "$(pbpaste)" > github-client-id
printf "%s" "$(pbpaste)" > github-client-secret

# Set an encryption key for DB encryption at rest
#
printf "%s" "$(openssl rand -base64 24)" > encryption-key

# Create a local user for concourse.
#
printf "%s:%s" "concourse" "$(openssl rand -base64 24)" > local-users

Creating the Secrets

Make a directory for each component's secrets, then move the generated credentials into the appropriate directories.

mkdir concourse web worker

# worker secrets
mv host-key.pub worker/host-key-pub
mv worker-key.pub worker/worker-key-pub
mv worker-key worker/worker-key

# web secrets
mv session-signing-key web/session-signing-key
mv host-key web/host-key
cp worker/worker-key-pub web/worker-key-pub
mv local-users web/local-users

# other concourse secrets (there may be more than the 3 listed below)
mv encryption-key concourse/encryption-key
mv postgresql-password concourse/postgresql-password
mv postgresql-user concourse/postgresql-user

Then create the secrets from each of the 3 directories:

kubectl create secret generic [my-release]-worker --from-file=worker/

kubectl create secret generic [my-release]-web --from-file=web/

kubectl create secret generic [my-release]-concourse --from-file=concourse/
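
With the secrets created out-of-band, install the chart with secret creation disabled so it picks them up by name:

$ helm install my-release concourse/concourse --set secrets.create=false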

Make sure you clean up the local secret files after yourself.

Persistence

This chart mounts a Persistent Volume for each Concourse Worker.

The volume is created using dynamic volume provisioning.

If you want to disable it or change the persistence properties, update the persistence section of your custom values.yaml file:

## Persistent Volume Storage configuration.
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
##
persistence:
  ## Enable persistence using Persistent Volume Claims.
  ##
  enabled: true

  ## Worker Persistence configuration.
  ##
  worker:
    ## Persistent Volume Storage Class.
    ##
    storageClass: generic

    ## Persistent Volume Access Mode.
    ##
    accessMode: ReadWriteOnce

    ## Persistent Volume Storage Size.
    ##
    size: "20Gi"

It is highly recommended to use Persistent Volumes for Concourse Workers; otherwise, the Concourse volumes managed by the Worker are stored in an emptyDir volume on the Kubernetes node's disk. This interferes with Kubernetes' image garbage collection (ImageGC), and the node's disk will fill up as a result.

Ingress TLS

If your cluster allows automatic creation/retrieval of TLS certificates (e.g. cert-manager), please refer to the documentation for that mechanism.

To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:

kubectl create secret tls concourse-web-tls --cert=path/to/tls.cert --key=path/to/tls.key

Include the secret's name, along with the desired hostnames, in the web.ingress.tls section of your custom values.yaml file:

## Configuration values for Concourse Web components.
##
web:
  ## Ingress configuration.
  ## ref: https://kubernetes.io/docs/user-guide/ingress/
  ##
  ingress:
    ## Enable ingress.
    ##
    enabled: true

    ## Hostnames.
    ## Either `hosts` or `rulesOverride` must be provided if Ingress is enabled.
    ## `hosts` sets up the Ingress with default rules per provided hostname.
    ##
    hosts:
      - concourse.domain.com

    ## Ingress rules override
    ## Either `hosts` or `rulesOverride` must be provided if Ingress is enabled.
    ## `rulesOverride` allows the user to define the full set of ingress rules, for more complex Ingress setups.
    ##
    ##
    rulesOverride:
      - host: concourse.domain.com
        http:
          paths:
            - path: '/*'
              backend:
                serviceName: "ssl-redirect"
                servicePort: "use-annotation"
            - path: '/*'
              backend:
                serviceName: "concourse-web"
                servicePort: atc

    ## TLS configuration.
    ## Secrets must be manually created in the namespace.
    ##
    tls:
      - secretName: concourse-web-tls
        hosts:
          - concourse.domain.com

PostgreSQL

By default, this chart deploys a single postgresql instance as a statefulSet; the connection details are shared with concourse automatically. You can change the connection details using the attributes under postgresql.auth.

You can also bring your own PostgreSQL. To do so, set postgresql.enabled to false, and then configure Concourse's postgres values (concourse.web.postgres.*); see values.yaml.
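
As a sketch, a values.yaml for an external database might look like the following; the host is hypothetical, and the exact keys under concourse.web.postgres should be double-checked against values.yaml:

postgresql:
  enabled: false

concourse:
  web:
    postgres:
      host: postgres.example.com
      port: 5432
      database: concourse

secrets:
  postgresUser: concourse
  postgresPassword: changeme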

Note that Concourse (by default) will attempt to read in some values directly from secrets. Therefore, these values do not have to be explicitly passed into Concourse as individual arguments, or as members of lists or files. (e.g. Part of values.yaml, etc.) Some examples of these secret values are: postgresql-user, postgresql-password (in secret "[my-release]-concourse" or "[my-release]-web"), and others. See templates/web-secrets.yaml for possible values and the secrets section on this README for guidance on how to set those secrets.

Credential Management

Pipelines usually need credentials to do things. Concourse supports the use of a Credential Manager so your pipelines can contain references to secrets instead of the actual secret values. You can't use more than one credential manager at a time.

Kubernetes Secrets

By default, this chart uses Kubernetes Secrets as a credential manager.

For a given Concourse team, a pipeline looks for secrets in a namespace named [namespacePrefix][teamName]. The namespace prefix is the release name followed by a hyphen by default, and can be overridden with the value concourse.web.kubernetes.namespacePrefix. Each team listed under concourse.web.kubernetes.teams will have a namespace created for it, and the namespace remains after deletion of the release unless you set concourse.web.kubernetes.keepNamespace to false. By default, a namespace will be created for the main team.

The service account used by Concourse must have get access to secrets in that namespace. When rbac.create is true, this access is granted for each team listed under concourse.web.kubernetes.teams.

Here are some examples of the lookup heuristics, given release name concourse:

In team accounting-dev, pipeline my-app, the expression ((api-key)) resolves to:

  • the value of key value in secret my-app.api-key, namespace concourse-accounting-dev;
  • and, if not found there, the value of key value in secret api-key, namespace concourse-accounting-dev.

In team accounting-dev, pipeline my-app, the expression ((common-secrets.api-key)) resolves to:

  • the value of key api-key in secret my-app.common-secrets, namespace concourse-accounting-dev;
  • and, if not found there, the value of key api-key in secret common-secrets, namespace concourse-accounting-dev.

Be mindful of your team and pipeline names, to ensure they can be used in namespace and secret names, e.g. no underscores.

To test, create a secret in namespace concourse-main:

kubectl create secret generic hello --from-literal 'value=Hello world!'

Then fly set-pipeline with the following pipeline, and trigger it:

jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: alpine}
      params:
        HELLO: ((hello))
      run:
        path: /bin/sh
        args: ["-c", "echo $HELLO"]

Hashicorp Vault

To use Vault, set concourse.web.kubernetes.enabled to false, and set the following values:

## Configuration values for the Credential Manager.
## ref: https://concourse-ci.org/creds.html
##
concourse:
  web:
    vault:
      ## Use Hashicorp Vault for the Credential Manager.
      ##
      enabled: true

      ## URL pointing to vault addr (i.e. http://vault:8200).
      ##
      url:

      ## vault path under which to namespace credential lookup, defaults to /concourse.
      ##
      pathPrefix:
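
With Vault wired up, an expression like ((hello)) in team main is looked up under the path prefix. A sketch of seeding such a secret, assuming the default /concourse prefix, a KV secrets engine mounted at concourse, and Concourse's default field name value:

vault kv put concourse/main/hello value='Hello world!'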

Credhub

To use Credhub, set concourse.web.kubernetes.enabled to false, and consider the following values:

## Configuration for using Credhub as a credential manager.
## Ref: https://concourse-ci.org/credhub-credential-manager.html
##
concourse:
  web:
    credhub:
      ## Enable the use of Credhub as a credential manager.
      ##
      enabled: true

      ## CredHub server address used to access secrets
      ## Example: https://credhub.example.com
      ##
      url:

      ## Path under which to namespace credential lookup. (default: /concourse)
      ##
      pathPrefix:

      ## Enables using a CA Certificate
      ##
      useCaCert: false

      ## Enables insecure SSL verification.
      ##
      insecureSkipVerify: false

Conjur

To use Conjur, set concourse.web.kubernetes.enabled to false, and set the following values:

## Configuration for using Conjur as a credential manager.
## Ref: https://concourse-ci.org/conjur-credential-manager.html
##
concourse:
  web:
    conjur:
      ## Enable the use of Conjur as a credential manager.
      ##
      enabled: true

      ## Conjur server address used to access secrets
      ## Example: https://conjur.example.com
      ##
      applianceUrl:

      ## Base path used to locate a vault or safe-level secret
      ## Default: vaultName/{{.Secret}}
      ##
      secretTemplate:

      ## Base path used to locate a team-level secret
      ## Default: concourse/{{.Team}}/{{.Secret}}
      ##
      teamSecretTemplate:

      ## Base path used to locate a pipeline-level secret
      ## Default: concourse/{{.Team}}/{{.Pipeline}}/{{.Secret}}
      ##
      pipelineSecretTemplate:
secrets:
  # Org account.
  conjurAccount:

  # Host username. E.g host/concourse
  conjurAuthnLogin:

  # Api key related to the host.
  conjurAuthnApiKey:

  # Token file used if conjur instance is running in k8s or iam. E.g. /path/to/token_file
  conjurAuthnTokenFile:

  # CA Certificate to specify if conjur instance is deployed with a self-signed cert
  conjurCACert:

You can specify either conjurAuthnApiKey, which corresponds to the Conjur host, or conjurAuthnTokenFile if running in K8s or IAM, but not both.

If your Conjur instance is deployed with a self-signed SSL certificate, you will need to set the conjurCACert property in your values.yaml.

AWS Systems Manager Parameter Store (SSM)

To use SSM, set concourse.web.kubernetes.enabled to false, and set concourse.web.awsSsm.enabled to true.

Authentication can be configured to use an access key and secret key, as well as a session token, by setting concourse.web.awsSsm.keyAuth.enabled to true. If it is set to false, AWS IAM role-based authentication (instance or pod credentials) is assumed instead. To use a session token, concourse.web.awsSsm.useSessionToken should be set to true. The secret values can be managed using the values specified in this helm chart or separately. For more details, see https://concourse-ci.org/creds.html#ssm.

For a given Concourse team, a pipeline looks for secrets in SSM using either /concourse/{team}/{secret} or /concourse/{team}/{pipeline}/{secret}; the patterns can be overridden using the concourse.web.awsSsm.teamSecretTemplate and concourse.web.awsSsm.pipelineSecretTemplate settings.

Concourse requires AWS credentials which are able to read from SSM for this feature to function. Credentials can be set in the secrets.awsSsm* settings; if your cluster is running in a different AWS region, you may also need to set concourse.web.awsSsm.region.

The minimum IAM policy you need to use SSM with Concourse is:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "kms:Decrypt",
      "Resource": "<kms-key-arn>",
      "Effect": "Allow"
    },
    {
      "Action": "ssm:GetParameter*",
      "Resource": "<...arn...>:parameter/concourse/*",
      "Effect": "Allow"
    }
  ]
}

Where <kms-key-arn> is the ARN of the KMS key used to encrypt the secrets in Parameter Store, and the <...arn...> should be replaced with a correct ARN for your account and region's Parameter Store.
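
To seed a parameter that a pipeline in team main can then resolve as ((api-key)), something like the following should work (name and value are illustrative):

aws ssm put-parameter \
  --name /concourse/main/api-key \
  --type SecureString \
  --value 'some-secret-value'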

AWS Secrets Manager

To use Secrets Manager, set concourse.web.kubernetes.enabled to false, and set concourse.web.awsSecretsManager.enabled to true.

Authentication can be configured to use an access key and secret key, as well as a session token, by setting concourse.web.awsSecretsManager.keyAuth.enabled to true. If it is set to false, AWS IAM role-based authentication (instance or pod credentials) is assumed instead. To use a session token, concourse.web.awsSecretsManager.useSessionToken should be set to true. The secret values can be managed using the values specified in this helm chart or separately. For more details, see https://concourse-ci.org/creds.html#asm.

For a given Concourse team, a pipeline looks for secrets in Secrets Manager using either /concourse/{team}/{secret} or /concourse/{team}/{pipeline}/{secret}; the patterns can be overridden using the concourse.web.awsSecretsManager.teamSecretTemplate and concourse.web.awsSecretsManager.pipelineSecretTemplate settings.

Concourse requires AWS credentials which are able to read from Secrets Manager for this feature to function. Credentials can be set in the secrets.awsSecretsmanager* settings; if your cluster is running in a different AWS region, you may also need to set concourse.web.awsSecretsManager.region.

The minimum IAM policy you need to use Secrets Manager with Concourse is:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToSecretManagerParameters",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:ListSecrets"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowAccessGetSecret",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": [
        "arn:aws:secretsmanager:::secret:/concourse/*"
      ]
    }
  ]
}
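
Seeding a secret follows the same path convention described above; an illustrative example for team main:

aws secretsmanager create-secret \
  --name /concourse/main/api-key \
  --secret-string 'some-secret-value'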

Developing

When adding a new Concourse flag, don't assign a default value in values.yaml that mirrors a default set by the Concourse binary.

Instead, you may add a comment specifying the default, such as

    ## pipeline-specific template for SSM parameters, defaults to: /concourse/{{.Team}}/{{.Pipeline}}/{{.Secret}}
    ##
    pipelineSecretTemplate:

This prevents the chart's behaviour from drifting from that of the binary if the binary's default values change.

We understand that the comment stating the binary's default can become stale. The current solution is a suboptimal one. It may be improved in the future by generating a list of the default values from the binary.

Helm Unit Test

When running unit tests for helm, from the root of the repository, you can simply run the following.

helm unittest -f test/unittest/**/*.yaml .

If you are debugging specific tests, simply target the folder or yaml file you want to run tests on.

Folder

helm unittest -f test/unittest/gateway-apis/*.yaml .

Specific Test Suite

helm unittest -f test/unittest/gateway-apis/web-route-test.yaml .
