# claranet-tfwrapper

`tfwrapper` is a Python wrapper for OpenTofu and legacy Terraform that aims to simplify their usage and enforce best practices.

Note: the term Terraform is used in this documentation when talking about generic concepts like providers, modules, stacks and the HCL-based domain-specific language.
## Table Of Contents

- [Features](#features)
- [Setup Dependencies](#setup-dependencies)
- [Runtime Dependencies](#runtime-dependencies)
- [Installation](#installation)
- [Required files](#required-files)
- [Configuration](#configuration)
- [States centralization configuration](#states-centralization-configuration)
- [Stacks file structure](#stacks-file-structure)
- [Usage](#usage)
- [Environment](#environment)
- [Development](#development)
## Features
- OpenTofu and Terraform behaviour overriding
- State centralization enforcement
- Standardized file structure
- Stack initialization from templates
- AWS credentials caching
- Azure credentials loading (both Service Principal and User modes)
- GCP and GKE user ADC support
- Plugins caching
- Tab completion
## Setup Dependencies

- `python3` >= 3.8.1, < 4.0
- `python3-pip`
- `python3-venv`
- `pipx` (recommended)
## Runtime Dependencies

### Recommended setup

- OpenTofu 1.6+ (recommended) or Terraform 1.0+ (warning: Terraform versions from 1.6 onwards are no longer open source, which may cause legal issues depending on the context in which you use them).
- An AWS S3 bucket and DynamoDB table for state centralization in AWS.
- An Azure Blob Storage container for state centralization in Azure.
## Installation

tfwrapper should be installed using pipx (recommended) or pip:

```bash
pipx install claranet-tfwrapper
```
### Setup command-line completion

Add the following to your shell's interactive configuration file, e.g. `.bashrc` for bash:

```bash
eval "$(register-python-argcomplete tfwrapper -e tfwrapper)"
```

You can then press the completion key (usually `Tab ↹`) twice to get your partially typed `tfwrapper` commands completed.

Note: the `-e tfwrapper` parameter adds a suffix to the defined `_python_argcomplete` function to avoid clashes with other packages (see https://github.com/kislyuk/argcomplete/issues/310#issuecomment-697168326 for context).
### Upgrade from tfwrapper v7 or older

If you used versions of the wrapper older than v8, there is not much to do when upgrading to v8 except a little cleanup: the wrapper is no longer installed as a git submodule of your project as previously instructed, and there is no longer any `Makefile` to activate it.

Just clean up each project by removing the `Makefile` and destroying the `.wrapper` submodule:

```bash
git rm -f Makefile
git submodule deinit .wrapper
rm -rf .git/modules/.wrapper
git rm -f .wrapper
```

Then check the staged changes and commit them.
## Required files

tfwrapper expects multiple files and directories at the root of a project.

### conf

Stacks configurations are stored in the `conf` directory.

### templates

The `templates` directory is used to store the state backend configuration template and the Terraform stack templates used to initialize new stacks. Using a git submodule is recommended.

The following files are required:

- `templates/{provider}/common/state.tf.jinja2`: AWS S3 or Azure Blob Storage state backend configuration template.
- `templates/{provider}/basic/main.tf`: the default Terraform configuration for new stacks. The whole `templates/{provider}/basic` directory is copied on stack initialization.
For example with AWS:

```bash
mkdir -p templates/aws/common templates/aws/basic

cat << 'EOF' > templates/aws/common/state.tf.jinja2
{% if region is not none %}
{% set region = '/' + region + '/' %}
{% else %}
{% set region = '/' %}
{% endif %}
terraform {
  backend "s3" {
    bucket = "my-centralized-terraform-states-bucket"
    key    = "{{ client_name }}/{{ account }}/{{ environment }}{{ region }}{{ stack }}/terraform.state"
    region = "eu-west-1"

    dynamodb_table = "my-terraform-states-lock-table"
  }
}

resource "null_resource" "state-test" {}
EOF

cat << 'EOF' > templates/aws/basic/main.tf
provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  token      = var.aws_token
}
EOF
```
For example with Azure:

```bash
mkdir -p templates/azure/common templates/azure/basic

cat << 'EOF' > templates/azure/common/state.tf.jinja2
{% if region is not none %}
{% set region = '/' + region + '/' %}
{% else %}
{% set region = '/' %}
{% endif %}
terraform {
  backend "azurerm" {
    subscription_id      = "00000000-0000-0000-0000-000000000000"
    resource_group_name  = "my-resource-group"
    storage_account_name = "my-centralized-terraform-states-account"
    container_name       = "terraform-states"
    key                  = "{{ client_name }}/{{ account }}/{{ environment }}{{ region }}{{ stack }}/terraform.state"
  }
}
EOF

cat << 'EOF' > templates/azure/basic/main.tf
provider "azurerm" {
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_tenant_id
}
EOF
```
### .run

The `.run` directory is used for credentials caching and plan storage.

```bash
mkdir .run
cat << 'EOF' > .run/.gitignore
*
!.gitignore
EOF
```
### .gitignore

Adding the following `.gitignore` at the root of your project is recommended:

```bash
cat << 'EOF' > .gitignore
.terraform
terraform.tfstate
terraform.tfstate.backup
terraform.tfvars
EOF
```
## Configuration

tfwrapper uses YAML files stored in the `conf` directory of the project.

### tfwrapper configuration

tfwrapper uses some default behaviors that can be overridden or modified via a `config.yml` file in the `conf` directory:

```yaml
---
always_trigger_init: False
pipe_plan_command: "cat"
use_local_azure_session_directory: False
```
### Stacks configurations

Stack configuration files use the following naming convention:

```
conf/${account}_${environment}_${region}_${stack}.yml
```
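As an illustration, with hypothetical account and stack names, valid configuration file names look like:

```
conf/aws-account-1_production_eu-west-1_default.yml
conf/aws-account-1_global_default.yml
```

(the second form is for global stacks, see the bootstrap section below).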
Here is an example for an AWS stack configuration:

```yaml
---
state_configuration_name: "aws"

aws:
  general:
    account: &aws_account "xxxxxxxxxxx"
    region: &aws_region eu-west-1
  credentials:
    profile: my-aws-profile

terraform:
  legacy: false
  version: "1.0"
  vars:
    aws_account: *aws_account
    aws_region: *aws_region
    client_name: my-client-name
```
Here is an example configuration for an Azure stack using user mode with an AWS S3 backend for state storage:

```yaml
---
state_configuration_name: "aws-demo"

azure:
  general:
    mode: user
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000"
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111"

terraform:
  legacy: false
  version: "1.0"
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name
```

User mode relies on your own user account linked to a Microsoft Account. You must have access to the Azure subscription if you want to use Terraform.
Here is an example configuration for an Azure stack using Service Principal mode:

```yaml
---
azure:
  general:
    mode: service_principal
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000"
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111"
  credentials:
    profile: customer-profile

terraform:
  legacy: false
  version: "1.0"
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name
```

The wrapper uses the Service Principal's credentials to connect to the Azure subscription. The given Service Principal must have access to the subscription.

The wrapper loads the `client_id`, `client_secret` and `tenant_id` properties of the given profile from your `~/.azurerm/config.yml` file.

Example `~/.azurerm/config.yml` file structure:
```yaml
---
claranet-sandbox:
  client_id: aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz
  client_secret: AAbbbCCCzzz==
  tenant_id: 00000000-0000-0000-0000-000000000000

customer-profile:
  client_id: aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz
  client_secret: AAbbbCCCzzz==
  tenant_id: 00000000-0000-0000-0000-000000000000
```
Here is an example for a GCP/GKE stack with user ADC and multiple GKE instances:

```yaml
---
gcp:
  general:
    mode: adc-user
    project: &gcp_project project-name
  gke:
    - name: kubernetes-1
      zone: europe-west1-c
    - name: kubernetes-2
      region: europe-west1

terraform:
  legacy: false
  version: "1.0"
  vars:
    gcp_region: europe-west1
    gcp_zone: europe-west1-c
    gcp_project: *gcp_project
    client_name: client-name
```
You can declare multiple provider configurations; the context is set up accordingly.

⚠ This feature is only supported for Azure stacks for now and only works with Azure authentication isolation.

```yaml
---
azure:
  general:
    mode: service_principal
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000"
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111"
  credentials:
    profile: customer-profile
  alternative:
    mode: service_principal
    directory_id: "00000000-0000-0000-0000-000000000000"
    subscription_id: "22222222-2222-2222-2222-222222222222"
    credentials:
      profile: claranet-sandbox

terraform:
  version: "1.0"
  legacy: false
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name
```
This configuration is useful when you have multiple Service Principals, each with a dedicated rights scope.

The wrapper will generate the following Terraform variables that can be used in your stack:

- `<config_name>_azure_subscription_id`: the Azure subscription ID. From the example above: `alternative_azure_subscription_id = "22222222-2222-2222-2222-222222222222"`
- `<config_name>_azure_tenant_id`: the Azure tenant ID. From the example above: `alternative_azure_tenant_id = "00000000-0000-0000-0000-000000000000"`
- `<config_name>_azure_client_id`: the Service Principal client ID. From the example above: `alternative_azure_client_id = "aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz"`
- `<config_name>_azure_client_secret`: the Service Principal client secret. From the example above: `alternative_azure_client_secret = "AAbbbCCCzzz=="`

Also, an isolation context is set to the local `.run/azure_<config_name>` directory for each configuration.
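For illustration, such variables can be declared and consumed in an aliased provider block. This is a minimal sketch: the variable names follow the `alternative` configuration above, and the alias name is an arbitrary choice.

```hcl
variable "alternative_azure_subscription_id" {}
variable "alternative_azure_tenant_id" {}
variable "alternative_azure_client_id" {}
variable "alternative_azure_client_secret" {}

# Aliased provider using the credentials of the "alternative" configuration.
provider "azurerm" {
  alias           = "alternative"
  subscription_id = var.alternative_azure_subscription_id
  tenant_id       = var.alternative_azure_tenant_id
  client_id       = var.alternative_azure_client_id
  client_secret   = var.alternative_azure_client_secret
  features {}
}
```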
## States centralization configuration

The `conf/state.yml` configuration file defines the configuration used to connect to state backends. These backends can be of AWS S3 and/or AzureRM types.

The resources for these backends are not created by tfwrapper, and thus must exist beforehand:

- AWS: an S3 bucket (and optionally, but highly recommended, a DynamoDB table for locking). Enabling versioning on the S3 bucket is also recommended.
- Azure: a Blob storage account.

You can use other backends (e.g. Google GCS or Hashicorp Consul) not specifically supported by the wrapper if you manage authentication yourself and omit the `conf/state.yml` file or make it empty:

```yaml
---
```
Example configuration with both AWS and Azure backends defined:

```yaml
---
aws:
  - name: "aws-demo"
    general:
      account: "xxxxxxxxxxx"
      region: eu-west-1
    credentials:
      profile: my-state-aws-profile

azure:
  - name: "azure-backend"
    general:
      subscription_id: "xxxxxxx"
      resource_group_name: "tfstates-xxxxx-rg"
      storage_account_name: "tfstatesxxxxx"
  - name: "azure-alternative"
    general:
      subscription_id: "xxxxxxx"
      resource_group_name: "tfstates-xxxxx-rg"
      storage_account_name: "tfstatesxxxxx"
  - name: "azure-ad-auth"
    general:
      subscription_id: "xxxxxxx"
      resource_group_name: "tfstates-xxxxx-rg"
      storage_account_name: "tfstatesxxxxx"
      azuread_auth: true
    backend_parameters:
      state_snaphot: "false"
```

Note: the first backend declared is the default one for stacks that do not select one (e.g. via `state_configuration_name`).
### How to migrate from one backend to another for state centralization

If, for example, you have both an AWS and an Azure state backend configured in your `conf/state.yml` file, you can migrate your stack state from one backend to another. Here is a quick howto:

1. Make sure your stack is clean:

```bash
$ cd account/path/env/your_stack
$ tfwrapper init
$ tfwrapper plan
```

2. Change the backend in the stack configuration YAML file:

```yaml
---
state_configuration_name: "azure-alternative"
```

3. Back in your stack directory, perform the change:

```bash
$ cd account/path/env/your_stack
$ rm -v state.tf
$ tfwrapper bootstrap
$ tfwrapper init
$ tfwrapper plan
```
## Stacks file structure

Terraform stacks are organized based on their:

- `account`: an account alias which may refer to provider accounts or subscriptions, e.g. `project-a-prod`, `customer-b-dev`.
- `environment`: `production`, `preproduction`, `dev`, etc. `global` is a special case eliminating the `region` part.
- `region`: `eu-west-1`, `westeurope`, etc.
- `stack`: defaults to `default`; e.g. `web`, `admin`, `tools`, etc.
The following file structure is then enforced:

```
# project root
├── account
│   └── environment
│       └── region
│           └── stack
└── account
    └── _global
        └── stack
```
A real-life example:

```
# project root
├── aws-account-1
│   ├── _global
│   │   └── default
│   │       └── main.tf
│   └── production
│       ├── eu-central-1
│       │   └── web
│       │       └── main.tf
│       └── eu-west-1
│           ├── default
│           │   └── main.tf
│           └── tools
│               └── main.tf
└── aws-account-2
    └── backup
        └── eu-west-1
            └── backup
                └── main.tf
```
## Usage

### Stack bootstrap

After creating a `conf/${account}_${environment}_${region}_${stack}.yml` stack configuration file, you can bootstrap it:

```bash
# bootstrap with the default template
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap

# bootstrap with a specific stack template
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap aws/foobar
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap mycustomer/dev/eu-west/run
```

In the special case of a global stack, the configuration file should instead be named `conf/${account}_global_${stack}.yml`.
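For instance, a sketch with hypothetical account and stack names, assuming the special `global` environment is selected with `-e global` and no `-r` argument:

```bash
# hypothetical global stack defined in conf/aws-account-1_global_default.yml
tfwrapper -a aws-account-1 -e global -s default bootstrap
```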
### Working on stacks

You can work on stacks from their directory or from the root of the project:

```bash
# from the root of the project
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} plan

# from the stack directory
cd ${account}/${environment}/${region}/${stack}
tfwrapper plan
```

You can also work on several stacks sequentially with the `foreach` subcommand from any directory under the root of the project. By default, `foreach` selects all stacks under the current directory, so if called from the root of the project without any filter, it will select all stacks and execute the specified command in them, one after another:

```bash
tfwrapper foreach -- tfwrapper init
```

Any combination of the `-a`, `-e`, `-r` and `-s` arguments can be used to select specific stacks, e.g. all stacks for an account across all environments but in a specific region:

```bash
tfwrapper -a ${account} -r ${region} foreach -- tfwrapper plan
```

The same can be achieved with:

```bash
cd ${account}
tfwrapper -r ${region} foreach -- tfwrapper plan
```

Complex commands can be executed in a sub-shell with the `-S`/`--shell` argument, e.g.:

```bash
cd ${account}/${environment}
tfwrapper foreach -S 'pwd && tfwrapper init >/dev/null 2>&1 && tfwrapper plan 2>/dev/null -- -no-color | grep "^Plan: "'
```
### Passing options

You can pass anything you want to `terraform` using `--`:

```bash
tfwrapper plan -- -target resource1 -target resource2
```
## Environment

tfwrapper sets the following environment variables.

### S3 state backend credentials

The default AWS credentials of the environment are set to point to the S3 state backend. Those credentials are acquired from the profile defined in `conf/state.yml`:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_SESSION_TOKEN`

### Azure Service Principal credentials

These AzureRM credentials are loaded only if you are using Service Principal mode. They are acquired from the profile defined in `~/.azurerm/config.yml`:

- `ARM_CLIENT_ID`
- `ARM_CLIENT_SECRET`
- `ARM_TENANT_ID`

### Azure authentication isolation

The `AZURE_CONFIG_DIR` environment variable is set to the local `.run/azure` directory if the global configuration value `use_local_azure_session_directory` is set to `true`, which is the default.

If you have multiple configurations in your stacks, a `<CONFIG_NAME>_AZURE_CONFIG_DIR` variable is also set to the local `.run/azure_<config_name>` directory for each configuration.
### GCP configuration

These GCP-related variables are available from the environment when using the example configuration:

- `TF_VAR_gcp_region`
- `TF_VAR_gcp_zone`
- `TF_VAR_gcp_project`

### GKE configurations

Each GKE instance has its own kubeconfig; the path to each configuration is available from the environment as:

- `TF_VAR_gke_kubeconfig_${gke_cluster_name}`

Each kubeconfig is automatically fetched by the wrapper (using gcloud) and stored inside the `.run` directory of your project. It is refreshed automatically at every run to make sure you point to the correct Kubernetes endpoint. You can disable this behaviour by setting `refresh_kubeconfig: never` in your cluster settings:

```yaml
---
gcp:
  general:
    mode: adc-user
    project: &gcp_project project-name
  gke:
    - name: kubernetes-1
      zone: europe-west1-c
      refresh_kubeconfig: never
```
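As an illustration, the kubeconfig path can be consumed from a Kubernetes provider. This is a minimal sketch for the `kubernetes-1` cluster above; the variable must be declared in the stack for the wrapper-provided `TF_VAR_` value to be picked up.

```hcl
# Declared so that TF_VAR_gke_kubeconfig_kubernetes-1 is picked up.
variable "gke_kubeconfig_kubernetes-1" {}

# Point the Kubernetes provider at the kubeconfig fetched by the wrapper.
provider "kubernetes" {
  config_path = var.gke_kubeconfig_kubernetes-1
}
```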
### Stack configurations and credentials

The `terraform['vars']` dictionary from the stack configuration is exposed as Terraform variables. The profile defined in the stack configuration is used to acquire credentials accessible from Terraform. Two providers are supported; the variables that get loaded depend on the provider in use.

- `TF_VAR_client_name` (if set in the stack's `.yml` configuration file)

AWS:

- `TF_VAR_aws_account`
- `TF_VAR_aws_region`
- `TF_VAR_aws_access_key`
- `TF_VAR_aws_secret_key`
- `TF_VAR_aws_token`

Azure:

- `TF_VAR_azurerm_region`
- `TF_VAR_azure_region`
- `TF_VAR_azure_subscription_id`
- `TF_VAR_azure_tenant_id`
- `TF_VAR_azure_state_access_key` (removed in `v11.0.0`)
### Stack path

The stack path is passed to Terraform. This is especially useful for resource naming and tagging:

- `TF_VAR_account`
- `TF_VAR_environment`
- `TF_VAR_region`
- `TF_VAR_stack`
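For example, these variables can drive a naming and tagging convention. This is a minimal sketch; the prefix format and the `logs` bucket are arbitrary illustrations.

```hcl
# Declared so that the wrapper-provided TF_VAR_ values are picked up.
variable "account" {}
variable "environment" {}
variable "region" {}
variable "stack" {}

locals {
  # e.g. "aws-account-1-production-default"
  name_prefix = "${var.account}-${var.environment}-${var.stack}"
}

resource "aws_s3_bucket" "logs" {
  bucket = "${local.name_prefix}-logs"

  tags = {
    environment = var.environment
    stack       = var.stack
  }
}
```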
## Development

### Tests

All new code contributions should come with unit and/or integration tests.

To run those tests locally, use tox:

```bash
poetry run tox -e py
```

Linters are also used to ensure code respects our standards. To run those linters locally:

```bash
poetry run tox -e lint
```
### Debug command-line completion

You can get verbose debugging information for `argcomplete` by defining the following environment variable:

```bash
export _ARC_DEBUG=1
```
### Python code formatting

Our code is formatted with black.

Make sure to format all your code contributions with `black ${filename}`.

Hint: enable auto-format on save with black in your favorite IDE.
### Checks

To run code and documentation style checks, run `tox -e lint`.

In addition to `black --check`, code is also checked with the other linters configured for the project.
### README TOC

This README's table of contents is formatted with md_toc.

Keep it up to date with:

```bash
md_toc --in-place github README.md
```
### Using OpenTofu development builds

To build and use development versions of OpenTofu, put them in a `~/.terraform.d/versions/X.Y/X.Y.Z-dev/` folder, e.g. for an OpenTofu `v1.6.0-dev` build on `linux_amd64`:
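A minimal sketch, assuming the `tofu` binary was built in `~/src/opentofu`; adjust the source path to your build location.

```bash
mkdir -p ~/.terraform.d/versions/1.6/1.6.0-dev
cp ~/src/opentofu/tofu ~/.terraform.d/versions/1.6/1.6.0-dev/tofu
```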
### git pre-commit hooks

Some git pre-commit hooks are configured in `.pre-commit-config.yaml` for use with the pre-commit tool. Using them helps to avoid pushing changes that will fail the CI.

They can be installed locally with:
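```bash
# standard pre-commit command: installs the hooks into .git/hooks
pre-commit install
```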
If updating the hooks configuration, run the checks against all files to make sure everything is fine:
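```bash
# standard pre-commit command: runs every hook on the whole repository
pre-commit run --all-files
```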
Note: the `pre-commit` tool itself can be installed with `pip` or `pipx`.
### Review and merge open Dependabot PRs

Use the `scripts/merge-dependabot-mrs.sh` script from the `master` branch to:

- list open Dependabot PRs that are mergeable,
- review, approve and merge them,
- pull changes from GitHub and push them to origin.

Just invoke the script without any argument:
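```bash
scripts/merge-dependabot-mrs.sh
```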
Check the help (assuming the conventional `-h` flag):
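```bash
# the -h flag is an assumption based on common CLI conventions
scripts/merge-dependabot-mrs.sh -h
```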
### Tagging and publishing new releases to PyPI

Use the `scripts/release.sh` script from the `master` branch to:

- bump the version with poetry,
- update `CHANGELOG.md`,
- commit these changes,
- tag with the last `CHANGELOG.md` item content as annotation,
- bump the version with poetry again to mark it for development,
- commit this change,
- push all commits and tags to all remote repositories.

This will trigger a GitHub Actions job to publish packages to PyPI.

To invoke the script, pass it the desired bump rule, e.g.:
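```bash
# "minor" is one of poetry's version bump rules (patch, minor, major, ...)
scripts/release.sh minor
```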
For more options, check the help (assuming the conventional `-h` flag):
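```bash
# the -h flag is an assumption based on common CLI conventions
scripts/release.sh -h
```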