Version: 5.17.1

Create an RKE2 Cluster

Structsure's AWS IaC uses Terragrunt to provision its infrastructure in an automated, repeatable fashion. Terragrunt is a wrapper around the popular Terraform tool that adds quality-of-life improvements for building infrastructure from multiple interrelated Terraform modules.

Requirements

The IaC should be run from a Linux-based EC2 instance or container with the following tools available in the PATH:

  • Git
  • Terraform
  • Terragrunt

These tools are provided as part of the Structsure Utility Image.
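
To confirm that the tools are available before proceeding, you can run a quick version check such as the following:

git --version
terraform -version
terragrunt --version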

Clone the Repository

Start by cloning the Structsure AWS IaC Git repository. Note that this repository currently uses Git submodules, so the basic git clone command must be modified to also clone the required submodules:

git clone --recursive git@gitlab.com:structsure/breakerbar/structsure-aws-iac.git

To update or re-initialize the submodules on an already-cloned repository, run this command:

git submodule update --init

note

Git submodules are referenced using relative paths in the AWS IaC repository, so if using an offline Git mirror, the rke2-aws-tf Git repository should be at the same folder level as the structsure-aws-iac repository. Refer to the .gitmodules file for the specific configuration of each submodule.

Additionally, when using a local Git mirror, Git may need to be configured to allow cloning submodules over the file protocol. This can be done by running:

git config --global protocol.file.allow always
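
For example, with a local mirror, the two repositories should sit side by side (the mirror path here is hypothetical):

ls /path/to/git-mirrors
# rke2-aws-tf  structsure-aws-iac
git clone --recursive /path/to/git-mirrors/structsure-aws-iac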

Configure Cluster Options

The behavior of Terraform can be modified by providing different variables to Terraform modules and inputs to Terragrunt. The AWS IaC is set up so that most of the required environment-specific configuration changes can be made in a single env.hcl file. Example configuration files for collaboration environments and deploy targets can each be found in the infra-iac/envs/ directory of the AWS IaC repository. When configuring a new cluster, create a custom env.hcl file by copying one of the example configurations and modifying the values therein. For more information on the differences between collaboration environments and deploy targets, see the Structsure Overview page.
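
For example, to start a new configuration from one of the provided examples (the example file name here is illustrative; check infra-iac/envs/ for the actual files):

cd REPO_PATH
cp infra-iac/envs/deploy-target.env.hcl infra-iac/envs/my-cluster.hcl
# Edit infra-iac/envs/my-cluster.hcl with values for your AWS account and VPC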

A custom env.hcl file can be provided to Terragrunt by setting the TERRAGRUNT_ENV_FILE environment variable. Note that if a relative path is used, it must be the relative path from each Terragrunt module, so for best results, an absolute path should be used. If the TERRAGRUNT_ENV_FILE environment variable is not set, then Terragrunt will search upward through the module's parent directories for a file called env-default.hcl and source it. In the IaC repository, this is a symlink located at infra-iac/env-default.hcl and may be re-pointed to the desired environment file.
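
Either approach can be set up from the repository root; for example, using the hypothetical my-cluster.hcl from above:

# Option 1: point Terragrunt at the file explicitly (absolute path recommended)
export TERRAGRUNT_ENV_FILE="$(pwd)/infra-iac/envs/my-cluster.hcl"

# Option 2: re-point the env-default.hcl symlink at the desired environment file
ln -sfn envs/my-cluster.hcl infra-iac/env-default.hcl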

The locals and inputs within the env.hcl file are sourced in each Terragrunt module's terragrunt.hcl file. When a variable under locals is also within inputs, it is passed on to all Terraform modules as an input variable (e.g., compatibility_mode = false set within the locals section of the env.hcl will be passed to all Terraform modules when inputs = {compatibility_mode = "${local.compatibility_mode}"} is specified).

When creating a new cluster, the locals and remote_state sections in the env.hcl file should be reviewed and updated using values appropriate to your AWS account and VPC:

locals {
  k8s_distro = "rke2-cluster"

  # Specifies the type of cluster (deployTarget or collab)
  cluster_type = "deployTarget"

  # Required AWS configuration
  aws_region = "us-east-2"
  vpc_id     = "vpc-123456789abcdef01"
  vpc_subnet_ids = [
    "subnet-123456789abcdef01",
    "subnet-123456789abcdef02",
    "subnet-123456789abcdef03"
  ]
  allowed_security_group_ids = [
    "sg-123456789abcdef01"
  ]

  # If true, this flag disables some AWS features which are not available in all AWS partitions/regions.
  compatibility_mode = false

  # Optionally toggle add-on modules on/off
  # cluster_type = "collab" automatically toggles all of these on
  # modules = {
  #   confluence = false
  #   console    = false
  #   gitlab     = false
  #   jira       = false
  #   keycloak   = true
  #   loki       = true
  #   mattermost = true
  #   sonarqube  = true
  #   velero     = true
  # }

  # Optionally provide add-on-specific configs directly
  # See each add-on's documentation for available values
  # confluence_inputs = {}
  # console_inputs    = {}
  # gitlab_inputs     = {}
  # jira_inputs       = {}
  # keycloak_inputs   = {}
  # loki_inputs       = {}
  # mattermost_inputs = {}
  # sonarqube_inputs  = {}
  # velero_inputs     = {}

  # Cluster-specific configuration
  # See the rke2-cluster module documentation for available values
  cluster_inputs = {
    cluster_name = "env-collab"
    cp_ami       = "ami-123456789abcdef01"

    # If you want to provide a role, define cluster_cp_iam_role and cluster_agent_iam_role
    # for the control plane and agent node roles, respectively.
    # If left blank, Structsure will create roles to be used.
    cluster_cp_iam_role    = "your-role-here"
    cluster_agent_iam_role = "your-role-here"
  }
  ...
}

remote_state {
  config = {
    bucket                    = "structsure-terraform-state"
    dynamodb_table            = "structsure-terraform-lock"
    accesslogging_bucket_name = "structsure-logging"
  }
  ...
}

inputs = {
  aws_region         = "${local.aws_region}"
  compatibility_mode = "${local.compatibility_mode}"
  vpc_id             = "${local.vpc_id}"
}

For more information on recommended values for the env.hcl file, particularly values suitable for a high-availability production cluster, please refer to the Production-Grade Deployment Requirements documentation.

Source Terraform Providers and Modules

note

This section is only applicable for offline deployments. In connected deployments, Terraform is able to download the providers and modules from the Internet.

Offline Terraform bundles for the AWS IaC repository are generated for each release and stored in the project's package registry. Download the appropriate bundle for your version of the AWS IaC repository, then extract the bundle to the top-level folder of your cloned AWS IaC repository using a command like tar xzvf BUNDLE_PATH.

The extracted bundle should create a hidden directory (viewable with ls -a) in the cloned repo called '.terragrunt-cache'. This directory should in turn contain sub-folders called 'modules' and 'providers'. The downloaded modules should automatically be used by Terragrunt if that folder exists (see the link_offline_bundle hook in infra-iac/common.hcl for implementation details), but Terragrunt will need to be directed to use the provider cache by setting the TF_PLUGIN_CACHE_DIR environment variable to the absolute path of the .terragrunt-cache/providers folder.
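
Putting those steps together, an offline bundle might be installed as follows (BUNDLE_PATH is a placeholder for the downloaded archive):

cd REPO_PATH
tar xzvf BUNDLE_PATH
ls -a .terragrunt-cache
# modules  providers
export TF_PLUGIN_CACHE_DIR="$(pwd)/.terragrunt-cache/providers"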

Configure Terragrunt S3 Backend

Terragrunt will typically be configured to use an S3 backend to store its state information. Generally, this will be configured in the env.hcl file for your environment using a block such as the following:

remote_state {
  backend = "s3"
  config = {
    bucket         = "BUCKET_NAME"
    dynamodb_table = "DYNAMODB_TABLE"
    key            = "${basename(get_terragrunt_dir())}.tfstate"
  }
}

Each Terragrunt module sources this remote_state configuration from the env.hcl file so that all modules share a common configuration; however, because the key name is generated dynamically from each module's directory name, each Terragrunt module that is run maintains its own independent state.
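
As an illustration, the per-module state objects can be listed directly from the bucket (bucket and module names here are placeholders; non-default workspaces are stored under the env:/ prefix):

aws s3 ls s3://BUCKET_NAME/
# rke2-cluster.tfstate
# gitlab.tfstate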

Terragrunt is able to configure the backend S3 bucket and DynamoDB table for you automatically if they do not yet exist. As such, you may see the following message when you run terragrunt init or terragrunt run-all init:

Remote state S3 bucket BUCKET_NAME does not exist or you don't have permissions
to access it. Would you like Terragrunt to create it? (y/n)

This may be expected if this is the first time that Terragrunt has been run with the configured backend settings; if so, select yes to have Terragrunt automatically create and configure the S3 bucket and DynamoDB table for the backend. However, if this is not the first time running Terragrunt with a particular backend configuration, most likely you do not have permissions to access the configured S3 bucket. Try running aws s3 ls s3://BUCKET_NAME from your terminal and compare its results to the AWS Management Console. If the bucket exists within the AWS Management Console, verify your instance role permissions and/or that your temporary credentials have not expired.
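
A minimal credential check along those lines (the bucket name is a placeholder):

# Confirm which identity your credentials resolve to
aws sts get-caller-identity
# Then confirm that identity can see the state bucket
aws s3 ls s3://BUCKET_NAME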

Execute the Terragrunt Modules

Depending upon whether your cluster is destined to become a deploy target or a collaboration environment, the procedure for executing the Terragrunt modules differs somewhat; the collaboration environment add-ons require some additional infrastructure (RDS databases, S3 buckets, etc.), which is contained in individual Terragrunt modules so that each can be deployed independently, a la carte. For a deploy target, the only IaC component that needs to be deployed is the RKE2 cluster itself. The following example shows the steps required to build the infrastructure for a deploy target (a collaboration-environment sketch follows the notes below):

cd REPO_PATH
export TF_PLUGIN_CACHE_DIR="$(pwd)/.terragrunt-cache/providers/"
export TERRAGRUNT_ENV_FILE="$(pwd)/infra-iac/envs/your-custom-env.hcl"
export WORKSPACE_NAME="your-workspace-name"
cd infra-iac/rke2-cluster
terragrunt init
terragrunt workspace select -or-create $WORKSPACE_NAME
terragrunt plan
terragrunt apply

Note the following:

  • As mentioned above, the TF_PLUGIN_CACHE_DIR must be configured with the absolute path of the Terraform provider bundle if running in a disconnected environment. This is not required in a connected environment, but it may speed up the init process if initializing multiple modules (as for a collaboration environment), due to the shared cache.
  • If using a custom Terragrunt configuration file, the absolute path should be specified using the TERRAGRUNT_ENV_FILE environment variable. If not specified, Terragrunt will look for a file called 'env-default.hcl' in the parent directories.
  • The workspace name is arbitrary, although in development, it will generally correlate to a branch name. Multiple unique clusters can be provisioned using the same Terraform state bucket by using distinct workspaces for each; if doing this, make sure that each workspace name allows you to easily identify its cluster.
  • It is not strictly required to plan your changes before running terragrunt apply, but you should generally do so in order to preview the changes to be made before proceeding. This is especially important when upgrading or modifying already-deployed infrastructure.
  • These steps align with the commands run automatically during the AWS IaC CI Pipelines, so they may serve as a useful reference.
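
For a collaboration environment, the add-on modules can be deployed together using Terragrunt's run-all command from the infra-iac directory; a minimal sketch, assuming the same environment variables as above have been exported:

cd REPO_PATH/infra-iac
# TF_WORKSPACE selects the workspace for every module without a per-module
# `terragrunt workspace select`; adjust if your workflow differs.
export TF_WORKSPACE="$WORKSPACE_NAME"
terragrunt run-all init
terragrunt run-all plan
terragrunt run-all apply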

Outputs

Terragrunt will store several useful values as outputs; these can be viewed using the terragrunt output command. For example, to configure your local terminal to use the cluster's kubeconfig file, you may run the following:

cd REPO_PATH/infra-iac/rke2-cluster
KUBECONFIG_URL=$(terragrunt output -raw kubeconfig_url)
aws s3 cp $KUBECONFIG_URL ~/.kube/config

This will allow you to run kubectl commands in the context of the cluster.
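
For example, to verify access (kubectl must be installed and able to reach the cluster's API endpoint):

kubectl get nodes
kubectl get pods --all-namespaces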

If multiple Terragrunt modules have been deployed, as for a collaboration environment, the outputs for all of them can be combined using a command such as the following:

cd REPO_PATH/infra-iac
terragrunt run-all output -json | jq -n 'reduce inputs as $i ({}; . * $i)' > outputs.json

Additionally, Terragrunt creates files (by default in the infra-iac/outputs directory) matching several of the Terraform outputs. These files will be consumed during the deployment of the Structsure package.
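
To review what was generated, you can list the output files and, if you created the combined outputs.json above, inspect its top-level keys with jq:

ls REPO_PATH/infra-iac/outputs
jq 'keys' outputs.json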