Version: 5.16.0

Create an EKS Cluster

Structsure's AWS IaC uses Terragrunt to provision its infrastructure in an automated, repeatable fashion. Terragrunt is a wrapper around the popular Terraform tool that adds quality-of-life improvements for building infrastructure from multiple interrelated Terraform modules.

Requirements

The IaC should be deployed from a Linux-based EC2 instance or container with the following tools present in the PATH:

  • Git
  • Terraform
  • Terragrunt

These tools are provided as part of the Structsure Utility Image.
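
As a quick check, verify that each tool resolves from the PATH, for example:

git --version
terraform version
terragrunt --version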

Clone the Repository

Start by cloning the Structsure AWS IaC Git repository. Note that this repository currently makes use of Git submodules, so the basic git clone command must be modified to also clone the required submodules:

git clone --recursive git@gitlab.com:structsure/breakerbar/structsure-aws-iac.git

To update or re-initialize the submodules on an already-cloned repository, run this command:

git submodule update --init
note

Git submodules are referenced using relative paths in the AWS IaC repository, so if using an offline Git mirror, the rke2-aws-tf Git repository should be at the same folder level as the structsure-aws-iac repository. Refer to the .gitmodules file for the specific configuration of each submodule.
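
To see exactly which paths and relative URLs each submodule expects, the submodule configuration can be listed from the repository root, for example:

git config --file .gitmodules --list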

Additionally, when using a local Git mirror, Git may need to be configured to allow cloning submodules from file. This can be done by running:

git config --global protocol.file.allow always

Configure Cluster Options

The behavior of Terraform can be modified by providing different variables to Terraform modules and inputs to Terragrunt. The AWS IaC is set up so that most of the required environment-specific configuration changes can be made in a single env.hcl file. Example configuration files for collaboration environments and deploy targets can each be found in the infra-iac/envs/ directory of the AWS IaC repository. For more information on the differences between collaboration environments and deploy targets, see the Structsure Overview page. When configuring a new cluster, create a custom env.hcl file by copying one of the example configurations and modifying the values therein.
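
For example, the copy might look like the following, where EXAMPLE_ENV_FILE stands in for whichever example configuration in infra-iac/envs/ matches your use case (the destination file name is arbitrary):

cp infra-iac/envs/EXAMPLE_ENV_FILE infra-iac/envs/your-custom-env.hcl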

A custom env.hcl file can be provided to Terragrunt by setting the TERRAGRUNT_ENV_FILE environment variable. Note that if a relative path is used, it must be relative to each Terragrunt module, so for best results, use an absolute path. If the TERRAGRUNT_ENV_FILE environment variable is not set, then Terragrunt will look recursively through the module's parent directories for a file called env-default.hcl and source it. In the IaC repository, there is a symlink located at infra-iac/env-default.hcl, which may be re-pointed to the desired environment file.
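
Either approach can be scripted. A short sketch, run from the repository root and assuming your environment file is infra-iac/envs/your-custom-env.hcl:

# Option 1: point Terragrunt at the file explicitly (absolute path recommended)
export TERRAGRUNT_ENV_FILE="$(pwd)/infra-iac/envs/your-custom-env.hcl"

# Option 2: re-point the env-default.hcl symlink instead
ln -sf envs/your-custom-env.hcl infra-iac/env-default.hcl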

The locals and inputs within the env.hcl file are sourced in each Terragrunt module's terragrunt.hcl file. When a variable under locals is also listed within inputs, it is passed on to all Terraform modules as an input variable (e.g., compatibility_mode = false set within the locals section of the env.hcl file will be passed to all Terraform modules when inputs = {compatibility_mode = "${local.compatibility_mode}"} is specified).
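
To confirm which inputs a module will actually receive after the env.hcl values are merged in, recent Terragrunt versions can render the fully resolved configuration. For example, from within a module directory:

terragrunt render-json

Inspect the inputs key in the resulting JSON file (terragrunt_rendered.json by default).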

When creating a new cluster, the locals and remote_state sections in the env.hcl file should be reviewed and updated using values appropriate to your AWS account and VPC:

locals {
  k8s_distro = "eks-cluster"

  # Specifies the type of cluster (deployTarget or collab)
  cluster_type = "deployTarget"

  # Required AWS Configuration
  aws_region = "us-east-2"
  vpc_id = "vpc-123456789abcdef01"
  vpc_subnet_ids = [
    "subnet-123456789abcdef01",
    "subnet-123456789abcdef02",
    "subnet-123456789abcdef03"
  ]
  allowed_security_group_ids = [
    "sg-123456789abcdef01"
  ]

  # If true, this flag disables some AWS features which are not available in all AWS partitions/regions.
  compatibility_mode = false

  # Optionally toggle add-on modules on/off
  # cluster_type = "collab" automatically toggles all of these on
  # modules = {
  #   confluence = false
  #   console = false
  #   gitlab = false
  #   jira = false
  #   keycloak = true
  #   loki = true
  #   mattermost = true
  #   sonarqube = true
  #   velero = true
  # }

  # Optionally provide add-on-specific configs directly
  # See each add-on's documentation for available values
  # confluence_inputs = {}
  # console_inputs = {}
  # gitlab_inputs = {}
  # jira_inputs = {}
  # keycloak_inputs = {}
  # loki_inputs = {}
  # mattermost_inputs = {}
  # sonarqube_inputs = {}
  # velero_inputs = {}

  # Cluster-specific configuration
  # See the eks module documentation for available values
  cluster_inputs = {
    cluster_name = "dt-cluster"

    # Configure Auth Roles
    aws_auth_roles = [
      {
        rolearn = "arn:aws-us-gov:iam::123456789abc:role/admin-role",
        username = "AWSReservedSSO_AdministratorAccess",
        groups = [
          "system:masters",
        ]
      },
    ]

    # If you want to provide a cluster IAM role, define `cluster_iam_role` with the existing IAM role name.
    # If it is left empty, Structsure will create the role to be used.
    cluster_iam_role = ""

    # Install Custom Root CAs
    root_cas = [
      {
        name = "Amazon Root CA 1"
cert = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVrakNDQTNxZ0F3SUJBZ0lUQm4rVVNpb256ZlA2d3E0ckFma0k3cm5FeGpBTkJna3Foa2lHOXcwQkFRc0YKQURDQm1ERUxNQWtHQTFVRUJoTUNWVk14RURBT0JnTlZCQWdUQjBGeWFYcHZibUV4RXpBUkJnTlZCQWNUQ2xOagpiM1IwYzJSaGJHVXhKVEFqQmdOVkJBb1RIRk4wWVhKbWFXVnNaQ0JVWldOb2JtOXNiMmRwWlhNc0lFbHVZeTR4Ck96QTVCZ05WQkFNVE1sTjBZWEptYVdWc1pDQlRaWEoyYVdObGN5QlNiMjkwSUVObGNuUnBabWxqWVhSbElFRjEKZEdodmNtbDBlU0F0SUVjeU1CNFhEVEUxTURVeU5URXlNREF3TUZvWERUTTNNVEl6TVRBeE1EQXdNRm93T1RFTApNQWtHQTFVRUJoTUNWVk14RHpBTkJnTlZCQW9UQmtGdFlYcHZiakVaTUJjR0ExVUVBeE1RUVcxaGVtOXVJRkp2CmIzUWdRMEVnTVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTEo0Z0hIS2VOWGoKY2E5SGdGQjBmVzdZMTRoMjlKbG85MWdoWVBsMGhBRXZyQUl0aHRPZ1EzcE9zcVRRTnJvQnZvM2JTTWdIRnpaTQo5TzZJSThjKzZ6ZjF0Um40U1dpdzN0ZTVkamdkWVo2ay9vSTJwZVZLVnVSRjRmbjl0QmI2ZE5xY216VTVML3F3CklGQUdiSHJRZ0xLbSthL3NSeG1QVURnSDNLS0hPVmo0dXRXcCtVaG5NSmJ1bEhoZWI0bWpVY0F3aG1haFJXYTYKVk91anc1SDVTTnovMGVnd0xYMHRkSEExMTRnazk1N0VXVzY3YzRjWDhqSkdLTGhEK3JjZHFzcTA4cDhrRGkxTAo5M0ZjWG1uLzZwVUN5emlLcmxBNGI5djdMV0lieGNjZVZPRjM0R2ZJRDV5SEk5WS9RQ0IvSUlERWdFdytPeVFtCmpnU3ViSnJJcWcwQ0F3RUFBYU9DQVRFd2dnRXRNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdEZ1lEVlIwUEFRSC8KQkFRREFnR0dNQjBHQTFVZERnUVdCQlNFR015Rk5PeThESlNVTGdoWm5NZXlFRTRLQ0RBZkJnTlZIU01FR0RBVwpnQlNjWHdEZnFnSFhNQ3M0aUtLNGJVcWM4aEdSZ3pCNEJnZ3JCZ0VGQlFjQkFRUnNNR293TGdZSUt3WUJCUVVICk1BR0dJbWgwZEhBNkx5OXZZM053TG5KdmIzUm5NaTVoYldGNmIyNTBjblZ6ZEM1amIyMHdPQVlJS3dZQkJRVUgKTUFLR0xHaDBkSEE2THk5amNuUXVjbTl2ZEdjeUxtRnRZWHB2Ym5SeWRYTjBMbU52YlM5eWIyOTBaekl1WTJWeQpNRDBHQTFVZEh3UTJNRFF3TXFBd29DNkdMR2gwZEhBNkx5OWpjbXd1Y205dmRHY3lMbUZ0WVhwdmJuUnlkWE4wCkxtTnZiUzl5YjI5MFp6SXVZM0pzTUJFR0ExVWRJQVFLTUFnd0JnWUVWUjBnQURBTkJna3Foa2lHOXcwQkFRc0YKQUFPQ0FRRUFZamRDWEx3UXRUNkxMT2tNbTJ4RjRnY0Fldm5GV0F1NUNJdys3Yk1sUExWdlVPVE5OV3Fua3pTVwpNaUdwU0VTcm5PMDl0S3B6YmVSL0ZvQ0piTThvQXhpRFIzbWpFSDR3VzZ3N3NHRGdkOVFJcHVFZGZGN0F1L21hCmV5S2Rwd0FKZnF4R0Y0UGNuQ1pYbVRBNVlwYVA3ZHJlcXNYTUd6N0tRMmhzVnhhODFRNGdMdjcvd21wZExxQksKYlJSWWg1VG1PVEZmZkhQTGtJaHFoQkdXSjZidDJZRkdwbjZqY2dBS1VqNkRpQWRqZDRscEZ3ODVoZEtyQ0VWTgowRkU2L1YxZE4yUk1makN5VlNSQ25UYXdYWndYZ1dIeHl2a1FBaVNyNncxMGtZMTdSU2xRT1lpeXBvazFKUjRVCmFrY2pNUzljbXZxdG1nNWlVYVFxcWNUNU5KMGhHQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
      },
      {
        name = "Amazon RSA 2048 M01"
cert = "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lUQjNNU09BdWRab2lqT3g3WnY1ek5wbzRPRHpBTkJna3Foa2lHOXcwQkFRc0YKQURBNU1Rc3dDUVlEVlFRR0V3SlZVekVQTUEwR0ExVUVDaE1HUVcxaGVtOXVNUmt3RndZRFZRUURFeEJCYldGNgpiMjRnVW05dmRDQkRRU0F4TUI0WERUSXlNRGd5TXpJeU1qRXlPRm9YRFRNd01EZ3lNekl5TWpFeU9Gb3dQREVMCk1Ba0dBMVVFQmhNQ1ZWTXhEekFOQmdOVkJBb1RCa0Z0WVhwdmJqRWNNQm9HQTFVRUF4TVRRVzFoZW05dUlGSlQKUVNBeU1EUTRJRTB3TVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT3R4TEtuTApINGdva2pJd3I0cFhEM2kzTnlXVlZZZXNaMXlYMHlMSTJxSVVaMnQ4OEdmYTRnTXFzMVlTWGNhMVIvbG5DS2VUCmVwV1NHQSswK2ZrUU5wcC9MNEMyVDdvVFRzZGRVeDdnM1pZekJ5RFRscndTNUhSUVFxRUZFM08xVDV0RUpQNHQKZisyOElvWHNOaUV6bDNVR3ppY1lndHpqMmNXQ0I0MWVKZ0VtSm1jZjJUOFR6eks2YTYxNFpQeXEvdzRDUEFmZgpuQVY0Y296OTZuVzNBeWlFMnVodUI0elFVSVh2Z1ZTeWNXN3NiV0x2ajVURFh1bkVwTkNSd0M0a2taaks3cm9sCmp0VDJjYmI3VzJzNEJrZzNSNDJHM1BMcUJ2dDJOMzJlLzBKT1RWaUNrOC9pY2NKNHNYcXJTMXVVTjRpQjVObXYKSks3NGNzVmwrMHUwVWVjQ0F3RUFBYU9DQVZvd2dnRldNQklHQTFVZEV3RUIvd1FJTUFZQkFmOENBUUF3RGdZRApWUjBQQVFIL0JBUURBZ0dHTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3JCZ0VGQlFjREFqQWRCZ05WCkhRNEVGZ1FVZ2JnT1k0cUpFaGpsK2pzN1VKV2Y1dVdRRTRVd0h3WURWUjBqQkJnd0ZvQVVoQmpNaFRUc3ZBeVUKbEM0SVdaekhzaEJPQ2dnd2V3WUlLd1lCQlFVSEFRRUViekJ0TUM4R0NDc0dBUVVGQnpBQmhpTm9kSFJ3T2k4dgpiMk56Y0M1eWIyOTBZMkV4TG1GdFlYcHZiblJ5ZFhOMExtTnZiVEE2QmdnckJnRUZCUWN3QW9ZdWFIUjBjRG92CkwyTnlkQzV5YjI5MFkyRXhMbUZ0WVhwdmJuUnlkWE4wTG1OdmJTOXliMjkwWTJFeExtTmxjakEvQmdOVkhSOEUKT0RBMk1EU2dNcUF3aGk1b2RIUndPaTh2WTNKc0xuSnZiM1JqWVRFdVlXMWhlbTl1ZEhKMWMzUXVZMjl0TDNKdgpiM1JqWVRFdVkzSnNNQk1HQTFVZElBUU1NQW93Q0FZR1o0RU1BUUlCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCCkFRQ3RBTjRDQlNNdUJqSml0R3V4bEJia0VVRGVLL3Bad1RYdjRLcVBLMEc1MGZPSE9RQWQ4ajIxcDBjTUJnYkcKa2ZNSFZ3TFU3YjBYd1pDYXYwaDFvZ2RQTU4xS2FrSzFEVDBWd0EvK2hGdkdQSm5NVjFLeDJHNFMxWmFTazB1VQo1UWZvaVlJSWFubzAxSjVrNFQySGFwS1FtbU9oUy9pUHR1bzAwd1crSU1MZUJ1S01uM09MbjAwNWhjck9HVGFkCmhjbWV5ZmhRUDdaK2lLSHZ5b1FHaTFDMENseW1IRVR4L2NoaFFHRHlZU1dxQi9USHduTjE1QXdMUW8wRTVWOUUKU0psYmU0bUJscWVJblVzTll1Z0V4TmYrdE9peWJjcnN3Qnk4T0ZzZDM0WE9XM3JqU1V0c3VhZmQ5QVd5U2EzaAp4UlJyd3N6cnpYL1dXR202d3lCK2Y3QzQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
      }
    ]
  }
}

remote_state {
  config = {
    bucket = "structsure-terraform-state"
    dynamodb_table = "structsure-terraform-lock"
    accesslogging_bucket_name = "structsure-logging"
  }
  ...
}

inputs = {
  aws_region = "${local.aws_region}"
  compatibility_mode = "${local.compatibility_mode}"
  vpc_id = "${local.vpc_id}"
}

For more information on recommended values for the env.hcl file, particularly values suitable for a high-availability production cluster, refer to the Production-Grade Deployment Requirements documentation.

Source Terraform Providers and Modules

note

This section is only applicable for offline deployments. In connected deployments, Terraform is able to download the providers and modules from the Internet.

Offline Terraform bundles for the AWS IaC repository are generated for each release and stored in the project's package registry. Download the appropriate bundle for your version of the AWS IaC repository, then extract the bundle into the top-level folder of your cloned AWS IaC repository using a command such as tar xzvf BUNDLE_PATH.

Extracting the bundle should create a hidden directory called '.terragrunt-cache' (viewable with ls -a) in the cloned repo, which in turn should contain sub-folders called 'modules' and 'providers'. The downloaded modules should automatically be used by Terragrunt if that folder exists (see the link_offline_bundle hook in infra-iac/common.hcl for implementation details), but Terragrunt will need to be directed to use the provider cache by setting the TF_PLUGIN_CACHE_DIR environment variable to the absolute path of the .terragrunt-cache/providers folder.
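
Putting this together, a typical offline setup might look like the following, where BUNDLE_PATH is the path to the downloaded bundle:

cd REPO_PATH
tar xzvf BUNDLE_PATH
ls -a .terragrunt-cache
export TF_PLUGIN_CACHE_DIR="$(pwd)/.terragrunt-cache/providers"

The ls command should show the modules and providers sub-folders if the bundle was extracted in the correct location.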

Configure Terragrunt S3 Backend

Terragrunt will typically be configured to use an S3 backend to store its state information. Generally, this is configured in the env.hcl file for your environment using a block such as the following:

remote_state {
  backend = "s3"
  config = {
    bucket = "BUCKET_NAME"
    dynamodb_table = "DYNAMODB_TABLE"
    key = "${basename(get_terragrunt_dir())}.tfstate"
  }
}

Each Terragrunt module sources this remote_state configuration from the env.hcl file so that they all share a common backend configuration; however, because the key is generated dynamically from the module's directory name, each module maintains its own independent state.
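
For example, with the key expression above, the infra-iac/eks-cluster module stores its state under an object named after its directory:

s3://BUCKET_NAME/eks-cluster.tfstate

Other modules follow the same pattern, so their state objects never collide.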

Terragrunt is able to configure the backend S3 bucket and DynamoDB table for you automatically if they do not yet exist, and you may see the following message when you run terragrunt init or terragrunt run-all init:

Remote state S3 bucket BUCKET_NAME does not exist or you don't have permissions
to access it. Would you like Terragrunt to create it? (y/n)

This is expected if it is the first time Terragrunt has been run with the configured backend settings; if so, answer yes to have Terragrunt automatically create and configure the S3 bucket and DynamoDB table for the backend. However, if this is not the first run against a particular backend configuration, you most likely do not have permissions to access the configured S3 bucket. Try running aws s3 ls s3://BUCKET_NAME from your terminal and compare its results with the AWS Management Console. If the bucket exists within the AWS Management Console, verify your instance role permissions and/or that your temporary credentials have not expired.
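
The following checks can help narrow down the cause (BUCKET_NAME and DYNAMODB_TABLE are the values from your remote_state configuration):

aws s3 ls s3://BUCKET_NAME
aws sts get-caller-identity
aws dynamodb describe-table --table-name DYNAMODB_TABLE

The first command shows whether your current credentials can see the bucket, the second shows which role or identity those credentials resolve to, and the third confirms that the lock table exists.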

Execute Terragrunt

Depending on whether your cluster is destined to become a deploy target or a collaboration environment, the procedure for executing Terragrunt differs somewhat: the collaboration environment add-ons require additional infrastructure (RDS databases, S3 buckets, etc.), which is contained in individual Terragrunt modules so that each piece can be deployed independently, a la carte. For a deploy target, the only IaC component that needs to be deployed is the EKS cluster itself. The following example shows the steps required to build the infrastructure for a deploy target (a run-all sketch for collaboration environments follows the notes below):

cd REPO_PATH
export TF_PLUGIN_CACHE_DIR="$(pwd)/.terragrunt-cache/providers/"
export TERRAGRUNT_ENV_FILE="$(pwd)/infra-iac/envs/your-custom-env.hcl"
export WORKSPACE_NAME="your-workspace-name"
cd infra-iac/eks-cluster
terragrunt init
terragrunt workspace select -or-create $WORKSPACE_NAME
terragrunt plan
terragrunt apply

Note the following:

  • As mentioned above, the TF_PLUGIN_CACHE_DIR must be configured with the absolute path of the Terraform provider bundle if running in a disconnected environment. This is not required in a connected environment, but it may speed up the init process if initializing multiple modules (as for a collaboration environment), due to the shared cache.
  • If using a custom Terragrunt configuration file, the absolute path should be specified using the TERRAGRUNT_ENV_FILE environment variable. If not specified, Terragrunt will look for a file called 'env-default.hcl' in the parent directories.
  • The workspace name is arbitrary, although in development, it will generally correlate to a branch name. Multiple unique clusters can be provisioned using the same Terraform state bucket by using distinct workspaces for each; if doing this, make sure that the workspace allows you to easily identify each cluster.
  • It is not strictly required to plan your changes before running terragrunt apply, but you should generally do so in order to preview the changes to be made before proceeding. This is especially important when upgrading or modifying already-deployed infrastructure.
  • These steps align with the commands run automatically during the AWS IaC CI Pipelines, so they may serve as a useful reference.
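
For a collaboration environment, where the add-on infrastructure modules also need to be deployed, the same flow can be run across every enabled module at once with Terragrunt's run-all commands. A minimal sketch, assuming the desired add-ons are enabled in your env.hcl and the environment variables from the example above are still set:

cd REPO_PATH/infra-iac
terragrunt run-all init
terragrunt run-all plan
terragrunt run-all apply

If non-default workspaces are used, select the workspace in each module first, as shown in the deploy-target example.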

Outputs

Terragrunt will store several useful values as outputs; these can be viewed using the terragrunt output command. For example, to obtain the generated EKS cluster name and use it to configure your workstation's kubeconfig for local access, you may run the following:

cd REPO_PATH/infra-iac/
CLUSTER_NAME=$(terragrunt run-all output -json | jq -nr 'reduce inputs as $i ({}; . * $i) | .cluster_name.value')
aws eks update-kubeconfig --name ${CLUSTER_NAME}

This will allow you to run kubectl commands in the context of the cluster.
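
For example, to verify the new context:

kubectl get nodes
kubectl get pods -A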

If multiple Terragrunt modules have been deployed, as for a collaboration environment, the outputs for all of them can be combined using a command such as the following:

cd REPO_PATH/infra-iac
terragrunt run-all output -json | jq -n 'reduce inputs as $i ({}; . * $i)' > outputs.json

Additionally, Terragrunt will create files (by default in the infra-iac/outputs directory) matching several of the Terraform outputs. These files are consumed during the deployment of the Structsure package.