Version: 5.14.0

Structsure Edge User Manual

This document offers guidance on how to maintain and operate Structsure Edge on on-premise Amazon Web Services (AWS) Snowball Edge devices.

The Structsure Edge Deploy Target is a Kubernetes cluster that is used to deploy and run customer applications in an on-premise denied, disrupted, intermittent, and limited (DDIL) production environment. This cluster hosts the applications used by end users, so Production Deployment Targets must be highly available and performant.

System Overview

Cluster Deployment Details

A default Structsure Edge cluster of five Snowballs consists of three control plane nodes, ten agent nodes, and one utility node that facilitates operational tasks, such as managing the lifecycle of the cluster and its workloads; smaller configurations may run as few as four agent nodes.

Cluster Nodes Layout

The following is an example of a default cluster layout (this will change based on your specific configuration and number of Snowballs).

| Snowball   | Control Plane Node | Agent Node | Utility Node |
| ---------- | ------------------ | ---------- | ------------ |
| Snowball 0 | Control Plane 0    | Agent 0-1  | N/A          |
| Snowball 1 | Control Plane 1    | Agent 2-3  | N/A          |
| Snowball 2 | Control Plane 2    | Agent 4-5  | N/A          |
| Snowball 3 | N/A                | Agent 6-7  | Utility Node |
| Snowball 4 | N/A                | Agent 8-9  | N/A          |

Storage

The Amazon Elastic Block Store (EBS) Application Programming Interface (API) on Amazon Snowball Edge devices is limited: there is no option to replicate Amazon EBS data between devices, and volumes cannot be resized after they are created. Each node is provisioned with a 120GB root volume by default.

caution

The EBS API on the Snowball cannot resize volumes, and many AWS services on the Snowball are rudimentary compared to their AWS cloud counterparts.

danger

If these volumes fill up for any reason, the node will either need to be recreated with a larger data disk or have an additional disk attached, partitioned, formatted, and mounted.
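
Disk usage should be monitored so this condition is caught early. A minimal check from any node (the paths below are typical defaults, not specific to this deployment):

# check root volume usage
df -h /
# identify the largest directories if the volume is nearly full
sudo du -xh / --max-depth=2 2>/dev/null | sort -rh | head -20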

To support replicated block storage and read-write-many (RWX) storage, Rook-Ceph is deployed. An additional block device mapping is attached to each agent node, allocating by default 3.5TB of storage space to Rook-Ceph. This disk is not partitioned, formatted, or mounted into the host operating system. Rook-Ceph is configured to use this disk as raw block storage and performs all of the necessary operations to prepare the disk for use by Kubernetes workloads. The result is approximately 50TB of replicated storage that can be provisioned to applications. If required, this could be increased by mounting additional disks to the agent nodes and reconfiguring the Rook-Ceph cluster.
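
As a sketch, a workload can request replicated RWX storage with a PersistentVolumeClaim similar to the following; the storage class name is an assumption based on common Rook-Ceph defaults, so confirm the actual name with kubectl get storageclass.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteMany               # RWX, backed by CephFS
  resources:
    requests:
      storage: 10Gi
  storageClassName: ceph-filesystem   # assumed default name; verify on the cluster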

Connecting to Nodes via SSH

The node OS can be accessed via SSH. Each Snowball device has been provisioned with a unique key pair that must be used to authenticate to instances; if the wrong private key is used, authentication will fail. These keys are sensitive and should be stored and handled carefully. The default username for Rocky Linux is rocky.

ssh -i /path/to/structsure-kp-0.pem rocky@control-plane-0.example.com

Quorum

Since this type of Structsure Edge deployment consists of three control plane nodes, it can withstand the loss of a single control plane node at any given time without any loss of functionality. Care must be taken to ensure a majority of the control plane nodes remain operational at all times to prevent loss of quorum. More information on this topic, and on Etcd in general, can be found in the Etcd Documentation. If a Snowball device must be disconnected from the network or powered off, care must be taken to prevent quorum loss.
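
Control plane health can be spot-checked before any maintenance. The Etcd pod label below is an assumption based on upstream RKE2 defaults:

# confirm all three control plane nodes are Ready
sudo kubectl get node -l node-role.kubernetes.io/control-plane
# Etcd runs as static pods in kube-system on RKE2
sudo kubectl get pods -n kube-system -l component=etcd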

danger

Never disconnect or power off more than one of the Snowballs that contain a control plane node at the same time. It is highly recommended to disconnect or power off only one Snowball device at a time.

Quorum Recovery

In the event of a quorum loss, recovery is possible, and the cluster can be brought back to a healthy state by following the steps below. RKE2 automatically creates Etcd backups and stores them locally on the control plane file system. These backups can be found in the /var/lib/rancher/rke2/server/db/snapshots directory.

  1. Stop the RKE2 Server service on all control plane nodes.
sudo systemctl stop rke2-server
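
The remaining steps follow the upstream RKE2 cluster-reset restore procedure; the snapshot file name below is a placeholder for one of the files in the snapshots directory, and this is a sketch of the documented flow rather than a deployment-specific runbook.

  2. On a single control plane node, restore from the most recent healthy snapshot.
sudo rke2 server --cluster-reset --cluster-reset-restore-path=/var/lib/rancher/rke2/server/db/snapshots/<snapshot-file>
  3. When the reset completes, start the RKE2 server service on that node.
sudo systemctl start rke2-server
  4. On the remaining control plane nodes, remove the stale Etcd data directory, then start the service so they rejoin the cluster.
sudo rm -rf /var/lib/rancher/rke2/server/db
sudo systemctl start rke2-server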

Stopping Nodes

If a node must be stopped for any reason, the following steps should be performed to ensure workloads are properly re-scheduled.

note

This should only be performed on a single control plane node at a time. See the Quorum section for additional details.

Multiple agent nodes can be stopped at the same time without impacting the functionality of the cluster; however, certain workloads may not be reschedulable until the nodes are restarted, resulting in application outages. Brief interruptions to the Kubernetes API server are probable when stopping a control plane node, and brief interruptions and/or connection resets to applications are probable when stopping agent nodes.

  1. Observe the status of nodes.
sudo kubectl get node
  2. Cordon and drain the node.
sudo kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

Wait for the output to indicate the node has been drained. If the cluster is unhealthy, this command may never complete; in that scenario, the output will indicate an inability to evict workload pods.

  3. Stop all RKE2 processes.
sudo rke2-killall.sh

At this point, the node is safe to power off or restart.
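
When the node is powered back on and has rejoined the cluster, uncordon it so workloads can be scheduled onto it again.

sudo kubectl uncordon <node-name>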

Utility Node Information

A utility node was provisioned to facilitate operational tasks. This node has Docker CE installed and running. A container image containing an array of Kubernetes-related tools has been imported and can be run interactively.

sudo docker run --rm -it -v "$(pwd)":/work edge-util

Once an interactive shell on the container has been started, the preloaded tools can be used, as depicted by the following examples:

export KUBECONFIG=/work/rke2-ansible/connection/rke2.yaml
kubectl get node
helm list -A
kustomize build /work/my-project
flux reconcile hr -n my-namespace my-helm-release
ansible-playbook -i mysite.yaml /work/my-ansible-project
aws configure
snowballEdge configure

Adding Container Images

An on-cluster container registry was provisioned as part of the deployment process. While its primary purpose is cluster bootstrapping, it can be used to store and serve application images.

  1. Copy an exported image tar to the utility node.
scp -i /path/to/structsure-kp-2.pem my-image.tar rocky@utility-node.example.com:~/
  2. Load the image into the local docker image cache on the utility node. Be sure to copy the full image name after the load has completed, as it will be required for the next step.
sudo docker image load -i my-image.tar
  3. Push the image using zarf.

Follow the Zarf instructions to push to the registry.

zarf tools registry push my-image.tar 127.0.0.1:31999/stefanprodan/podinfo:6.4.0
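
The push can be verified by listing the tags in the destination repository. The ls subcommand below assumes the Zarf registry tooling exposes crane's listing behavior; verify against your Zarf version.

zarf tools registry ls 127.0.0.1:31999/stefanprodan/podinfo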

Docker on the utility node should already be authenticated to the registry. If the registry credentials are required for any reason, they can be obtained by issuing the following commands from the utility node:

# from the host OS
sudo docker run --rm -it -v "$(pwd)":/work edge-util
# from the edge-util container
export KUBECONFIG=/work/rke2-ansible/connection/rke2.yaml
zarf tools get-creds

Adding Git Repositories

An on-cluster Git server (Gitea) was provisioned as part of the deployment process. The primary purpose is cluster bootstrapping, but it can serve as a Git server for application deployment manifests, as well, if desired.

# from the host OS
sudo docker run --rm -it -v "$(pwd)":/work edge-util
# from the edge-util container
export KUBECONFIG=/work/rke2-ansible/connection/rke2.yaml
# note down the zarf-git-user credentials for later
zarf tools get-creds
zarf connect git --cli-only --local-port 8080 > tunnel.log 2>&1 & tail -f tunnel.log
# Wait for the tunnel to connect
# Note the resulting URL
# Example Output:
# Saving log file to: /var/folders/j_/0hh7jh455tvgdmt8nw4kyqsh0000gn/T/zarf-2023-06-28-16-45-50-1451654622.log
# ⠙ Tunnel established at http://127.0.0.1:8080, waiting for user to interrupt (ctrl-c to end)
# Stop tailing the log by pressing ctrl+c
cd /work/my-git-repo
git remote add gitea http://127.0.0.1:8080/zarf-git-user/my-project.git
git push -u gitea my-branch
# Provide the zarf-git-user credentials when prompted.
# disconnect the tunnel when you're done
killall zarf
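
Once pushed, the repository can be consumed by Flux. The following is a minimal sketch of a GitRepository pointing at the on-cluster Git server; the in-cluster service URL and the credentials secret name are assumptions based on Zarf defaults, so verify them for your deployment.

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-project
  namespace: my-namespace
spec:
  interval: 5m
  # assumed in-cluster Gitea service URL; verify for your deployment
  url: http://zarf-gitea-http.zarf.svc.cluster.local:3000/zarf-git-user/my-project.git
  ref:
    branch: my-branch
  secretRef:
    name: my-git-credentials   # hypothetical secret holding the zarf-git-user credentials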

Using External Resources

By default, a mutating webhook will alter any image URI and any Flux GitRepository URL to use the on-cluster container registry and Git repository. If this behavior is not desired, a label can be added to the namespace to disable the webhook for all resources within the namespace.

kubectl create namespace my-namespace
kubectl label namespace my-namespace zarf.dev/agent=ignore

Checking Ceph Status

If an event causes an agent node to restart, the health of the Ceph cluster should be evaluated. As part of the Rook-Ceph deployment, a Ceph tools pod was provisioned with the required tooling and credentials to perform Ceph maintenance operations. The overall status of the cluster can be checked by issuing the following commands:

# Get the name of the ceph tools pod
kubectl get po -n rook-cluster -l app=rook-ceph-tools
# Issue ceph status command
kubectl exec -n rook-cluster <pod name> -- ceph status

Example output:

cluster:
  id:     <cluster id>
  health: HEALTH_OK

services:
  mon: 3 daemons, quorum a,b,c
  mgr: x(active)
  mds: cephfs_a-1/1/1 up {0=a=up:active}, 2 up:standby
  osd: 3 osds: 3 up, 3 in

data:
  pools:   2 pools, 16 pgs
  objects: 21 objects, 2.19K
  usage:   546 GB used, 384 GB / 931 GB avail
  pgs:     16 active+clean
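
If the status reports anything other than HEALTH_OK, additional detail can be requested from the same tools pod.

kubectl exec -n rook-cluster <pod name> -- ceph health detail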

Virtual IPs

Due to the lack of load balancing on Snowball Edge devices, a Kube-VIP controller was deployed to the cluster. This controller will assign "external" IP addresses to Kubernetes services of the LoadBalancer type. This controller will also use ARP to move this virtual IP between the nodes in the event of a node failure.

The Istio Ingress Gateway service has an IP address assigned to it. If additional services require an IP address, they can be added by editing the kubevip configmap in the kube-system namespace. An example configmap can be found below. Please note that removing or changing the IP address assigned to the istio-system namespace will cause the applications on the cluster to be inaccessible until the DNS records are manually updated.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  cidr-istio-system: 192.168.1.11/32
  cidr-my-namespace: 192.168.1.15/32
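
A service can then claim the address allocated to its namespace by requesting the LoadBalancer type. The following is a minimal sketch; the selector and ports are placeholders, and the assumption is that Kube-VIP assigns the cidr-my-namespace address to LoadBalancer services in that namespace.

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  type: LoadBalancer          # Kube-VIP assigns 192.168.1.15 from cidr-my-namespace
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443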