Merge pull request #37 from aws-controllers-k8s/iamahgoub/cluster-mgmt-example

Add EKS cluster management example
a-hilaly authored Oct 7, 2024
2 parents 8b899c1 + 9df0992 commit 892e1cf
Showing 19 changed files with 1,180 additions and 0 deletions.
162 changes: 162 additions & 0 deletions examples/eks-cluster-mgmt/README.md
@@ -0,0 +1,162 @@
# Amazon EKS cluster management using Symphony & ACK
This example demonstrates how to manage a fleet of Amazon EKS clusters using Symphony, ACK, and ArgoCD: it creates EKS clusters and bootstraps them with the required add-ons.

A hub-and-spoke model is used in this example: a management cluster (hub) is created as part of the initial setup, and the controllers needed for provisioning and bootstrapping workload clusters (spokes) are installed on top of it.


**NOTE:** As this example evolves, some of the instructions below will be detailed further (e.g. the creation of the management cluster), while others (e.g. controller installation) will be automated via the GitOps flow.

## Instructions
### Environment variables

1. Use the snippet below to set environment variables. Replace the placeholders (surrounded with `<>`) first:
```sh
export SYMPHONY_REPO_URL="https://github.com/aws-controllers-k8s/private-symphony.git"
export WORKSPACE_PATH=<workspace-path> # the directory where repos will be cloned, e.g. ~/environment/
export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
export AWS_REGION=<region> # e.g. us-west-2
export CLUSTER_NAME=mgmt
export ARGOCD_CHART_VERSION=7.5.2
```

### Management cluster
2. Create an EKS cluster to serve as the management cluster; one way to create it is sketched below.
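A minimal sketch, assuming `eksctl` is installed and the environment variables above are set; adjust the Kubernetes version, node sizing, and networking to your needs:
```sh
# Create the management (hub) cluster with eksctl defaults (a managed node group is created for you)
eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION
```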
3. Create an IAM OIDC provider for the cluster:
```sh
eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve
```
4. Save the OIDC provider URL in an environment variable:
```sh
export OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTER_NAME --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
```
```
5. Install the following ACK controllers on the management cluster (a Helm-based sketch follows this list):
- ACK IAM controller
- ACK EC2 controller
- ACK EKS controller
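A minimal sketch using the publicly published ACK Helm charts; the `<version>` placeholders are assumptions to fill in, and each controller additionally needs AWS permissions (e.g. via IRSA using the OIDC provider created above) as described in the ACK documentation:
```sh
# Install the IAM, EC2, and EKS ACK controllers into the ack-system namespace
for SERVICE in iam ec2 eks; do
  helm install --create-namespace -n ack-system ack-$SERVICE-controller \
    oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart \
    --version=<version> --set=aws.region=$AWS_REGION
done
```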
6. Install Symphony on the management cluster. Please note that this example was tested with Symphony version 0.1.0-rc.3.
7. Install the EKS Pod Identity add-on:
```sh
aws eks create-addon --cluster-name $CLUSTER_NAME --addon-name eks-pod-identity-agent --addon-version v1.0.0-eksbuild.1
```
### Repo
8. Clone the Symphony repo:
```sh
git clone $SYMPHONY_REPO_URL $WORKSPACE_PATH/symphony
```

9. Create a GitHub repo named `cluster-mgmt` in your organization; it will contain the cluster definitions and will be reconciled to the management cluster via the GitOps flow.

**NOTE:** Until Symphony is released, make sure the repo you create is private.
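If you use the GitHub CLI, one way to create the private repo from the previous step (assuming `gh` is installed and authenticated, and `<org>` is your organization):
```sh
# Create a private repo to hold the cluster definitions
gh repo create <org>/cluster-mgmt --private
```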

10. Save the URL of the created repo in an environment variable:
```sh
export MY_REPO_URL=<repo-url> #e.g. https://github.com/iamahgoub/cluster-mgmt.git
```

11. Clone the created repo:
```sh
git clone $MY_REPO_URL $WORKSPACE_PATH/cluster-mgmt
```
12. Populate the repo:
```sh
cp -r $WORKSPACE_PATH/symphony/examples/eks-cluster-mgmt/* $WORKSPACE_PATH/cluster-mgmt

find $WORKSPACE_PATH/cluster-mgmt -type f -exec sed -i "s~ACCOUNT_ID~$ACCOUNT_ID~g" {} +
find $WORKSPACE_PATH/cluster-mgmt -type f -exec sed -i "s~MY_REPO_URL~$MY_REPO_URL~g" {} +
find $WORKSPACE_PATH/cluster-mgmt -type f -exec sed -i "s~AWS_REGION~$AWS_REGION~g" {} +
find $WORKSPACE_PATH/cluster-mgmt -type f -exec sed -i "s~CLUSTER_NAME~$CLUSTER_NAME~g" {} +
find $WORKSPACE_PATH/cluster-mgmt -type f -exec sed -i "s~OIDC_PROVIDER~$OIDC_PROVIDER~g" {} +
```
13. Push the changes:
```sh
cd $WORKSPACE_PATH/cluster-mgmt
git add .
git commit -m "initial setup"
git push
cd $WORKSPACE_PATH
```

### ArgoCD installation
14. Create an IAM role for ArgoCD on the management cluster and associate it with the ArgoCD `ServiceAccount`:
```sh
cat >argocd-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF

aws iam create-policy --policy-name argocd-policy --policy-document file://argocd-policy.json

cat >argocd-trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF

aws iam create-role --role-name argocd-hub-role --assume-role-policy-document file://argocd-trust-relationship.json --description ""
aws iam attach-role-policy --role-name argocd-hub-role --policy-arn=arn:aws:iam::$ACCOUNT_ID:policy/argocd-policy

aws eks create-pod-identity-association --cluster-name $CLUSTER_NAME --role-arn arn:aws:iam::$ACCOUNT_ID:role/argocd-hub-role --namespace argocd --service-account argocd-application-controller
```
15. Install the ArgoCD Helm chart:
```sh
helm repo add argo-cd https://argoproj.github.io/argo-helm
helm upgrade --install argocd argo-cd/argo-cd --version $ARGOCD_CHART_VERSION \
--namespace "argocd" --create-namespace \
--set server.service.type=LoadBalancer \
--wait
```
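After the chart is installed, the initial admin password can be retrieved to log in to the ArgoCD UI exposed by the `LoadBalancer` service (a hedged sketch; it assumes the chart's default behavior of creating the `argocd-initial-admin-secret` Secret):
```sh
# Print the initial admin password for the ArgoCD UI
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
```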

### Bootstrapping

16. Create an ArgoCD `Repository` configuration that points to the `cluster-mgmt` repo created in an earlier step (a declarative sketch follows).
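A minimal declarative sketch, assuming the repo is private and accessed over HTTPS with a personal access token; the Secret name and the credential placeholders are assumptions:
```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: cluster-mgmt-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: $MY_REPO_URL
  username: <github-username>
  password: <github-personal-access-token>
EOF
```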
17. Apply the bootstrap ArgoCD application:
```sh
kubectl apply -f $WORKSPACE_PATH/cluster-mgmt/gitops/bootstrap.yaml
```
The initial configuration creates one workload cluster named `workload-cluster1`. Feel free to add more by editing the configuration under `clusters/`.


## Clean-up
1. Delete the ArgoCD bootstrap application, and wait for the workload clusters and their hosting VPCs to be deleted:
```sh
kubectl delete application bootstrap -n argocd
```
2. Uninstall the ArgoCD Helm chart:
```sh
helm uninstall argocd -n argocd
```
3. Delete the ArgoCD IAM role (the attached policy must be detached first):
```sh
aws iam detach-role-policy --role-name argocd-hub-role --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/argocd-policy
aws iam delete-role --role-name argocd-hub-role
```
4. Delete the ArgoCD IAM policy:
```sh
aws iam delete-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/argocd-policy
```
5. Delete the ACK controllers and Symphony.
6. Delete the management cluster; a sketch for eksctl-created clusters follows.
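If the management cluster was created with `eksctl` as sketched earlier, it can be deleted the same way:
```sh
# Delete the management (hub) cluster and its eksctl-managed resources
eksctl delete cluster --name $CLUSTER_NAME --region $AWS_REGION
```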
23 changes: 23 additions & 0 deletions examples/eks-cluster-mgmt/charts/karpenter-iam/.helmignore
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
19 changes: 19 additions & 0 deletions examples/eks-cluster-mgmt/charts/karpenter-iam/Chart.yaml
@@ -0,0 +1,19 @@
apiVersion: v2
name: karpenter-iam
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.0.0

66 changes: 66 additions & 0 deletions examples/eks-cluster-mgmt/charts/karpenter-iam/_helpers.tpl
@@ -0,0 +1,66 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "resources.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "resources.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "resources.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "resources.labels" -}}
helm.sh/chart: {{ include "resources.chart" . }}
{{ include "resources.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "resources.selectorLabels" -}}
app.kubernetes.io/name: {{ include "resources.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "resources.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "resources.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

{{- define "toValidName" -}}
{{- printf "%s" . | regexReplaceAll "[^a-z0-9.-]" "-" | lower -}}
{{- end -}}