diff --git a/streaming/redpanda/README.md b/streaming/redpanda/README.md
new file mode 100644
index 000000000..543b4faf1
--- /dev/null
+++ b/streaming/redpanda/README.md
@@ -0,0 +1,259 @@
+# Redpanda on Amazon EKS
****
Redpanda is a simple, powerful, and cost-efficient streaming data platform. It is compatible with the Kafka APIs while being less complex, faster, and more affordable to operate. Let's take a look at how we can deploy it on Amazon EKS!
****
![redpanda.png](redpanda.png)
## Prerequisites

This guide has the following prerequisites for setting up and deploying Redpanda on EKS:

* Terraform
* AWS-CLI
* Kubectl
* Helm
* jq

****

## Terraform
****
Terraform is an infrastructure-as-code tool that enables you to safely and predictably provision and manage infrastructure in any cloud. Install the latest version:

https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli

****

## AWS-CLI
****
AWS-CLI is a command line tool for managing AWS resources. Install the latest version:

https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
****

## Kubectl

Kubectl is a command line tool used to communicate with the Kubernetes API server. Install the latest kubectl for your platform from here:

https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

****

## Helm

Helm is used as the Kubernetes package manager for deploying Redpanda. Install the latest version of Helm for your platform from here:

https://helm.sh/docs/intro/install/

****

Provider Versions Tested
| Provider | Version |
| ----------- | ----------- |
| Terraform | v1.7.2 |
| Helm | v3.14.0 |
| Redpanda | v5.7.22 |

## Setting up the Cluster

The Redpanda deployment documentation was used extensively in the creation of this guide. You can view the Redpanda guide at the following location:

https://docs.redpanda.com/current/deploy/deployment-option/self-hosted/kubernetes/eks-guide/

In setting up and configuring our Redpanda cluster in AWS, we use an Amazon EKS cluster as described in the guide linked above.

We will use Terraform to deploy our AWS resources (VPC, EKS, EKS add-ons) and our Redpanda cluster. Clone the files from this repository to get started.

To stand up the Amazon EKS cluster and infrastructure, run Terraform init, plan, and apply:
```
terraform init
terraform plan
terraform apply -auto-approve
```

After some time, you should see output that includes the message “Congratulations on installing redpanda!”, along with information on your cluster as well as a number of sample commands that you can use.

Configure access to your Amazon EKS cluster:
```
aws eks --region us-west-2 update-kubeconfig --name doeks-redpanda
```
Verify the cluster:
```
kubectl get nodes
```
You should see 6 worker nodes: 3 deployed for EKS add-ons (cert-manager, the cluster autoscaler, monitoring tools, etc.) and 3 for the Redpanda brokers.

## Configuring the Environment

To use our Redpanda cluster, we need to complete a few steps:

* Export the external certificate that was generated during the prior steps from the Redpanda cluster, and store it in AWS Secrets Manager
* Get our superuser password from AWS Secrets Manager and set up environment variables

We will dive deeper into these steps in the following sections.

## 1/ Export the Certificate and Store It in AWS Secrets Manager

Export the certificate from Kubernetes; it will be used to access the Redpanda nodes.
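Before exporting, you can optionally confirm that the secret is present; the `redpanda-external-cert` name comes from the Redpanda Helm chart defaults used by this deployment:

```
kubectl get secrets -n redpanda
```

Then export the CA certificate: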
```
kubectl get secret -n redpanda redpanda-external-cert -o go-template='{{ index .data "ca.crt" | base64decode }}' > ca.crt
```

Now we will import this certificate into AWS Secrets Manager so we can use it later:

```
aws secretsmanager create-secret --name redpandarootcert --secret-string file://ca.crt
```

Verify:

```
aws secretsmanager get-secret-value --secret-id redpandarootcert --output json | jq .SecretString
```

Alternatively, this can be done with AWS Certificate Manager (ACM); see the ACM documentation for details.

## 2/ Set Up Environment Variables for the Redpanda Cluster

Create the environment variables below. We pull the superuser password from the AWS Secrets Manager secret we created with Terraform, then configure a topic username of "redpanda-twitch-account" and a password of "changethispassword", which will be used later for testing topics. This password is for lab purposes only; be sure to secure it in a real deployment.
```
SUPUSER="superuser"
SUPPASS=$(aws secretsmanager get-secret-value --secret-id redpanda_password-1234 --query "SecretString" --output text)
REGUSER="redpanda-twitch-account"
REGPASS="changethispassword"
```

## Creating a User

Now that we have set up and deployed the Redpanda cluster, we need to create a user. The username and password come from the environment variables set in the previous step. Use the following command to create the user:

```
kubectl --namespace redpanda exec -ti redpanda-0 -c redpanda -- \
rpk acl user create $REGUSER \
-p $REGPASS
```

You should see the following output confirming successful creation of the user:

```
Created user "redpanda-twitch-account".
```

Save the username and password; they will be used in the steps that follow.

## Creating a Topic

Next, we will use the superuser to grant the newly created user permission to execute all operations for a topic called twitch-chat. Feel free to use the topic name of your choice:

```
kubectl exec --namespace redpanda -c redpanda redpanda-0 -- \
  rpk acl create --allow-principal User:$REGUSER \
  --operation all \
  --topic twitch-chat \
  -X user=$SUPUSER -X pass=$SUPPASS -X sasl.mechanism=SCRAM-SHA-512
```

You should then see output similar to the following:

```
PRINCIPAL                     HOST  RESOURCE-TYPE  RESOURCE-NAME  RESOURCE-PATTERN-TYPE  OPERATION  PERMISSION  ERROR
User:redpanda-twitch-account  *     TOPIC          twitch-chat    LITERAL                ALL        ALLOW
```

In the following steps, we are going to use the newly created user account to create the topic, produce messages to the topic, and consume messages from the topic.

First, we will create an alias to simplify the usage of the rpk commands used to work with the Redpanda deployment. Use the following command to configure the alias:

```
alias internal-rpk="kubectl --namespace redpanda exec -i -t redpanda-0 -c redpanda -- rpk -X user=$REGUSER -X pass=$REGPASS -X sasl.mechanism=SCRAM-SHA-256"
```

Next, create the topic “twitch-chat” with the following command:

```
internal-rpk topic create twitch-chat
```

You should see the following output after executing the above command:

```
TOPIC        STATUS
twitch-chat  OK
```

View the details of the topic just created by executing the following command:

```
internal-rpk topic describe twitch-chat
```

## Produce and Consume Messages

Now use the following command to interactively produce messages to the topic:

```
internal-rpk topic produce twitch-chat
```

Type in some text and press Enter to publish the message.
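As a non-interactive alternative, you can also pipe a record into the same command (a sketch, assuming the `internal-rpk` alias above is defined in your current shell; kubectl prints a TTY warning because of the `-t` flag, but the message is still produced):

```
echo "hello from a pipe" | internal-rpk topic produce twitch-chat
```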
After publishing several messages, press Ctrl+C to end the interactive produce command.

The output should look something like the following:

```
hello world
Produced to partition 0 at offset 0 with timestamp 1702851801374.
hello world 2
Produced to partition 0 at offset 1 with timestamp 1702851806788.
hello world 3
Produced to partition 0 at offset 2 with timestamp 1702851810335.
hello world 4
Produced to partition 0 at offset 3 with timestamp 1702851813904.
^Ccommand terminated with exit code 130
```

Next, use the following command to consume one message from the topic:

```
internal-rpk topic consume twitch-chat --num 1
```

The output should look similar to the following:

```
{
  "topic": "twitch-chat",
  "value": "hello world",
  "timestamp": 1702851801374,
  "partition": 0,
  "offset": 0
}
```
## Accessing the Redpanda Console

Having verified that you can produce and consume messages, we will next access the Redpanda Console by port forwarding it to localhost. This can be done using the following command:

```
kubectl --namespace redpanda port-forward svc/redpanda-console 8080:8080
```

**Note:** If you are using Cloud9, you will need to use the following alternate command to do the port forwarding:

```
kubectl --namespace redpanda port-forward --address 0.0.0.0 svc/redpanda-console 8080:8080
```

You will also need to allow traffic on port 8080 from your IP address. If you are using Cloud9, edit the security group of your Cloud9 instance to allow port 8080 inbound with a source of “My IP”.

Do not allow full public access to port 8080, as the Redpanda Community Edition license does not enable authentication on the Redpanda Console.

Once you are able to access the Redpanda Console, you can view information about your brokers (IP addresses and IDs) as well as information on your topics. You can view the messages produced to your topics, and produce additional messages to topics using the web interface.

## Conclusion

With the Redpanda cluster now running on EKS, you can dive deeper by integrating it with other AWS services and creating your own consumers and producers; a small example follows.
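Here is a sketch of one next step (the consumer group name `chat-consumer` is hypothetical; the users, topic, and alias come from the steps above): grant the test user read access to a consumer group, then consume through that group so offsets are committed on the brokers:

```
kubectl exec --namespace redpanda -c redpanda redpanda-0 -- \
  rpk acl create --allow-principal User:$REGUSER \
  --operation read \
  --group chat-consumer \
  -X user=$SUPUSER -X pass=$SUPPASS -X sasl.mechanism=SCRAM-SHA-512

internal-rpk topic consume twitch-chat --group chat-consumer --num 1
```

Any Kafka-compatible client library can connect in the same way, using the SCRAM credentials created earlier and, for connections from outside the cluster, the CA certificate stored in AWS Secrets Manager.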
****
diff --git a/streaming/redpanda/addons.tf b/streaming/redpanda/addons.tf
new file mode 100644
index 000000000..8dbd2db3e
--- /dev/null
+++ b/streaming/redpanda/addons.tf
@@ -0,0 +1,235 @@
+
################################################################################
# EKS Addons
################################################################################
#---------------------------------------------------------------
# IRSA
#---------------------------------------------------------------

module "ebs_csi_driver_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~> 5.20"

  role_name_prefix = "${module.eks.cluster_name}-ebs-csi-driver-"

  attach_ebs_csi_policy = true

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
    }
  }

  tags = local.tags
}
#---------------------------------------------------------------
# GP2 to GP3 default storage class and config for Redpanda
#---------------------------------------------------------------
resource "kubernetes_annotations" "gp2_default" {
  annotations = {
    "storageclass.kubernetes.io/is-default-class" : "false"
  }
  api_version = "storage.k8s.io/v1"
  kind        = "StorageClass"
  metadata {
    name = "gp2"
  }
  force = true

  depends_on = [module.eks]
}

resource "kubernetes_storage_class" "ebs_csi_encrypted_gp3_storage_class" {
  metadata {
    name = "gp3"
    annotations = {
      "storageclass.kubernetes.io/is-default-class" : "true"
    }
  }

  storage_provisioner    = "ebs.csi.aws.com"
  reclaim_policy         = "Retain"
  allow_volume_expansion = true
  volume_binding_mode    = "WaitForFirstConsumer"
  parameters = {
    fsType = "xfs"
    type   = "gp3"
  }

  depends_on = [kubernetes_annotations.gp2_default]
}

#---------------------------------------
# Redpanda Config
#---------------------------------------
data "aws_secretsmanager_secret_version" "redpanada_password_version" {
  secret_id  = aws_secretsmanager_secret.redpanada_password.id
  depends_on = [aws_secretsmanager_secret_version.redpanada_password_version]
}

resource "random_password" "redpanada_password" {
  length  = 16
  special = false
}
resource "aws_secretsmanager_secret" "redpanada_password" {
  name                    = "redpanda_password-1234"
  recovery_window_in_days = 0
}
resource "aws_secretsmanager_secret_version" "redpanada_password_version" {
  secret_id     = aws_secretsmanager_secret.redpanada_password.id
  secret_string = random_password.redpanada_password.result
}

#---------------------------------------------------------------
# Grafana Admin credentials resources
#---------------------------------------------------------------

data "aws_secretsmanager_secret_version" "grafana_password_version" {
  secret_id  = aws_secretsmanager_secret.grafana.id
  depends_on = [aws_secretsmanager_secret_version.grafana_password_version]
}

resource "random_string" "random_suffix" {
  length  = 10
  special = false
  upper   = false
}

resource "random_password" "grafana" {
  length           = 16
  special          = true
  override_special = "!#$%&*()-_=+[]{}<>:?"
}

#tfsec:ignore:aws-ssm-secret-use-customer-key
resource "aws_secretsmanager_secret" "grafana" {
  name                    = "grafana-${random_string.random_suffix.result}"
  recovery_window_in_days = 0 # Set to zero for this example to force delete during Terraform destroy
}

resource "aws_secretsmanager_secret_version" "grafana_password_version" {
  secret_id     = aws_secretsmanager_secret.grafana.id
  secret_string = random_password.grafana.result
}

#---------------------------------------------------------------
# EKS Blueprints Kubernetes Addons
#---------------------------------------------------------------
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.2"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  #---------------------------------------
  # Amazon EKS Managed Add-ons
  #---------------------------------------
  eks_addons = {
    aws-ebs-csi-driver = {
      most_recent              = true
      service_account_role_arn = module.ebs_csi_driver_irsa.iam_role_arn
    }
    coredns = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
  }
  enable_cluster_autoscaler     = true
  enable_metrics_server         = true
  enable_aws_cloudwatch_metrics = true
  enable_cert_manager           = true

  #---------------------------------------
  # FluentBit Config for EKS Cluster
  #---------------------------------------
  enable_aws_for_fluentbit = true
  aws_for_fluentbit = {
    enable_containerinsights = true
    kubelet_monitoring       = true
    set = [{
      name  = "cloudWatchLogs.autoCreateGroup"
      value = true
      },
      {
        name  = "hostNetwork"
        value = true
      },
      {
        name  = "dnsPolicy"
        value = "ClusterFirstWithHostNet"
      }
    ]

    tags = local.tags
  }

  #---------------------------------------
  # Prometheus and Grafana stack
  #---------------------------------------
  #---------------------------------------------------------------
  # Install the monitoring stack with Prometheus and Grafana
  # 1- Grafana port-forward `kubectl port-forward svc/kube-prometheus-stack-grafana 8080:80 -n kube-prometheus-stack`
  # 2- Grafana Admin user: admin
  # 3- Get admin user password (replace <grafana_secret_name> with the grafana-* secret created above): `aws secretsmanager get-secret-value --secret-id <grafana_secret_name> --region $AWS_REGION --query "SecretString" --output text`
  #---------------------------------------------------------------
  enable_kube_prometheus_stack = true
  kube_prometheus_stack = {
    values = [
      var.enable_amazon_prometheus ?
templatefile("${path.module}/helm-values/kube-prometheus-amp-enable.yaml", { + region = local.region + amp_sa = local.amp_ingest_service_account + amp_irsa = module.amp_ingest_irsa[0].iam_role_arn + amp_remotewrite_url = "https://aps-workspaces.${local.region}.amazonaws.com/workspaces/${aws_prometheus_workspace.amp[0].id}/api/v1/remote_write" + amp_url = "https://aps-workspaces.${local.region}.amazonaws.com/workspaces/${aws_prometheus_workspace.amp[0].id}" + storage_class_type = kubernetes_storage_class.ebs_csi_encrypted_gp3_storage_class.id + }) : templatefile("${path.module}/helm-values/kube-prometheus.yaml", {}) + ] + chart_version = "48.1.1" + set_sensitive = [ + { + name = "grafana.adminPassword" + value = data.aws_secretsmanager_secret_version.grafana_password_version.secret_string + } + ], + } + + tags = local.tags +} +#--------------------------------------- +## Redpanda Helm Config +#--------------------------------------- +resource "helm_release" "redpanda" { + name = "redpanda" + repository = "https://charts.redpanda.com" + chart = "redpanda" + version = "5.7.22" + namespace = "redpanda" + create_namespace = true + + values = [ + templatefile("${path.module}/helm-values/redpanda-values.yaml", { + redpanda_username = var.redpanda_username, + redpanda_password = data.aws_secretsmanager_secret_version.redpanada_password_version.secret_string, + redpanda_domain = var.redpanda_domain, + storage_class = "gp3" + }) + ] + #timeout = "3600" + depends_on = [module.eks_blueprints_addons] +} diff --git a/streaming/redpanda/amp.tf b/streaming/redpanda/amp.tf new file mode 100644 index 000000000..96df2a495 --- /dev/null +++ b/streaming/redpanda/amp.tf @@ -0,0 +1,137 @@ +#IAM Policy for Amazon Prometheus & Grafana +resource "aws_iam_policy" "grafana" { + count = var.enable_amazon_prometheus ? 1 : 0 + + description = "IAM policy for Grafana Pod" + name_prefix = format("%s-%s-", local.name, "grafana") + path = "/" + policy = data.aws_iam_policy_document.grafana[0].json +} + +data "aws_iam_policy_document" "grafana" { + count = var.enable_amazon_prometheus ? 
1 : 0

  statement {
    sid       = "AllowReadingMetricsFromCloudWatch"
    effect    = "Allow"
    resources = ["*"]

    actions = [
      "cloudwatch:DescribeAlarmsForMetric",
      "cloudwatch:ListMetrics",
      "cloudwatch:GetMetricData",
      "cloudwatch:GetMetricStatistics"
    ]
  }

  statement {
    sid       = "AllowGetInsightsCloudWatch"
    effect    = "Allow"
    resources = ["arn:${local.partition}:cloudwatch:${local.region}:${local.account_id}:insight-rule/*"]

    actions = [
      "cloudwatch:GetInsightRuleReport",
    ]
  }

  statement {
    sid       = "AllowReadingAlarmHistoryFromCloudWatch"
    effect    = "Allow"
    resources = ["arn:${local.partition}:cloudwatch:${local.region}:${local.account_id}:alarm:*"]

    actions = [
      "cloudwatch:DescribeAlarmHistory",
      "cloudwatch:DescribeAlarms",
    ]
  }

  statement {
    sid       = "AllowReadingLogsFromCloudWatch"
    effect    = "Allow"
    resources = ["arn:${local.partition}:logs:${local.region}:${local.account_id}:log-group:*:log-stream:*"]

    actions = [
      "logs:DescribeLogGroups",
      "logs:GetLogGroupFields",
      "logs:StartQuery",
      "logs:StopQuery",
      "logs:GetQueryResults",
      "logs:GetLogEvents",
    ]
  }

  statement {
    sid       = "AllowReadingTagsInstancesRegionsFromEC2"
    effect    = "Allow"
    resources = ["*"]

    actions = [
      "ec2:DescribeTags",
      "ec2:DescribeInstances",
      "ec2:DescribeRegions",
    ]
  }

  statement {
    sid       = "AllowReadingResourcesForTags"
    effect    = "Allow"
    resources = ["*"]
    actions   = ["tag:GetResources"]
  }

  statement {
    sid    = "AllowListApsWorkspaces"
    effect = "Allow"
    resources = [
      "arn:${local.partition}:aps:${local.region}:${local.account_id}:/*",
      "arn:${local.partition}:aps:${local.region}:${local.account_id}:workspace/*",
      "arn:${local.partition}:aps:${local.region}:${local.account_id}:workspace/*/*",
    ]
    actions = [
      "aps:ListWorkspaces",
      "aps:DescribeWorkspace",
      "aps:GetMetricMetadata",
      "aps:GetSeries",
      "aps:QueryMetrics",
      "aps:RemoteWrite",
      "aps:GetLabels"
    ]
  }
}

#------------------------------------------
# Amazon Prometheus
#------------------------------------------
locals {
  amp_ingest_service_account = "amp-iamproxy-ingest-service-account"
  amp_namespace              = "kube-prometheus-stack"
}

resource "aws_prometheus_workspace" "amp" {
  count = var.enable_amazon_prometheus ? 1 : 0

  alias = format("%s-%s", "amp-ws", local.name)
  tags  = local.tags
}

module "amp_ingest_irsa" {
  count = var.enable_amazon_prometheus ? 1 : 0

  source         = "aws-ia/eks-blueprints-addon/aws"
  version        = "~> 1.0"
  create_release = false
  create_role    = true
  create_policy  = false
  role_name      = format("%s-%s", local.name, "amp-ingest")
  role_policies  = { amp_policy = aws_iam_policy.grafana[0].arn }

  oidc_providers = {
    this = {
      provider_arn    = module.eks.oidc_provider_arn
      namespace       = local.amp_namespace
      service_account = local.amp_ingest_service_account
    }
  }

  tags = local.tags
}
diff --git a/streaming/redpanda/cleanup.sh b/streaming/redpanda/cleanup.sh
new file mode 100644
index 000000000..3d6e650df
--- /dev/null
+++ b/streaming/redpanda/cleanup.sh
@@ -0,0 +1,34 @@
+#!/bin/bash
# No errexit here: failures are handled explicitly below
set -o pipefail

read -p "Enter the region: " region
export AWS_DEFAULT_REGION=$region

targets=(
  "module.eks_blueprints_addons"
  "module.ebs_csi_driver_irsa"
  "module.eks"
  "module.vpc"
)

# Destroy modules in sequence, checking both the exit code and the output
for target in "${targets[@]}"
do
  destroy_output=$(terraform destroy -target="$target" -auto-approve 2>&1 | tee /dev/tty)
  if [[ ${PIPESTATUS[0]} -eq 0 && $destroy_output == *"Destroy complete!"* ]]; then
    echo "SUCCESS: Terraform destroy of $target completed successfully"
  else
    echo "FAILED: Terraform destroy of $target failed"
    exit 1
  fi
done

# Final destroy to catch any remaining resources
destroy_output=$(terraform destroy -auto-approve 2>&1 | tee /dev/tty)
if [[ ${PIPESTATUS[0]} -eq 0 && $destroy_output == *"Destroy complete!"* ]]; then
  echo "SUCCESS: Terraform destroy of all targets completed successfully"
else
  echo "FAILED: Terraform destroy of all targets failed"
  exit 1
fi
diff --git a/streaming/redpanda/helm-values/kube-prometheus-amp-enable.yaml b/streaming/redpanda/helm-values/kube-prometheus-amp-enable.yaml
new file mode 100644
index 000000000..20b4a75d7
--- /dev/null
+++ b/streaming/redpanda/helm-values/kube-prometheus-amp-enable.yaml
@@ -0,0 +1,52 @@
+prometheus:
  serviceAccount:
    create: true
    name: ${amp_sa}
    annotations:
      eks.amazonaws.com/role-arn: ${amp_irsa}
  prometheusSpec:
    remoteWrite:
      - url: ${amp_remotewrite_url}
        sigv4:
          region: ${region}
        queueConfig:
          maxSamplesPerSend: 1000
          maxShards: 200
          capacity: 2500
    retention: 5h
    scrapeInterval: 30s
    evaluationInterval: 30s
    scrapeTimeout: 10s
    storageSpec:
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: ${storage_class_type}
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi
alertmanager:
  enabled: false

grafana:
  enabled: true
  defaultDashboardsEnabled: true
  # Adding AMP datasource to Grafana config
  serviceAccount:
    create: false
    name: ${amp_sa}
  grafana.ini:
    auth:
      sigv4_auth_enabled: true
  additionalDataSources:
    - name: AMP
      editable: true
      jsonData:
        sigV4Auth: true
        sigV4Region: ${region}
      type: prometheus
      isDefault: false
      url: ${amp_url}
diff --git a/streaming/redpanda/helm-values/kube-prometheus.yaml b/streaming/redpanda/helm-values/kube-prometheus.yaml
new file mode 100644
index 000000000..498fb2824
--- /dev/null
+++ b/streaming/redpanda/helm-values/kube-prometheus.yaml
@@ -0,0 +1,23 @@
+prometheus:
  prometheusSpec:
    retention: 5h
    scrapeInterval: 30s
    evaluationInterval: 30s
    scrapeTimeout: 10s
    storageSpec:
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: ${storage_class_type}
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi
alertmanager:
  enabled: false

grafana:
  enabled: true
  defaultDashboardsEnabled: true
diff --git a/streaming/redpanda/helm-values/redpanda-values.yaml b/streaming/redpanda/helm-values/redpanda-values.yaml
new file mode 100644
index 000000000..00be421c5
--- /dev/null
+++ b/streaming/redpanda/helm-values/redpanda-values.yaml
@@ -0,0 +1,19 @@
+auth:
  sasl:
    enabled: true
    secretRef: redpanda-superusers
    users:
      - name: ${redpanda_username}
        password: "${redpanda_password}"

storage:
  persistentVolume:
    enabled: true
    size: 20Gi
    storageClass: ${storage_class}
    labels: {}
    annotations: {}

external:
  domain: ${redpanda_domain}
diff --git a/streaming/redpanda/install.sh b/streaming/redpanda/install.sh
new file mode 100644
index 000000000..83c76e620
--- /dev/null
+++ b/streaming/redpanda/install.sh
@@ -0,0 +1,35 @@
+#!/bin/bash

read -p "Enter the region: " region
export AWS_DEFAULT_REGION=$region

# List of Terraform modules to apply in sequence
targets=(
  "module.vpc"
  "module.eks"
  "module.ebs_csi_driver_irsa"
  "module.eks_blueprints_addons"
)

# Apply modules in sequence
for target in "${targets[@]}"
do
  echo "Applying module $target..."
  apply_output=$(terraform apply -target="$target" -auto-approve 2>&1 | tee /dev/tty)
  if [[ ${PIPESTATUS[0]} -eq 0 && $apply_output == *"Apply complete"* ]]; then
    echo "SUCCESS: Terraform apply of $target completed successfully"
  else
    echo "FAILED: Terraform apply of $target failed"
    exit 1
  fi
done

# Final apply to catch any remaining resources
echo "Applying remaining resources..."
apply_output=$(terraform apply -auto-approve 2>&1 | tee /dev/tty)
if [[ ${PIPESTATUS[0]} -eq 0 && $apply_output == *"Apply complete"* ]]; then
  echo "SUCCESS: Terraform apply of all modules completed successfully"
else
  echo "FAILED: Terraform apply of all modules failed"
  exit 1
fi
diff --git a/streaming/redpanda/main.tf b/streaming/redpanda/main.tf
new file mode 100644
index 000000000..9cbe8058d
--- /dev/null
+++ b/streaming/redpanda/main.tf
@@ -0,0 +1,86 @@
+################################################################################
# Data
################################################################################

data "aws_availability_zones" "available" {}
data "aws_caller_identity" "current" {}
data "aws_partition" "current" {}

################################################################################
# Local Variables
################################################################################
locals {
  name       = var.name
  region     = var.region
  account_id = data.aws_caller_identity.current.account_id
  partition  = data.aws_partition.current.partition

  vpc_cidr = var.vpc_cidr
  azs      = slice(data.aws_availability_zones.available.names, 0, 2)

  tags = {}
}

################################################################################
# Cluster and Managed Node Group
################################################################################
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.15"

  cluster_name    = local.name
  cluster_version = var.eks_cluster_version

  vpc_id                          = module.vpc.vpc_id
  subnet_ids                      = module.vpc.private_subnets
  cluster_endpoint_public_access  = true
  cluster_endpoint_private_access = true

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }
  #---------------------------------------------------------------
  # Managed Node Group - Core Services
  #---------------------------------------------------------------
  eks_managed_node_groups = {
    core_node_group = {
      name           = "core-mng-01"
      description    = "Core EKS managed node group"
      instance_types = ["m5.xlarge"]
      min_size       = 3
      max_size       = 6
      desired_size   = 3

      #---------------------------------------------------------------
      # The Redpanda brokers are also scheduled onto this node group;
      # a dedicated Redpanda managed node group could be defined
      # alongside core_node_group if isolation is needed.
      #---------------------------------------------------------------
    }
  }
}
diff --git a/streaming/redpanda/outputs.tf b/streaming/redpanda/outputs.tf
new file mode 100644
index 000000000..eb153384e
--- /dev/null
+++ b/streaming/redpanda/outputs.tf
@@ -0,0 +1,43 @@
+################################################################################
# Cluster
################################################################################
output "cluster_arn" {
  description = "The Amazon Resource Name (ARN) of the cluster"
  value       = module.eks.cluster_arn
}

output "cluster_name" {
  description = "The name of the EKS cluster"
  value       = module.eks.cluster_name
}

output "oidc_provider" {
  description = "The OIDC issuer URL for the cluster"
  value       = module.eks.cluster_oidc_issuer_url
}

output "oidc_provider_arn" {
  description = "The ARN of the OIDC Provider"
  value       = module.eks.oidc_provider_arn
}

################################################################################
# VPC Info
################################################################################
output "vpc_id" {
  description = "The ID of the VPC"
  value       = module.vpc.vpc_id
}
output "vpc_subnets" {
  description = "List of public subnet IDs"
  value       = module.vpc.public_subnets
}
################################################################################
# Kubernetes Config
################################################################################

output "configure_kubectl" {
  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
}
diff --git a/streaming/redpanda/providers.tf b/streaming/redpanda/providers.tf
new file mode 100644
index 000000000..67f03a4d6
--- /dev/null
+++ b/streaming/redpanda/providers.tf
@@ -0,0 +1,27 @@
+provider "aws" {
  region = local.region
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  #token = data.aws_eks_cluster_auth.this.token
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    command     = "aws"
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    #token = data.aws_eks_cluster_auth.this.token
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
      command     = "aws"
    }
  }
}
diff --git a/streaming/redpanda/redpanda.png b/streaming/redpanda/redpanda.png
new file mode 100644
index 000000000..fb9ed501a
Binary files /dev/null and b/streaming/redpanda/redpanda.png differ
diff --git a/streaming/redpanda/variables.tf b/streaming/redpanda/variables.tf
new file mode 100644
index 000000000..538bc902a
--- /dev/null
+++ b/streaming/redpanda/variables.tf
@@ -0,0 +1,53 @@
+variable "name" {
  description = "Name for resources created - doeks-redpanda"
  default     = "doeks-redpanda"
  type        = string
}
variable "region" {
  description = "Default Region"
  default     = "us-west-2"
  type        = string
}
variable "eks_cluster_version" {
  description = "EKS Cluster Version"
  default     = "1.28"
  type        = string
}
variable "vpc_cidr" {
  description = "VPC CIDR"
  default     = "172.16.0.0/16"
  type        = string
}
variable "public_subnets" {
  description = "Public Subnets with 126 IPs"
  default     = ["172.16.255.0/25", "172.16.255.128/25"]
  type        = list(string)
}

variable "private_subnets" {
  description = "Private Subnets with 510 IPs"
  default     = ["172.16.0.0/23", "172.16.2.0/23"]
  type        = list(string)
}
#---------------------------------------
# Prometheus Enable
#---------------------------------------
variable "enable_amazon_prometheus" {
  description = "Enable AWS Managed Prometheus service"
  type        = bool
  default     = true
}

#---------------------------------------
# Redpanda Config
#---------------------------------------
variable "redpanda_username" {
  default     = "superuser"
  description = "Default Super Username for Redpanda deployment"
  type        = string
}
variable "redpanda_domain" {
  default     = "customredpandadomain.local"
  description = "Redpanda Custom Domain"
  type        = string
}
diff --git a/streaming/redpanda/versions.tf b/streaming/redpanda/versions.tf
new file mode 100644
index 000000000..3de7838a6
--- /dev/null
+++ b/streaming/redpanda/versions.tf
@@ -0,0 +1,23 @@
+terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.72"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.10"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.5.0"
    }
  }

}
diff --git a/streaming/redpanda/vpc.tf b/streaming/redpanda/vpc.tf
new file mode 100644
index 000000000..3fe64112e
--- /dev/null
+++ b/streaming/redpanda/vpc.tf
@@ -0,0 +1,31 @@
+################################################################################
# VPC
################################################################################

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = ">= 4.0.0"

  name            = local.name
  cidr            = local.vpc_cidr
  azs             = local.azs
  public_subnets  = var.public_subnets
  private_subnets = var.private_subnets

  enable_nat_gateway = true
  single_nat_gateway = true

  public_subnet_tags = {
    # Tags for external ELB
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    # Tags for internal ELB
    "kubernetes.io/role/internal-elb" = 1
    # Tags subnets for Karpenter auto-discovery
    "karpenter.sh/discovery" = local.name
  }

  tags = local.tags
}