upgrade to large nodes (#25)
* remove node selectors - to allow switching node pools
* added dry run before upgrade, update deployment docs, allow to run more charts on test environment
OriHoch authored Feb 27, 2018
1 parent 9404e2f commit 5331b32
Showing 24 changed files with 171 additions and 52 deletions.
12 changes: 7 additions & 5 deletions .travis.yml
@@ -24,11 +24,13 @@ script:
if ./run_docker_ops.sh "${K8S_ENVIRONMENT_NAME}" "
RES=0;
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh && chmod 700 get_helm.sh && ./get_helm.sh;
! ./helm_upgrade.sh && echo 'failed helm upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh spark && echo 'failed spark upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh volunteers && echo 'failed volunteers upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh bi && echo 'failed bi upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh profiles && echo 'failed profiles upgrade' && RES=1;
if ./helm_upgrade_all.sh --install --dry-run --debug; then
echo "Dry run was successful, performing upgrades"
! ./helm_upgrade_all.sh --install && echo "Failed upgrade" && RES=1
else
echo "Failed dry run"
RES=1
fi
sleep 2;
kubectl get pods --all-namespaces;
kubectl get service --all-namespaces;
26 changes: 19 additions & 7 deletions README.md
@@ -71,26 +71,38 @@ kubectl create -f rbac-config.yaml
helm init --service-account tiller --upgrade --force-upgrade --history-max 1
```

Deploy:
To do a full installation of all charts (after a dry run):

```
./helm_upgrade.sh
./helm_upgrade_all.sh --install --dry-run && ./helm_upgrade_all.sh --install
```

When the helm upgrade command completes successfully it doesn't necessarily mean the deployment is complete (although it often does) - it only updates the desired state.
If you are working on a specific chart you can deploy only that chart:

Kubernetes / Helm have a desired state of the infrastructure and they will do their best to move to that state.
```
./helm_upgrade_external_chart.sh CHART_NAME
```

You can add arguments to `./helm_upgrade.sh` which are forwarded to the underlying `helm upgrade` command.
The root infrastructure chart is deployed by:

Check [the Helm documentation](https://docs.helm.sh/) for more details.
```
./helm_upgrade.sh
```

All the helm upgrade commands use the underlying `helm upgrade` command.

Some useful arguments:
All additional arguments are forwarded; some useful arguments:

* For initial installation you should add `--install`
* Depending on the changes you might need to add `--recreate-pods` or `--force`
* For debugging you can also use `--debug` and `--dry-run`

When the helm upgrade command completes successfully it doesn't necessarily mean the deployment is complete (although it often does) - it only updates the desired state.

Kubernetes / Helm have a desired state of the infrastructure and they will do their best to move to that state.

Check [the Helm documentation](https://docs.helm.sh/) for more details.

Additionally, you can use `force_update.sh` to force an update on a specific deployment.

If deployment fails, you might need to forcefully reinstall tiller; you can delete the deployment and reinstall
2 changes: 0 additions & 2 deletions charts-external/dreams/templates/dreams.yaml
@@ -21,8 +21,6 @@ spec:
labels:
app: dreams
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: dreams
image: {{ .Values.image | quote }}
2 changes: 0 additions & 2 deletions charts-external/dreams/templates/dreamsdb.yaml
@@ -21,8 +21,6 @@ spec:
labels:
app: dreamsdb
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: dreamsdb
image: "postgres"
2 changes: 0 additions & 2 deletions charts-external/spark/templates/spark-drupal-sync.yaml
@@ -10,8 +10,6 @@ spec:
labels:
app: spark-drupal-sync
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: spark
image: {{ .Values.image | default "orihoch/spark:modernize-dockerize" | quote }}
4 changes: 2 additions & 2 deletions charts-external/spark/templates/spark.yaml
@@ -21,8 +21,6 @@ spec:
labels:
app: spark
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: migrations
image: {{ .Values.image | default "orihoch/spark:modernize-dockerize" | quote }}
@@ -99,6 +97,7 @@ spec:
{{ if .Values.drupalSecretName }}
- {"name": "DRUPAL_PROFILE_API_PASSWORD", "valueFrom": {"secretKeyRef": {"name": {{ .Values.drupalSecretName | quote }}, "key": "DRUPAL_PROFILE_API_PASSWORD"}}}
{{ end }}
{{ if .Values.enableSecrets }}
- name: SPARK_SECRET_TOKEN
valueFrom:
secretKeyRef:
@@ -111,6 +110,7 @@ spec:
key: SLACK_DEPLOY_WEBHOOK
- {"name":"AWS_ACCESS_KEY_ID", "valueFrom": {"secretKeyRef": {"name": "spark-camp-files", "key": "AWS_ACCESS_KEY"}}}
- {"name": "AWS_SECRET_ACCESS_KEY", "valueFrom": {"secretKeyRef": {"name": "spark-camp-files", "key": "AWS_SECRET_KEY"}}}
{{ end }}
readinessProbe:
exec:
command:
2 changes: 0 additions & 2 deletions charts-external/spark/templates/sparkdb.yaml
@@ -21,8 +21,6 @@ spec:
labels:
app: sparkdb
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: sparkdb
image: mariadb:10.0
2 changes: 0 additions & 2 deletions charts-external/volunteers/templates/mongo-express.yaml
@@ -20,8 +20,6 @@ spec:
labels:
app: volunteersmongoexpress
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: mongoexpress
image: mongo-express
4 changes: 2 additions & 2 deletions charts-external/volunteers/templates/volunteers.yaml
@@ -21,8 +21,6 @@ spec:
labels:
app: volunteers
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: volunteers
image: {{ .Values.image | quote }}
@@ -40,11 +38,13 @@ spec:
value: mongodb://volunteersdb/volunteers
- name: SPARK_HOST
value: {{ .Values.SPARK_HOST | quote }}
{{ if .Values.enableSecrets }}
- name: SECRET
valueFrom:
secretKeyRef:
name: spark-secret-token
key: SPARK_SECRET_TOKEN
{{ end }}
- name: JWT_KEY
value: authToken
{{ end }}
2 changes: 0 additions & 2 deletions charts-external/volunteers/templates/volunteersdb.yaml
@@ -21,8 +21,6 @@ spec:
labels:
app: volunteersdb
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: volunteersdb
image: mongo
16 changes: 6 additions & 10 deletions environments/ori/values.yaml
@@ -1,7 +1,6 @@
# minimal values files for testing - enable just what you need and copy required values from staging environment

global:
defaultNodePool: default-pool
k8sOpsSecretName: ops
k8sOpsImage: gcr.io/uumpa123/midburn-k8s
environmentName: ori
@@ -15,28 +14,25 @@ global:
persistentStorageIP: 10.128.0.7

traefik:
enabled: false
enabled: true
enableLoadBalancer: false
profilesHostsRule: ""

spark:
enabled: true
enableDeployment: false
# see Shared / Persistent Storage section in the README to prepare the storage / import existing data
persistentStorageName: sparkdb
enableDeployment: true
enableSecrets: false

nginx:
enabled: true
htpasswdSecretName: nginx-htpasswd

volunteers:
enabled: false
enabled: true
enableSecrets: false

profiles:
enabled: false
# dbImportUrl: gs://midburn-k8s-backups/profiles-db-production-dump-2018-01-16-11-30.sql
# NO trailing slash!
# drupalBaseUrl: http://localhost
enabled: true

bi:
enabled: true
3 changes: 1 addition & 2 deletions environments/production/values.yaml
@@ -1,5 +1,4 @@
global:
defaultNodePool: default-pool
k8sOpsSecretName: ops
k8sOpsImage: gcr.io/uumpa123/midburn-k8s
environmentName: production
@@ -100,7 +99,7 @@ profiles:

bi:
enabled: true
persistentStorageName:
persistentStorageName: metabase

dreams:
enabled: false
1 change: 0 additions & 1 deletion environments/staging/values.yaml
@@ -1,5 +1,4 @@
global:
defaultNodePool: default-pool
k8sOpsSecretName: ops
k8sOpsImage: gcr.io/uumpa123/midburn-k8s
environmentName: staging
68 changes: 68 additions & 0 deletions helm_healthcheck.sh
@@ -0,0 +1,68 @@
#!/usr/bin/env bash

source connect.sh

RES=0

echo "Performing health checks for all charts of ${K8S_ENVIRONMENT_NAME} environment"

root_healthcheck() {
! [ "`./read_env_yaml.sh global enableRootChart`" == "true" ] \
&& echo "root chart is disabled, skipping healthcheck" && return 0
kubectl rollout status deployment/adminer --watch=false &&\
kubectl rollout status deployment/nginx --watch=false &&\
kubectl rollout status deployment/traefik --watch=false
}

spark_healthcheck() {
! [ "`./read_env_yaml.sh spark enabled`" == "true" ] \
&& echo "spark is disabled, skipping healthcheck" && return 0
kubectl rollout status deployment/spark --watch=false &&\
kubectl rollout status deployment/sparkdb --watch=false
}

volunteers_healthcheck() {
! [ "`./read_env_yaml.sh volunteers enabled`" == "true" ] \
&& echo "volunteers is disabled, skipping healthcheck" && return 0
kubectl rollout status deployment/volunteers --watch=false &&\
kubectl rollout status deployment/volunteersdb --watch=false

}

bi_healthcheck() {
! [ "`./read_env_yaml.sh bi enabled`" == "true" ] \
&& echo "bi is disabled, skipping healthcheck" && return 0
kubectl rollout status deployment/metabase --watch=false

}

profiles_healthcheck() {
! [ "`./read_env_yaml.sh profiles enabled`" == "true" ] \
&& echo "profiles is disabled, skipping healthcheck" && return 0
kubectl rollout status deployment/profiles-drupal --watch=false &&\
kubectl rollout status deployment/profiles-db --watch=false
}

chatops_healthcheck() {
! [ "`./read_env_yaml.sh chatops enabled`" == "true" ] \
&& echo "chatops is disabled, skipping healthcheck" && return 0
kubectl rollout status deployment/chatops --watch=false
}

dreams_healthcheck() {
! [ "`./read_env_yaml.sh dreams enabled`" == "true" ] \
&& echo "dreams is disabled, skipping healthcheck" && return 0
kubectl rollout status deployment/dreams --watch=false
}

! root_healthcheck && echo failed root healthcheck && RES=1;
! spark_healthcheck && echo failed spark healthcheck && RES=1;
! volunteers_healthcheck && echo failed volunteers healthcheck && RES=1;
! bi_healthcheck && echo failed bi healthcheck && RES=1;
! profiles_healthcheck && echo failed profiles healthcheck && RES=1;
! chatops_healthcheck && echo failed chatops healthcheck && RES=1;
! dreams_healthcheck && echo failed dreams healthcheck && RES=1;

[ "${RES}" == "0" ] && echo Great Success!

exit $RES
19 changes: 19 additions & 0 deletions helm_remove_all.sh
@@ -0,0 +1,19 @@
#!/usr/bin/env bash

source connect.sh

RES=0

echo "Removing all charts of ${K8S_ENVIRONMENT_NAME} environment"

[ "${1}" != "--approve" ] && read -p 'Press <Enter> to continue...'

! helm delete --purge "${K8S_HELM_RELEASE_NAME}-${K8S_ENVIRONMENT_NAME}" && RES=1
! helm delete --purge "${K8S_HELM_RELEASE_NAME}-spark-${K8S_ENVIRONMENT_NAME}" && RES=1
! helm delete --purge "${K8S_HELM_RELEASE_NAME}-volunteers-${K8S_ENVIRONMENT_NAME}" && RES=1
! helm delete --purge "${K8S_HELM_RELEASE_NAME}-bi-${K8S_ENVIRONMENT_NAME}" && RES=1
! helm delete --purge "${K8S_HELM_RELEASE_NAME}-profiles-${K8S_ENVIRONMENT_NAME}" && RES=1
! helm delete --purge "${K8S_HELM_RELEASE_NAME}-chatops-${K8S_ENVIRONMENT_NAME}" && RES=1
! helm delete --purge "${K8S_HELM_RELEASE_NAME}-dreams-${K8S_ENVIRONMENT_NAME}" && RES=1

exit $RES
17 changes: 17 additions & 0 deletions helm_upgrade_all.sh
@@ -0,0 +1,17 @@
#!/usr/bin/env bash

source connect.sh

RES=0

echo "Upgrading all charts of ${K8S_ENVIRONMENT_NAME} environment"

! ./helm_upgrade.sh "$@" && echo 'failed helm upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh spark "$@" && echo 'failed spark upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh volunteers "$@" && echo 'failed volunteers upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh bi "$@" && echo 'failed bi upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh profiles "$@" && echo 'failed profiles upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh chatops "$@" && echo 'failed chatops upgrade' && RES=1;
! ./helm_upgrade_external_chart.sh dreams "$@" && echo 'failed dreams upgrade' && RES=1;

exit $RES
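The scripts above share a single bash idiom: `! step && echo '...' && RES=1` runs every step regardless of earlier failures, while `RES` remembers whether anything failed, so one broken chart does not block the others. A minimal self-contained sketch of the idiom (the step commands here are stand-ins, not the repo scripts):

```shell
#!/usr/bin/env bash
RES=0

# `! step` is true only when the step fails, so the echo and RES=1
# run only on failure -- and execution always continues to the next step.
! true  && echo 'failed first step'  && RES=1
! false && echo 'failed second step' && RES=1

echo "final result: ${RES}"
```

Running this prints `failed second step` followed by `final result: 1`; the first step's failure branch never fires.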
2 changes: 1 addition & 1 deletion helm_upgrade_external_chart.sh
@@ -33,7 +33,7 @@ done

VALUES=`cat "${TEMPDIR}/values.yaml"`

if [ `./read_yaml.py "${TEMPDIR}/values.yaml" enabled` == "true" ]; then
if [ "`./read_yaml.py "${TEMPDIR}/values.yaml" enabled 2>/dev/null`" == "true" ]; then
CMD="helm upgrade -f ${TEMPDIR}/values.yaml ${RELEASE_NAME} ${CHART_DIRECTORY} ${@:2}"
if ! $CMD; then
echo
23 changes: 23 additions & 0 deletions read_env_yaml.sh
@@ -0,0 +1,23 @@
#!/usr/bin/env bash

source connect.sh >/dev/null

ENV_AUTO_VALUE="`./read_yaml.py environments/${K8S_ENVIRONMENT_NAME}/values.auto-updated.yaml "$@" 2>/dev/null`"
if [ "${ENV_AUTO_VALUE}" != "" ] && [ "${ENV_AUTO_VALUE}" != "{}" ]; then
echo "${ENV_AUTO_VALUE}"
exit 0
fi

ENV_MAN_VALUE="`./read_yaml.py environments/${K8S_ENVIRONMENT_NAME}/values.yaml "$@" 2>/dev/null`"
if [ "${ENV_MAN_VALUE}" != "" ] && [ "${ENV_MAN_VALUE}" != "{}" ]; then
echo "${ENV_MAN_VALUE}"
exit 0
fi

ROOT_VALUE="`./read_yaml.py ./values.yaml "$@" 2>/dev/null`"
if [ "${ROOT_VALUE}" != "" ] && [ "${ROOT_VALUE}" != "{}" ]; then
echo "${ROOT_VALUE}"
exit 0
fi

exit 1
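`read_env_yaml.sh` is a fallback chain: it tries the environment's auto-updated values, then the environment's manual values, then the root `values.yaml`, treating an empty string or `{}` as "missing". The lookup order can be sketched as a self-contained function (the function name and candidate values are illustrative):

```shell
#!/usr/bin/env bash
# Return the first candidate that is neither empty nor "{}";
# exit with status 1 if every candidate is missing.
first_present() {
    for candidate in "$@"; do
        if [ "${candidate}" != "" ] && [ "${candidate}" != "{}" ]; then
            echo "${candidate}"
            return 0
        fi
    done
    return 1
}

# auto-updated value missing, manual value missing, root value wins
first_present "" "{}" "default-from-root"
```

A candidate earlier in the chain always shadows the ones after it, exactly as the three `exit 0` blocks in the script do.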
2 changes: 2 additions & 0 deletions read_yaml.py
@@ -7,6 +7,8 @@
values = yaml.load(f)

def get_from_dict(values, keys):
if len(keys) < 1:
return '{}'
if len(keys) > 1:
return get_from_dict(values[keys[0]], keys[1:])
else:
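The two added lines guard `get_from_dict` against an empty key list. Reconstructed from the visible hunk (the final `else` branch is inferred, since the diff is truncated), the function looks roughly like:

```python
def get_from_dict(values, keys):
    # Added in this commit: with no keys left to look up, return '{}',
    # which callers like read_env_yaml.sh treat as "value missing".
    if len(keys) < 1:
        return '{}'
    if len(keys) > 1:
        # Walk one level deeper into the nested dict per key
        return get_from_dict(values[keys[0]], keys[1:])
    else:
        return values[keys[0]]

print(get_from_dict({"spark": {"enabled": True}}, ["spark", "enabled"]))  # True
```

This is what makes `./read_env_yaml.sh` safe to call with no key arguments: the `'{}'` sentinel is filtered out by the shell-side fallback checks.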
2 changes: 0 additions & 2 deletions templates/adminer.yaml
@@ -20,8 +20,6 @@ spec:
labels:
app: adminer
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: adminer
image: adminer
2 changes: 0 additions & 2 deletions templates/nginx.yaml
@@ -24,8 +24,6 @@ spec:
# update the pod on nginx-conf changes
checksum/config: {{ include (print $.Template.BasePath "/nginx-conf.yaml") . | sha256sum }}
spec:
nodeSelector:
cloud.google.com/gke-nodepool: {{ .Values.global.defaultNodePool | quote }}
containers:
- name: nginx
image: nginx:alpine