
ozyab09_microservices

ozyab09 microservices repository

Homework 25 (kubernetes-5)

Build Status

Preparation

A k8s cluster should already be deployed:

  β€’ at least 2 g1-small nodes (1.5 GB)
  β€’ at least 1 n1-standard-2 node (7.5 GB)

In the cluster settings:

  β€’ Stackdriver Logging - Disabled
  β€’ Stackdriver Monitoring - Disabled
  β€’ Legacy authorization - Enabled

Since the cluster was created from scratch, reinstall Tiller: $ kubectl apply -f kubernetes/reddit/tiller.yml

Start the tiller server: $ helm init --service-account tiller

Check: $ kubectl get pods -n kube-system --selector app=helm

Install the nginx ingress controller from its Helm chart: $ helm install stable/nginx-ingress --name nginx

Find the assigned IP address:

$ kubectl get svc
NAME                                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
kubernetes                            ClusterIP      10.47.240.1    <none>          443/TCP                      6m
nginx-nginx-ingress-controller        LoadBalancer   10.47.255.40   35.197.107.27   80:30949/TCP,443:31982/TCP   1m
nginx-nginx-ingress-default-backend   ClusterIP      10.47.251.3    <none>          80/TCP                       1m

Add to /etc/hosts: # echo 35.197.107.27 reddit reddit-prometheus reddit-grafana reddit-non-prod production reddit-kibana staging prod > /etc/hosts

Plan

  β€’ Deploy Prometheus in k8s
  β€’ Configure Prometheus and Grafana to collect metrics
  β€’ Configure EFK to collect logs

Monitoring

We will use the following tools:

  β€’ prometheus - metrics collection and alerting server
  β€’ grafana - metrics visualization server
  β€’ alertmanager - the prometheus component responsible for alerting
  β€’ various exporters for prometheus metrics

Prometheus is a great fit for containers and dynamically scheduled services.

How it works: Monitoring Pipeline

Installing Prometheus

We will install Prometheus from its Helm chart. Download prometheus locally into the Charts directory:

$ cd kubernetes/charts
$ helm fetch --untar stable/prometheus

Create a custom_values.yml file inside the chart directory.

The main differences from values.yml:

  β€’ part of the bundled services is disabled (pushgateway, alertmanager, kube-state-metrics)
  β€’ Ingress creation is enabled so the service can be reached through nginx
  β€’ the endpoint for collecting cadvisor metrics is fixed
  β€’ the metrics scrape interval is reduced (from 1 minute to 30 seconds)

Launch Prometheus in k8s:

$ cd kubernetes/charts/prometheus
$ helm upgrade prom . -f custom_values.yml --install
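
A quick way to confirm the release came up (a sketch; the release label follows the conventions of the stable/prometheus chart and the release name prom used above):

$ helm ls prom
$ kubectl get pods -l release=prom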

Open http://reddit-prometheus/ and go to the Targets section. reddit-prometheus

Targets

We already have a number of endpoints being scraped:

  β€’ API server metrics
  β€’ node metrics via cadvisor
  β€’ prometheus itself

Targets

Note that cadvisor metrics (cadvisor is already part of the kubelet) can be collected through a proxying request to the kube-api-server.

If you ssh into any cluster machine and run $ curl http://localhost:4194/metrics, you get the same metrics from the kubelet directly.

The kube-api variant is preferable, though, since that traffic is TLS-encrypted and requires authentication.

The scrape targets are discovered via service discovery (SD), configured in the prometheus config (which lives in custom_values.yml):

 prometheus.yml:
 ...
   - job_name: 'kubernetes-apiservers'    # kubernetes-apiservers (1/1 up)
 ...
   - job_name: 'kubernetes-nodes'         # kubernetes-nodes (3/3 up)
     kubernetes_sd_configs:               # Service Discovery settings (how targets are found)
     - role: node
     scheme: https                        # Settings for connecting to the targets (to scrape metrics)
     tls_config:
       ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
       insecure_skip_verify: true
     bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
     relabel_configs:                     # Relabeling: filtering the discovered targets and modifying their labels

Using SD in kubernetes lets us change the cluster dynamically (the hosts as well as the services and applications). Monitoring targets are discovered through queries to the k8s API:

custom_values.yml

...
 prometheus.yml:
...
    scrape_configs:
      - job_name: 'kubernetes-nodes'
        kubernetes_sd_configs:
          - role: node

Role is the kind of object to discover:

  • node
  • endpoints
  • pod
  • service
  • ingress

Prometheus discovering nodes: Prometheus service discovery

Since prometheus scrapes metrics over the standard HTTP protocol, additional settings may be needed for secure access to the metrics.

Below are the settings for scraping metrics from the k8s API

custom_values.yml

...
scheme: https  # Connection scheme - http (default) or https
tls_config:    # TLS config - the server's root certificate used to verify the server
  ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token  # Token for authenticating to the server
custom_values.yml

...
# Kubernetes nodes
relabel_configs:    # map all k8s labels of the target to prometheus labels
- action: labelmap
  regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__   # change the label holding the scrape address
  replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
  regex: (.+)
  target_label: __metrics_path__    # change the label holding the metrics path
  replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
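
The same cadvisor metrics that Prometheus scrapes through the apiserver proxy can be requested manually, which is handy for debugging the path above (a sketch; substitute a real node name from kubectl get nodes):

$ kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics/cadvisor | head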

Metrics

All metrics discovered at the endpoints immediately show up in the list (Graph tab). cAdvisor metrics start with container_.

cAdvisor Metrics

cAdvisor only collects information about resource consumption and performance of individual docker containers. It knows nothing about k8s entities (deployments, replica sets, ...).

To collect that information we will use the kube-state-metrics service. It is part of the Prometheus chart, so let's enable it.

prometheus/custom_values.yml 

...
kubeStateMetrics:
 ## If false, kube-state-metrics will not be installed
 ##
 enabled: true 

Update the release: $ helm upgrade prom . -f custom_values.yml --install

In Targets: Targets After Update

In Graph: Graph After Update

Like kube_state_metrics, enable (enabled: true) the node-exporter pods in custom_values.yml:

prometheus/custom_values.yml 

...
nodeExporter:
  enabled: true

Update the release: $ helm upgrade prom . -f custom_values.yml --install

Verify that metrics are now being collected from them. Node-exporter metrics
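
A quick sanity check in the Prometheus Graph tab (a sketch; node_load1 is a standard node-exporter metric, and the up series shows which targets are currently scraped):

count(up == 1) by (job)
node_load1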

Launch the application from the reddit helm chart:

$ cd kubernetes/charts
$ helm upgrade reddit-test ./reddit --install
$ helm upgrade production --namespace production ./reddit --install
$ helm upgrade staging --namespace staging ./reddit --install

Previously we hard-coded the addresses/DNS names of our applications in order to scrape metrics from them.

prometheus.yml

 - job_name: 'ui'
   static_configs:
     - targets:
       - 'ui:9292'

- job_name: 'comment'
  static_configs:
    - targets:
      - 'comment:9292'

Now we can use the ServiceDiscovery mechanism to find applications running in k8s.

We will discover applications the same way as the k8s system services. Update the prometheus config:

custom_values.yml

  - job_name: 'reddit-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_label_app]
        action: keep  # Use the keep action to keep only endpoints of services labeled "app=reddit"
        regex: reddit

Update the prometheus release: $ helm upgrade prom . -f custom_values.yml --install SD applying for reddit-endpoints discovery

We now have the endpoints, but we don't know which pods they belong to. Let's add the k8s labels. All k8s labels and annotations are initially exposed in prometheus in the form:

__meta_kubernetes_service_label_labelname
__meta_kubernetes_service_annotation_annotationname 
custom_values.yml

  relabel_configs:
    - action: labelmap  # Map all regex group matches into Prometheus labels
      regex: __meta_kubernetes_service_label_(.+)

Update the prometheus release: $ helm upgrade prom . -f custom_values.yml --install

Now we can see the k8s labels assigned to the PODs: K8s PODs labels

Add more labels for prometheus and update the helm release. Since __meta* labels are not published, we need to create our own labels and copy the information into them:

custom_values.yml

...
- source_labels: [__meta_kubernetes_namespace]
  target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
  target_label: kubernetes_name
...

Update the prometheus release: Added labels for prometheus

Right now we collect metrics from all reddit services in a single group of targets. We can separate the targets of the components from each other (by environment, by component) and switch monitoring on and off for them using the same labels. For example, add one more job to the config:

custom_values.yml

...
- job_name: 'reddit-production'
   kubernetes_sd_configs:
     - role: endpoints
   relabel_configs:
     - action: labelmap
       regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_namespace]  # For different labels
        action: keep
        regex: reddit;(production|staging)+                                                # different regexes
     - source_labels: [__meta_kubernetes_namespace]
       target_label: kubernetes_namespace
     - source_labels: [__meta_kubernetes_service_name]
       target_label: kubernetes_name
...

Update the prometheus release and check the result.

Metrics will be shown for all application instances.

Split the reddit-endpoints job configuration into 3 jobs, one per application component (post-endpoints, comment-endpoints, ui-endpoints), and remove reddit-endpoints:

custom_values.yml

...
      - job_name: 'post-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_component,__meta_kubernetes_namespace]
            action: keep
            regex: post;(production|staging)+
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            target_label: kubernetes_name

      - job_name: 'ui-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_component,__meta_kubernetes_namespace]
            action: keep
            regex: ui;(production|staging)+
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            target_label: kubernetes_name

      - job_name: 'comment-endpoints'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_component,__meta_kubernetes_namespace]
            action: keep
            regex: comment;(production|staging)+
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            target_label: kubernetes_name
...

Split configuration

Visualization

We will also install grafana with helm:

helm upgrade --install grafana stable/grafana --set "adminPassword=admin" \
  --set "service.type=NodePort" \
  --set "ingress.enabled=true" \
  --set "ingress.hosts={reddit-grafana}"

Open http://reddit-grafana/

Grafana welcome screen

Add a prometheus data source: Adding prometheus datasource

Find the address from the name of the prometheus server service:

$ kubectl get svc
NAME                                  TYPE          CLUSTER-IP    EXTERNAL-IP   PORT(S)                     AGE
grafana-grafana                       NodePort      10.11.252.216 <none>        80:31886/TCP                22m
kubernetes                            ClusterIP     10.11.240.1   <none>        443/TCP                     22d
nginx-nginx-ingress-controller        LoadBalancer  10.11.243.76  104.154.94.52 80:32293/TCP,443:30193/TCP  7h
nginx-nginx-ingress-default-backend   ClusterIP     10.11.248.132 <none>        80/TCP                      7h
prom-prometheus-server                LoadBalancer  10.11.247.75  35.224.121.85 80:30282/TCP                4d
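
Judging by the service list above, Grafana (running in the same cluster) can reach Prometheus by service name. A sketch of the data source settings (this assumes the prom release name used earlier; adjust if your release is named differently):

Name: prometheus
Type: Prometheus
URL:  http://prom-prometheus-server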

Add the most popular dashboard for tracking the state of k8s resources. Select the datasource: Selecting a Prometheus data source

Grafana Dashboard: Kubernetes cluster monitoring

Add our own dashboards created earlier (in the monitoring homework). They should display data as well:

Grafana Dashboard: Docker and system monitoring

Templating

At the moment the application graphs show metric values from all sources at once. With many environments that change dynamically, it makes sense to make the configuration of our Grafana dashboards dynamic and convenient as well.

In our case this can be done with the templating mechanism. Dashboard Templating

We now have a drop-down with the values of the namespace variable. So far it is useless; for it to have any effect we need to templatize the queries to Prometheus.
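
A templated query for such a panel could look like this (a sketch; $namespace is the template variable created above, container_cpu_usage_seconds_total comes from cAdvisor, and the exact label name for the namespace may differ depending on your relabeling):

sum(rate(container_cpu_usage_seconds_total{namespace=~"$namespace"}[1m])) by (pod_name)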

Dashboard Templating

Π’Π΅ΠΏΠ΅Ρ€ΡŒ ΠΌΡ‹ ΠΌΠΎΠΆΠ΅ΠΌ Π½Π°ΡΡ‚Ρ€Π°ΠΈΠ²Π°Ρ‚ΡŒ ΠΎΠ±Ρ‰ΠΈΠ΅ ΡˆΠ°Π±Π»ΠΎΠ½Ρ‹ Π³Ρ€Π°Ρ„ΠΈΠΊΠΎΠ² ΠΈ с ΠΏΠΎΠΌΠΎΡ‰ΡŒΡŽ ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Ρ… ΠΌΠ΅Π½ΡΡ‚ΡŒ Π² Π½ΠΈΡ… Π½ΡƒΠΆΠ½Ρ‹Π΅ Π½Π°ΠΌ поля (Π² нашСм случаС это namespace). Dashboard Templating: All Namespaces Dashboard Templating: Only Production Namespace

ΠŸΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€ΠΈΠ·ΡƒΠ΅ΠΌ всС Dashboard’ы, ΠΎΡ‚Ρ€Π°ΠΆΠ°ΡŽΡ‰ΠΈΠ΅ ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Ρ‹ Ρ€Π°Π±ΠΎΡ‚Ρ‹ прилоТСния (созданныС Π² ΠΏΡ€Π΅Π΄Ρ‹Π΄ΡƒΡ‰ΠΈΡ… Π”Π—) reddit для Ρ€Π°Π±ΠΎΡ‚Ρ‹ с нСсколькими окруТСниями (нСймспСйсами).

Π‘ΠΌΠ΅ΡˆΠ°Π½Π½Ρ‹Π΅ Π³Ρ€Π°Ρ„ΠΈΠΊΠΈ

Π˜ΠΌΠΏΠΎΡ€Ρ‚ΠΈΡ€ΡƒΠ΅ΠΌ Π΄Π°ΡˆΠ±ΠΎΠ°Ρ€Π΄: https://grafana.com/dashboards/741.

На этом Π³Ρ€Π°Ρ„ΠΈΠΊΠ΅ ΠΎΠ΄Π½ΠΎΠ²Ρ€Π΅ΠΌΠ΅Π½Π½ΠΎ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΡŽΡ‚ΡΡ ΠΌΠ΅Ρ‚Ρ€ΠΈΠΊΠΈ ΠΈ ΡˆΠ°Π±Π»ΠΎΠ½Ρ‹ ΠΈΠ· cAdvisor, ΠΈ ΠΈΠ· kube-state-metrics для отобраТСния сводной ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΠΈ ΠΏΠΎ Π΄Π΅ΠΏΠ»ΠΎΠΉΠΌΠ΅Π½Ρ‚Π°ΠΌ.

Kubernetes Deployment metrics dashboards

Homework 24 (kubernetes-4)

Build Status

Helm

Helm is the package manager for Kubernetes. With it we will:

  1. Standardize application delivery to Kubernetes
  2. Declare the infrastructure
  3. Deploy new application versions

Helm is a client-server application. Install its client part, the Helm console client:

$ brew install kubernetes-helm

Helm reads the kubectl configuration (~/.kube/config) and determines the current context (cluster, user, namespace). To switch clusters use $ kubectl config use-context, or point helm at a different context with the --kube-context flag.

Install the Helm server part, Tiller. Tiller is a Kubernetes add-on, i.e. a Pod that talks to the Kubernetes API. For that it needs a ServiceAccount and the RBAC roles required for its work.

Create tiller.yml and put the manifest into it:

tiller.yml 

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Apply it: $ kubectl apply -f tiller.yml

Start the tiller server: $ helm init --service-account tiller

Check: $ kubectl get pods -n kube-system --selector app=helm

output

NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-689d79895f-tmhlp   1/1     Running   0          51s

Charts

A Chart is a package in Helm.

Create a Charts directory inside the kubernetes folder with the following structure:

└─ Charts
   β”œβ”€ comment
   β”œβ”€ post
   β”œβ”€ reddit
   └─ ui

Let's start developing the Chart for the application's ui component.

Create the chart description file:

ui/Chart.yaml

---
name: ui
version: 1.0.0
description: OTUS reddit application UI
maintainers:
  - name: Vyacheslav Egorov
    email: 692677@mail.ru
appVersion: 1.0 

Π—Π½Π°Ρ‡ΠΈΠΌΡ‹ΠΌΠΈ ΡΠ²Π»ΡΡŽΡ‚ΡΡ поля name ΠΈ version. ΠžΡ‚ Π½ΠΈΡ… зависит Ρ€Π°Π±ΠΎΡ‚Π° Helm’а с Chart’ом. ΠžΡΡ‚Π°Π»ΡŒΠ½ΠΎΠ΅ - описания.

Templates

ΠžΡΠ½ΠΎΠ²Π½Ρ‹ΠΌ содСрТимым Chart’ов ΡΠ²Π»ΡΡŽΡ‚ΡΡ ΡˆΠ°Π±Π»ΠΎΠ½Ρ‹ манифСстов Kubernetes.

  1. Π‘ΠΎΠ·Π΄Π°Π΄ΠΈΠΌ Π΄ΠΈΡ€Π΅ΠΊΡ‚ΠΎΡ€ΠΈΡŽ ui/templates
  2. ΠŸΠ΅Ρ€Π΅Π½Π΅ΡΠ΅ΠΌ Π² Π½Π΅Ρ‘ всС манифСсты, Ρ€Π°Π·Ρ€Π°Π±ΠΎΡ‚Π°Π½Π½Ρ‹Π΅ Ρ€Π°Π½Π΅Π΅ для сСрвиса ui (ui-service, ui-deployment, ui-ingress)
  3. ΠŸΠ΅Ρ€Π΅ΠΈΠΌΠ΅Π½ΡƒΠ΅ΠΌ манифСсты (ΡƒΠ±Π΅Ρ€Π΅ΠΌ прСфикс β€œui-β€œ) ΠΈ помСняСм Ρ€Π°ΡΡˆΠΈΡ€Π΅Π½ΠΈΠ΅ Π½Π° .yaml) - стилистичСскиС ΠΏΡ€Π°Π²ΠΊΠΈ
└── ui
    β”œβ”€β”€ Chart.yaml
    └── templates
        β”œβ”€β”€ deployment.yaml
        β”œβ”€β”€ ingress.yaml
        └── service.yaml

Essentially, this is already a ready-to-install package for Kubernetes.

  1. Make sure no application components are deployed in kubernetes. If they are, delete them:
kubectl delete service ui -n dev
kubectl delete deploy ui -n dev
kubectl delete ingress ui -n dev
  2. Install the Chart: helm install --name test-ui-1 ui/

here test-ui-1 is the release name;

ui/ is the path to the Chart.

  3. See what we got: helm ls

output

NAME      REVISION  UPDATED                    STATUS   CHART     APP VERSION  NAMESPACE
test-ui-1 1         Wed Jan 30 21:38:50 2019   DEPLOYED ui-1.0.0  1            default

Now let's make it possible to use one Chart to run several instances (releases). Templatize it:

ui/templates/service.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    app: reddit
    component: ui
    release: {{ .Release.Name }}
spec:
  type: NodePort
  ports:
  - port: 9292
    protocol: TCP
    targetPort: 9292
  selector:
    app: reddit
    component: ui
    release: {{ .Release.Name }}
ui/templates/service.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}  # unique name of the deployed resource
  labels:
    app: reddit
    component: ui
    release: {{ .Release.Name }}  # mark that the service belongs to a specific release
spec:
  type: NodePort
  ports:
  - port: {{ .Values.service.externalPort }}
    protocol: TCP
    targetPort: 9292
  selector:
    app: reddit
    component: ui
    release: {{ .Release.Name }} # Select only PODs from this release

name: {{ .Release.Name }}-{{ .Chart.Name }}

Π—Π΄Π΅ΡΡŒ ΠΌΡ‹ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅ΠΌ встроСнныС ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Π΅:

  • .Release - Π³Ρ€ΡƒΠΏΠΏΠ° ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Ρ… с ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΠ΅ΠΉ ΠΎ Ρ€Π΅Π»ΠΈΠ·Π΅ (ΠΊΠΎΠ½ΠΊΡ€Π΅Ρ‚Π½ΠΎΠΌ запускС Chart’а Π² k8s)
  • .Chart - Π³Ρ€ΡƒΠΏΠΏΠ° ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Ρ… с ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΠ΅ΠΉ ΠΎ Chart’С (содСрТимоС Ρ„Π°ΠΉΠ»Π° Chart.yaml)

Π’Π°ΠΊΠΆΠ΅ Π΅Ρ‰Π΅ Π΅ΡΡ‚ΡŒ Π³Ρ€ΡƒΠΏΠΏΡ‹ ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Ρ…:

  • .Template - информация ΠΎ Ρ‚Π΅ΠΊΡƒΡ‰Π΅ΠΌ шаблонС (.Name ΠΈ .BasePath)
  • .Capabilities - информация ΠΎ Kubernetes (вСрсия, вСрсии API)
  • .Files.Get - ΠΏΠΎΠ»ΡƒΡ‡ΠΈΡ‚ΡŒ содСрТимоС Ρ„Π°ΠΉΠ»Π°

Templatize the remaining entities in the same way

ui/templates/deployment.yaml

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    app: reddit
    component: ui
    release: {{ .Release.Name }}
spec:
...
  selector:
    matchLabels:
      app: reddit       # It is important that the deployment's selector
      component: ui     # matches only the intended PODs
      release: {{ .Release.Name }}
  template:
    metadata:
      name: ui-pod
      labels:
        app: reddit
        component: ui
        release: {{ .Release.Name }}  
    spec:
      containers:
      - image: ozyab/ui
        name: ui
        ports:
        - containerPort: 9292
          name: ui
          protocol: TCP
        env:
        - name: ENV
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
ui/templates/ingress.yaml

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: {{ .Release.Name }}-{{ .Chart.Name }}
          servicePort: 9292

Install several ui releases:

$ helm install ui --name ui-1 
$ helm install ui --name ui-2
$ helm install ui --name ui-3

Π”ΠΎΠ»ΠΆΠ½Ρ‹ ΠΏΠΎΡΠ²ΠΈΡ‚ΡŒΡΡ 3 ингрСсса: $ kubectl get ingress

output

NAME      HOSTS   ADDRESS          PORTS   AGE
ui-1-ui   *       35.201.126.86    80      5m
ui-2-ui   *       35.201.67.17     80      1m
ui-3-ui   *       35.227.242.231   80      1m

We have already made it possible to run several application instances from one set of manifests using only built-in variables. Now let's make the installation configurable with our own variables (image and port).

ui/templates/deployment.yaml

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
...
    spec:
      containers:
      - image: "{{ .Values.image.repository }}/ui:{{ .Values.image.tag }}"
        name: ui
        ports:
        - containerPort:  {{ .Values.service.internalPort }} 
ui/templates/service.yaml

---
apiVersion: v1
kind: Service
metadata:
...
spec:
  type: NodePort
  ports:
  - port: {{ .Values.service.externalPort }} 
    protocol: TCP
    targetPort: {{ .Values.service.internalPort }} 
  selector:
    app: reddit
    component: ui
    release: {{ .Release.Name }}
ui/templates/ingress.yaml

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: {{ .Release.Name }}-{{ .Chart.Name }}
          servicePort: {{ .Values.service.externalPort }}

Define the values of our own variables in values.yaml:

ui/values.yaml

---
service:
  internalPort: 9292
  externalPort: 9292
image:
  repository: ozyab/ui
  tag: latest
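
Values from values.yaml can also be overridden per release at install or upgrade time (a sketch; the tag v2 is only an example value):

$ helm install ui/ --name ui-4 --set image.tag=v2 --set service.externalPort=80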

The services can now be updated:

helm upgrade ui-1 ui/ 
helm upgrade ui-2 ui/ 
helm upgrade ui-3 ui/ 

We have built a Chart for deploying the ui component of the application. It has the following structure:

└── ui
    β”œβ”€β”€ Chart.yaml
    β”œβ”€β”€ templates
    β”‚   β”œβ”€β”€ deployment.yaml
    β”‚   β”œβ”€β”€ ingress.yaml
    β”‚   └── service.yaml
    └── values.yaml 

Let's build the packages for the remaining components:

post/templates/service.yaml

---
apiVersion: v1
kind: Service
metadata:
  name:  {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    app: reddit
    component: post
    release: {{ .Release.Name }} 
spec:
  ports:
  - port:  {{ .Values.service.externalPort }} 
    protocol: TCP
    targetPort: {{ .Values.service.internalPort }} 
  selector:
    app: reddit
    component: post
    release: {{ .Release.Name }}
post/templates/deployment.yaml 

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    app: reddit
    component: post
    release: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reddit
      component: post
      release: {{ .Release.Name }}
  template:
    metadata:
      name: post
      labels:
        app: reddit
        component: post
        release: {{ .Release.Name }}
    spec:
      containers:
      - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        name: post
        ports:
        - containerPort: {{ .Values.service.internalPort }}
          name: post
          protocol: TCP
        env:
        - name: POST_DATABASE_HOST
          value: postdb

Note the database address:

env:
  - name: POST_DATABASE_HOST
    value: postdb

ΠŸΠΎΡΠΊΠΎΠ»ΡŒΠΊΡƒ адрСс Π‘Π” ΠΌΠΎΠΆΠ΅Ρ‚ ΠΌΠ΅Π½ΡΡ‚ΡŒΡΡ Π² зависимости ΠΎΡ‚ условий запуска:

  • Π±Π΄ ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½ΠΎ ΠΎΡ‚ кластСра
  • Π±Π΄ Π·Π°ΠΏΡƒΡ‰Π΅Π½ΠΎ Π² ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½ΠΎΠΌ Ρ€Π΅Π»ΠΈΠ·Π΅
  • ... Π‘ΠΎΠ·Π΄Π°Π΄ΠΈΠΌ ΡƒΠ΄ΠΎΠ±Π½Ρ‹ΠΉ шаблон для задания адрСса Π‘Π”.
env:
  - name: POST_DATABASE_HOST
   value: {{ .Values.databaseHost }}

We will set the DB via the databaseHost variable. Sometimes this flat variable format is preferable to a nested structure like database.host, because with a nested structure you would have to define the whole database block, otherwise helm raises an error.

Use the default function. If databaseHost is not defined or is empty, the output of the printf function is used instead (it simply builds the string <release-name>-mongodb).

value: {{ .Values.databaseHost | default (printf "%s-mongodb" .Release.Name) }} 

The result:

env:
  - name: POST_DATABASE_HOST
    value: {{ .Values.databaseHost | default (printf "%s-mongodb" .Release.Name) }}

If databaseHost is not set, the address of the database started inside the release will be used.

Documentation on templating and functions.

post/values.yaml

---
service:
  internalPort: 5000
  externalPort: 5000

image:
  repository: ozyab/post
  tag: latest

databaseHost:

Templatize the comment service

comment/templates/deployment.yaml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    app: reddit
    component: comment
    release: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reddit
      component: comment
      release: {{ .Release.Name }}
  template:
    metadata:
      name: comment
      labels:
        app: reddit
        component: comment
        release: {{ .Release.Name }}
    spec:
      containers:
      - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        name: comment
        ports:
        - containerPort: {{ .Values.service.internalPort }}
          name: comment
          protocol: TCP
        env:
        - name: COMMENT_DATABASE_HOST
          value: {{ .Values.databaseHost | default (printf "%s-mongodb" .Release.Name) }}
comment/templates/service.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    app: reddit
    component: comment
    release: {{ .Release.Name }}
spec:
  type: ClusterIP
  ports:
  - port: {{ .Values.service.externalPort }}
    protocol: TCP
    targetPort: {{ .Values.service.internalPort }}
  selector:
    app: reddit
    component: comment
    release: {{ .Release.Name }}
comment/values.yaml

---
service:
  internalPort: 9292
  externalPort: 9292

image:
  repository: ozyab/comment
  tag: latest

databaseHost:

Π˜Ρ‚ΠΎΠ³ΠΎΠ²Π°Ρ структура ΠΏΡ€ΠΎΠ΅ΠΊΡ‚Π° выглядит Ρ‚Π°ΠΊ:

$ tree
.
β”œβ”€β”€ comment
β”‚   β”œβ”€β”€ Chart.yaml
β”‚   β”œβ”€β”€ templates
β”‚   β”‚   β”œβ”€β”€ deployment.yml
β”‚   β”‚   └── service.yml
β”‚   └── values.yaml
β”œβ”€β”€ post
β”‚   β”œβ”€β”€ Chart.yaml
β”‚   β”œβ”€β”€ templates
β”‚   β”‚   β”œβ”€β”€ deployment.yaml
β”‚   β”‚   └── service.yml
β”‚   └── values.yaml
└── ui
    β”œβ”€β”€ Chart.yaml
    β”œβ”€β”€ templates
    β”‚   β”œβ”€β”€ deployment.yaml
    β”‚   β”œβ”€β”€ ingress.yaml
    β”‚   └── service.yaml
    └── values.yaml

It is also worth mentioning helm's helpers and the template function. A helper is a function we write ourselves; it usually encapsulates relatively complex logic. The templates of these functions live in the _helpers.tpl file.

An example of a comment.fullname function:

charts/comment/templates/_helpers.tpl

{{- define "comment.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name }}
{{- end -}}

which yields the same result as:

{{ .Release.Name }}-{{ .Chart.Name }}

Π—Π°ΠΌΠ΅Π½ΠΈΠΌ Π² ΡΠΎΠΎΡ‚Π²Π΅Ρ‚ΡΡ‚Π²ΡƒΡŽΡ‰ΠΈΠ΅ строчки Π² Ρ„Π°ΠΉΠ»Π΅, Ρ‡Ρ‚ΠΎΠ±Ρ‹ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ helper:

charts/comment/templates/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ template "comment.fullname" . }}  # was: {{ .Release.Name }}-{{ .Chart.Name }}

the template function calls the comment.fullname function defined earlier in _helpers.tpl.

The structure of the template call that imports a function is {{ template "comment.fullname" . }}, where template is the template function

  β€’ comment.fullname - the name of the function to import
  β€’ "." - the scope passed in for the import

"." is the full scope of all variables (you could pass .Chart instead, but then .Values would not be available inside the function)

  β€’ Create a _helpers.tpl file in the templates folder of each of the ui, post and comment services
  β€’ Put a ".fullname" function into each _helpers.tpl, replacing the prefix with the chart name of the corresponding service
  β€’ In each manifest template, insert the following function where needed (mostly in name: fields): {{ template "comment.fullname" . }}

Dependency management

We have created Charts for each component of our application. Each of them can be installed separately with $ helm install <chart-path> <release-name>, but they would run as separate releases and would not see each other.

Using the dependency management mechanism, let's create a single reddit Chart that ties our components together.

Structure of the reddit application: Reddit app structure

Create the file:

reddit/Chart.yaml

name: reddit
version: 0.1.0
description: OTUS sample reddit application
maintainers:
  - name: Vyacheslav Egorov
    email: 692677@mail.ru

Create an empty reddit/values.yaml file.

The dependencies file:

reddit/requirements.yaml

dependencies:
  - name: ui                    # The name and version must match
    version: "1.0.0"            # the contents of ui/Chart.yaml
    repository: "file://../ui"  # Path relative to the location of requirements.yaml itself
  - name: post
    version: 1.0.0
    repository: file://../post
  - name: comment
    version: 1.0.0
    repository: file://../comment

The dependencies need to be fetched (when the Chart is not packaged into a tgz archive):

$ helm dep update

A requirements.lock file will be created pinning the dependencies, and a charts directory with the dependencies packaged as archives will appear as well.

Folder structure:

β”œβ”€β”€ Chart.yaml
β”œβ”€β”€ charts
β”‚   β”œβ”€β”€ comment-1.0.0.tgz
β”‚   β”œβ”€β”€ post-1.0.0.tgz
β”‚   └── ui-1.0.0.tgz
β”œβ”€β”€ requirements.lock
β”œβ”€β”€ requirements.yaml
└── values.yaml

We will not write a Chart for the database by hand; let's take a ready-made one.

Find a Chart in the public repository: $ helm search mongo

output

NAME                       CHART VERSION   APP VERSION  DESCRIPTION
stable/mongodb             5.3.1           4.0.5        NoSQL document-oriented database that stores JSON-like do...
stable/mongodb-replicaset  3.9.0           3.6          NoSQL document-oriented database that stores JSON-like do...

Add to reddit/requirements.yaml:

reddit/requirements.yaml

dependencies:
...
  - name: comment
    version: 1.0.0
    repository: file://../comment
  - name: mongodb
    version: 0.4.18
    repository: https://kubernetes-charts.storage.googleapis.com

Fetch the dependencies: helm dep update

Install the application: kubernetes/Charts $ helm install reddit --name reddit-test

Find the application's IP address: kubectl get ingress

NAME             HOSTS   ADDRESS          PORTS   AGE
reddit-test-ui   *       35.201.126.86    80      1m

Charts-based app running

There is a problem: the UI service does not know how to reach the post and comment services correctly, because their names are now dynamic and depend on the chart names.

Environment variables are already defined in the UI service's Dockerfile. They need to point to the right backends:

ENV POST_SERVICE_HOST post
ENV POST_SERVICE_PORT 5000
ENV COMMENT_SERVICE_HOST comment
ENV COMMENT_SERVICE_PORT 9292

Add to ui/deployments.yaml:

ui/deployments.yaml

...
spec:
...
    env:
    - name: POST_SERVICE_HOST
      value: {{  .Values.postHost | default (printf "%s-post" .Release.Name) }}
    - name: POST_SERVICE_PORT
      value: {{  .Values.postPort | default "5000" | quote }}
    - name: COMMENT_SERVICE_HOST
      value: {{  .Values.commentHost | default (printf "%s-comment" .Release.Name) }}
    - name: COMMENT_SERVICE_PORT
      value: {{  .Values.commentPort | default "9292" | quote }}
# quote is a function that adds quotes; this matters for numbers and boolean values
...

Add to ui/values.yaml (link to gist)

ui/values.yaml
...
postHost:
postPort:
commentHost:
commentPort: 

Now variables for the dependencies can be set directly in the values.yaml of the reddit Chart itself. They override the variable values from the dependent charts:

comment: # referencing variables of the charts listed as dependencies
  image:
    repository: ozyab/comment
    tag: latest
  service:
    externalPort: 9292

post:
  image:
    repository: ozyab/post
    tag: latest
  service:
    externalPort: 5000

ui:
  image:
    repository: ozyab/ui
    tag: latest
  service:
    externalPort: 9292

After updating the UI, the reddit chart's dependencies need to be refreshed: $ helm dep update ./reddit

Update the release installed in k8s: $ helm upgrade <release-name> ./reddit

Charts-based app running

GitLab + Kubernetes

  • Установим Gitlab

Gitlab Π±ΡƒΠ΄Π΅ΠΌ ΡΡ‚Π°Π²ΠΈΡ‚ΡŒ Ρ‚Π°ΠΊΠΆΠ΅ с ΠΏΠΎΠΌΠΎΡ‰ΡŒΡŽ Helm Chart’а ΠΈΠ· ΠΏΠ°ΠΊΠ΅Ρ‚Π° Omnibus.

  • Π”ΠΎΠ±Π°Π²ΠΈΠΌ Ρ€Π΅ΠΏΠΎΠ·ΠΈΡ‚ΠΎΡ€ΠΈΠΉ Gitlab $ helm repo add gitlab https://charts.gitlab.io
  • ΠœΡ‹ Π±ΡƒΠ΄Π΅ΠΌ ΠΌΠ΅Π½ΡΡ‚ΡŒ ΠΊΠΎΠ½Ρ„ΠΈΠ³ΡƒΡ€Π°Ρ†ΠΈΡŽ Gitlab, поэтому скачаСм Chart
$ helm fetch gitlab/gitlab-omnibus --version 0.1.37 --untar
$ cd gitlab-omnibus
  • ΠŸΠΎΠΏΡ€Π°ΠΈΠΌ gitlab-omnibus/values.yaml
baseDomain: example.com
legoEmail: you@example.com
  • Π”ΠΎΠ±Π°Π²ΡŒΡ‚Π΅ Π² gitlab-omnibus/templates/gitlab/gitlab-svc.yaml:
...
    - name: web
      port: 80
      targetPort: workhorse
  • ΠŸΠΎΠΏΡ€Π°Π²ΠΈΡ‚ΡŒ Π² gitlab-omnibus/templates/gitlab-config.yaml:
...
    heritage: "{{ .Release.Service }}"
data:
  external_scheme: http
  external_hostname: {{ template "fullname" . }}
...
  • ΠŸΠΎΠΏΡ€Π°Π²ΠΈΠΌ Π² gitlab-omnibus/templates/ingress/gitlab-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
...
spec:
  tls:
...
  rules:
  - host: {{ template "fullname" . }}
    http:
      paths:
...

Install GitLab: $ helm install --name gitlab . -f values.yaml

Find the IP address assigned to the ingress controller: $ kubectl get service -n nginx-ingress nginx

Add an entry to the local /etc/hosts: # echo "35.184.43.93 gitlab-gitlab staging production" >> /etc/hosts

Go to http://gitlab-gitlab and set your own password.

Start the project

Create a group ozyab. In the group settings, choose CI/CD and set the CI_REGISTRY_USER and CI_REGISTRY_PASSWORD variables to the dockerhub login and password. These credentials will be used when building and releasing docker images from Gitlab CI.

In the group, create a new project reddit-deploy, as well as comment, post and ui.

Locally, create a Gitlab_ci directory with the following structure:

Gitlab_ci
β”œβ”€β”€ comment
β”œβ”€β”€ post
β”œβ”€β”€ reddit-deploy
└── ui

ΠŸΠ΅Ρ€Π΅Π½Π΅ΡΠ΅ΠΌ исходныС ΠΊΠΎΠ΄Ρ‹ сСрвисов ΠΈΠ· src/ Π² kubernetes/Gitlab_ci/ui.

Π’ Π΄ΠΈΡ€Π΅ΠΊΡ‚ΠΎΡ€ΠΈΠΈ Gitlab_ci/ui:

  • Π˜Π½ΠΈΡ†ΠΈΠ°Π»ΠΈΠ·ΠΈΡ€ΡƒΠ΅ΠΌ Π»ΠΎΠΊΠ°Π»ΡŒΠ½Ρ‹ΠΉ git-Ρ€Π΅ΠΏΠΎΠ·ΠΈΡ‚ΠΎΡ€ΠΈΠΉ: $ git init
  • Π”ΠΎΠ±Π°Π²ΠΈΠΌ ΡƒΠ΄Π°Π»Π΅Π½Π½Ρ‹ΠΉ Ρ€Π΅ΠΏΠΎΠ·ΠΈΡ‚ΠΎΡ€ΠΈΠΉ $ git remote add origin http://gitlab-gitlab/ozyab/ui.git
  • Π—Π°ΠΊΠΎΠΌΠΌΠΈΡ‚ΠΈΠΌ ΠΈ ΠΎΡ‚ΠΏΡ€Π°Π²ΠΈΠΌ Π² gitlab:
$ git add .
$ git commit -m "init"
$ git push origin master

ΠŸΠ΅Ρ€Π΅Π½Π΅ΡΠ΅ΠΌ содСрТимоС Π΄ΠΈΡ€Π΅ΠΊΡ‚ΠΎΡ€ΠΈΠΈ Charts (ΠΏΠ°ΠΏΠΊΠΈ ui, post, comment, reddit) Π² Gitlab_ci/reddit-deploy ΠΈ Π·Π°ΠΏΡƒΡˆΠΈΠΌ Π² reddit-deploy.

Настроим CI

Π”ΠΎΠ±Π°Π²ΠΈΠΌ Ρ„Π°ΠΉΠ» .gitlab-ci.yml, Π·Π°ΠΏΡƒΡˆΠΈΠΌ Π΅Π³ΠΎ ΠΈ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΈΠΌ, Ρ‡Ρ‚ΠΎ сборка ΡƒΡΠΏΠ΅ΡˆΠ½Π°.

Building project UI in giclab-ci

Π’ Ρ‚Π΅ΠΊΡƒΡ‰Π΅ΠΉ ΠΊΠΎΠ½Ρ„ΠΈΠ³ΡƒΡ€Π°Ρ†ΠΈΠΈ CI выполняСт:

  • Build: Π‘Π±ΠΎΡ€ΠΊΡƒ Π΄ΠΎΠΊΠ΅Ρ€-ΠΎΠ±Ρ€Π°Π·Π° с Ρ‚Π΅Π³ΠΎΠΌ master
  • Test: Π€ΠΈΠΊΡ‚ΠΈΠ²Π½ΠΎΠ΅ тСстированиС
  • Release: Π‘ΠΌΠ΅Π½Ρƒ Ρ‚Π΅Π³Π° с master Π½Π° Ρ‚Π΅Π³ ΠΈΠ· Ρ„Π°ΠΉΠ»Π° VERSION ΠΈ ΠΏΡƒΡˆ docker-ΠΎΠ±Ρ€Π°Π·Π° с Π½ΠΎΠ²Ρ‹ΠΌ Ρ‚Π΅Π³ΠΎΠΌ

Job для выполнСния ΠΊΠ°ΠΆΠ΄ΠΎΠΉ Π·Π°Π΄Π°Ρ‡ΠΈ запускаСтся Π² ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½ΠΎΠΌ Kubernetes POD-Π΅.

Π’Ρ€Π΅Π±ΡƒΠ΅ΠΌΡ‹Π΅ ΠΎΠΏΠ΅Ρ€Π°Ρ†ΠΈΠΈ Π²Ρ‹Π·Ρ‹Π²Π°ΡŽΡ‚ΡΡ Π² Π±Π»ΠΎΠΊΠ°Ρ… script:

 script:
 - setup_docker
 - build 

The operations themselves are defined as bash functions in the .auto_devops block:

.auto_devops: &auto_devops |
 function setup_docker() {
…
 }
 function release() {
…
 }
 function build() {
…
}

Also add a .gitlab-ci.yml to the Post and Comment repositories.

Let's give developers the ability to launch a separate environment in Kubernetes on a commit to a feature branch.

Slightly update the ingress config for the UI service:

reddit-deploy/ui/templates/ingress.yml

...
  name: {{ template "ui.fullname" . }}
  annotations:
    kubernetes.io/ingress.class: {{ .Values.ingress.class }}
spec:
  rules:
  - host: {{ .Values.ingress.host | default .Release.Name }}
    http:
      paths:
      - path: /  # The controller here is nginx, so the rule is different
        backend:
          serviceName: {{ template "ui.fullname" . }}
          servicePort: {{ .Values.service.externalPort }}
reddit-deploy/ui/values.yml

...
ingress:
  class: nginx # We will use the nginx-ingress that was installed together with gitlab
...            # (it is faster and its rules are more flexible than GCP's)
  β€’ Create a new branch in the ui repository: $ git checkout -b feature/3
  β€’ Update ui/.gitlab-ci.yml
  β€’ Commit and push the changes:
$ git commit -am "Add review feature"
$ git push origin feature/3

ΠœΡ‹ Π΄ΠΎΠ±Π°Π²ΠΈΠ»ΠΈ ΡΡ‚Π°Π΄ΠΈΡŽ review, Π·Π°ΠΏΡƒΡΠΊΠ°ΡŽΡ‰ΡƒΡŽ ΠΏΡ€ΠΈΠ»ΠΎΠΆΠ΅Π½ΠΈΠ΅ Π² k8s ΠΏΠΎ ΠΊΠΎΠΌΠΌΠΈΡ‚Ρƒ Π² feature-Π±Ρ€Π°Π½Ρ‡ΠΈ (Π½Π΅ master):

review:
  stage: review
  script:
    - install_dependencies
    - ensure_namespace
    - install_tiller
    - deploy
  variables:
    KUBE_NAMESPACE: review
    host: $CI_PROJECT_PATH_SLUG-$CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_PROJECT_PATH/$CI_COMMIT_REF_NAME
    url: http://$CI_PROJECT_PATH_SLUG-$CI_COMMIT_REF_SLUG
  only:
    refs:
      - branches
    kubernetes: active
  except:
    - master

We added a deploy function that clones the Chart from the reddit-deploy repository and makes a release in the review namespace with the application image built at the build stage:

  function deploy() {
  ...
    echo "Clone deploy repository..."
    git clone http://gitlab-gitlab/$CI_PROJECT_NAMESPACE/reddit-deploy.git

    echo "Download helm dependencies..."
    helm dep update reddit-deploy/reddit

    echo "Deploy helm release $name to $KUBE_NAMESPACE"
    helm upgrade --install \
      --wait \
      --set ui.ingress.host="$host" \
      --set $CI_PROJECT_NAME.image.tag=$CI_APPLICATION_TAG \
      --namespace="$KUBE_NAMESPACE" \
      --version="$CI_PIPELINE_ID-$CI_JOB_ID" \
      "$name" \
      reddit-deploy/reddit/
  }

We can see which releases are running with helm ls:

NAME                           REVISION   UPDATED                   STATUS    CHART                  NAMESPACE
gitlab                         1          Sun Feb  3 18:07:41 2019  DEPLOYED  gitlab-omnibus-0.1.37  default
review-ozyab-ui-f-92dwpg       1          Sun Feb  3 19:42:54 2019  DEPLOYED  reddit-0.1.0           review

Environments created for such purposes are temporary and need to be torn down when no longer needed. Add to .gitlab-ci.yml:

stop_review:
  stage: cleanup
  variables:
    GIT_STRATEGY: none
  script:
    - install_dependencies
    - delete
  environment:
    name: review/$CI_PROJECT_PATH/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
  allow_failure: true
  only:
    refs:
      - branches
    kubernetes: active
  except:
    - master

Add the function that deletes the environment:

function delete() {
    track="${1-stable}"
    name="$CI_ENVIRONMENT_SLUG"
    helm delete "$name" --purge || true
  }

Pipeline of feature/3 branch. Delete the environment by pressing the button above. helm ls:

NAME    REVISION   UPDATED                   STATUS    CHART                  NAMESPACE
gitlab  1          Sun Feb  3 18:07:41 2019  DEPLOYED  gitlab-omnibus-0.1.37  default

Copy the resulting .gitlab-ci.yml for ui into the post and comment repositories.

Now create staging and production environments for the application in reddit-deploy/.gitlab-ci.yml

Push the master branch to the reddit-deploy repository

This file differs from the previous ones in that it:

  β€’ Does not build docker images
  β€’ Deploys to static environments (staging and production)
  β€’ Does not delete environments

Staging environment

Homework 23 (kubernetes-3)

Build Status

A Service is an abstraction that defines access endpoints (Endpoints) and the way to communicate with them (nodePort, LoadBalancer, ClusterIP).

A Service defines the access endpoints (Endpoints):

  β€’ selector services (k8s itself finds the PODs by labels)
  β€’ selector-less services (we describe the specific endpoints by hand)

and the way to communicate with them (the service type):

  β€’ ClusterIP - the service can only be reached from inside the cluster
  β€’ nodePort - a client outside the cluster connects to a published port
  β€’ LoadBalancer - the client connects to a cloud load-balancing resource (aws elb, Google gclb)
  β€’ ExternalName - a resource external to the cluster

A service was described earlier:

post-service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: post
  labels:
    app: reddit
    component: post
spec:
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: reddit
    component: post

This is a selector service of type ClusterIP (the type is not specified because it is the default).

ClusterIP is a virtual IP address (there is no real interface, pod or machine with this address) from the internal address range; it hides the IP addresses of the real PODs behind it. A service of any type (except ExternalName) is assigned such an IP address.

$ kubectl get services -n dev

output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
comment      ClusterIP   10.11.248.46    <none>        9292/TCP         2d
comment-db   ClusterIP   10.11.252.202   <none>        27017/TCP        2d
mongo        ClusterIP   10.11.243.229   <none>        27017/TCP        2d
post         ClusterIP   10.11.254.105   <none>        5000/TCP         2d
post-db      ClusterIP   10.11.240.87    <none>        27017/TCP        2d
ui           NodePort    10.11.245.7     <none>        9292:32092/TCP   2d

Interaction scheme: Service interaction scheme

A Service is just an abstraction and a description of how to reach a service, but it relies on real mechanisms and objects: a DNS server, load balancers, iptables.

To reach a service we need to resolve its address by name. Kubernetes does not ship its own standalone DNS server for name resolution, so the kube-dns plugin is used (it is also a Pod).

Its tasks:

  β€’ query the Kubernetes API and watch Service objects
  β€’ record DNS entries about Services in its own database
  β€’ provide a DNS service for resolving names into IP addresses (both internal and external)

Service interaction scheme with kube-dns

With the kube-dns service disabled, connectivity between the reddit-app components disappears and the application stops working.

Scale to zero the deployment that makes sure there are always enough kube-dns pods, and do the same with kube-dns itself:

$ kubectl scale deployment --replicas 0 -n kube-system kube-dns-autoscaler
$ kubectl scale deployment --replicas 0 -n kube-system kube-dns

Run the command:

kubectl exec -ti -n dev post-8ff9c4cb9-h4zpq ping comment

output:

ping: bad address 'comment'
command terminated with exit code 1

Bring the autoscaler back:

kubectl scale deployment --replicas 1 -n kube-system kube-dns-autoscaler

ClusterIP is virtual and does not belong to any real physical entity. Reading it and handling the packets addressed to it is done in our case by iptables, which is configured by the kube-proxy utility (which in turn pulls the information from the API server).

kube-proxy itself can be configured to accept the traffic, but this is legacy behavior and not recommended.

Service interaction scheme using kube-dns

Regardless of whether the pods are on the same node or on different ones, the traffic goes through the chain shown above. Kubernetes does not ship a mechanism for building overlay networks (like Docker Swarm does); it only provides an interface for it. Separate add-ons are used to create overlay networks: Weave, Calico, Flannel, ... .

Google Kubernetes Engine (GKE) uses its own plugin, kubenet (it is part of the kubelet). It only works together with the GCP platform and essentially configures Google networks to carry Kubernetes traffic. That is why you will not see any overlay networks in the Docker configuration right now.

Service interaction scheme using kubenet

  • NodePort - ΠΏΠΎΡ…ΠΎΠΆ Π½Π° сСрвис Ρ‚ΠΈΠΏΠ° ClusterIP, Ρ‚ΠΎΠ»ΡŒΠΊΠΎ ΠΊ Π½Π΅ΠΌΡƒ прибавляСтся ΠΏΡ€ΠΎΡΠ»ΡƒΡˆΠΈΠ²Π°Π½ΠΈΠ΅ ΠΏΠΎΡ€Ρ‚ΠΎΠ² Π½ΠΎΠ΄ (всСх Π½ΠΎΠ΄) для доступа ΠΊ сСрвисам снаруТи. ΠŸΡ€ΠΈ этом ClusterIP Ρ‚Π°ΠΊΠΆΠ΅ назначаСтся этому сСрвису для доступа ΠΊ Π½Π΅ΠΌΡƒ ΠΈΠ·Π½ΡƒΡ‚Ρ€ΠΈ кластСра. kube-proxy ΠΏΡ€ΠΎΡΠ»ΡƒΡˆΠΈΠ²Π°Π΅Ρ‚ΡΡ Π»ΠΈΠ±ΠΎ Π·Π°Π΄Π°Π½Π½Ρ‹ΠΉ ΠΏΠΎΡ€Ρ‚ (nodePort: 32092), Π»ΠΈΠ±ΠΎ ΠΏΠΎΡ€Ρ‚ ΠΈΠ· Π΄ΠΈΠ°ΠΏΠ°Π·ΠΎΠ½Π° 30000-32670. Π”Π°Π»ΡŒΡˆΠ΅ IPTables Ρ€Π΅ΡˆΠ°Π΅Ρ‚, Π½Π° ΠΊΠ°ΠΊΠΎΠΉ Pod ΠΏΠΎΠΏΠ°Π΄Π΅Ρ‚ Ρ‚Ρ€Π°Ρ„ΠΈΠΊ.

БСрвис UI Ρ€Π°Π½ΡŒΡˆΠ΅ ΡƒΠΆΠ΅ Π±Ρ‹Π» ΠΎΠΏΡƒΠ±Π»ΠΈΠΊΠΎΠ²Π°Π½ Π½Π°Ρ€ΡƒΠΆΡƒ с ΠΏΠΎΠΌΠΎΡ‰ΡŒΡŽ NodePort:

ui-service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    app: reddit
    component: ui
spec:
  type: NodePort
  ports:
  - port: 9292
    nodePort: 32092
    protocol: TCP
    targetPort: 9292
  selector:
    app: reddit
    component: ui

Service interaction scheme using nodePort

  • LoadBalancer

NodePort Ρ…ΠΎΡ‚ΡŒ ΠΈ прСдоставляСт доступ ΠΊ сСрвису снаруТи, Π½ΠΎ ΠΎΡ‚ΠΊΡ€Ρ‹Π²Π°Ρ‚ΡŒ всС ΠΏΠΎΡ€Ρ‚Ρ‹ Π½Π°Ρ€ΡƒΠΆΡƒ ΠΈΠ»ΠΈ ΠΈΡΠΊΠ°Ρ‚ΡŒ IPадрСса Π½Π°ΡˆΠΈΡ… Π½ΠΎΠ΄ (ΠΊΠΎΡ‚ΠΎΡ€Ρ‹Π΅ Π²ΠΎΠΎΠ±Ρ‰Π΅ динамичСскиС) Π½Π΅ ΠΎΡ‡Π΅Π½ΡŒ ΡƒΠ΄ΠΎΠ±Π½ΠΎ.

Π’ΠΈΠΏ LoadBalancer позволяСт Π½Π°ΠΌ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ внСшний ΠΎΠ±Π»Π°Ρ‡Π½Ρ‹ΠΉ балансировщик Π½Π°Π³Ρ€ΡƒΠ·ΠΊΠΈ ΠΊΠ°ΠΊ Π΅Π΄ΠΈΠ½ΡƒΡŽ Ρ‚ΠΎΡ‡ΠΊΡƒ Π²Ρ…ΠΎΠ΄Π° Π² наши сСрвисы, Π° Π½Π΅ ΠΏΠΎΠ»Π°Π³Π°Ρ‚ΡŒΡΡ Π½Π° IPTables ΠΈ Π½Π΅ ΠΎΡ‚ΠΊΡ€Ρ‹Π²Π°Ρ‚ΡŒ Π½Π°Ρ€ΡƒΠΆΡƒ вСсь кластСр.

Service interaction scheme using loadBalancer

Настроим ΡΠΎΠΎΡ‚Π²Π΅Ρ‚ΡΡ‚Π²ΡƒΡŽΡ‰ΠΈΠΌ ΠΎΠ±Ρ€Π°Π·ΠΎΠΌ Service UI:

ui-service.yml 

---
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    app: reddit
    component: ui
spec:
  type: LoadBalancer
  ports:
  - port: 80  # The port that will be open on the load balancer
    nodePort: 32092 # A port is also opened on the node, but we don't need it and it can even be removed
    protocol: TCP
    targetPort: 9292 # The POD's port
  selector:
    app: reddit
    component: ui

Apply the changes: $ kubectl apply -f ui-service.yml -n dev

Check: $ kubectl get service -n dev --selector component=ui

output

NAME   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
ui     LoadBalancer   10.11.245.7   35.222.133.XXX   80:32092/TCP   2d

Balancing rule in GCP Network services

Load balancing with a Service of type LoadBalancer has a number of drawbacks:

  β€’ traffic cannot be managed by http URI (L7 balancing)
  β€’ only cloud load balancers (AWS, GCP) can be used
  β€’ no flexible traffic rules
  β€’ Ingress

For more convenient management of incoming external traffic and to address the LoadBalancer drawbacks, another Kubernetes object can be used - Ingress.

Ingress is a set of rules inside the Kubernetes cluster that lets incoming connections reach the Services.

By themselves Ingresses are just rules; an Ingress Controller is needed to apply them.

  • Ingress Conroller

Π’ ΠΎΡ‚Π»ΠΈΡ‡ΠΈΠ΅ ΠΎΡΡ‚Π°Π»ΡŒΠ½Ρ‹Ρ… ΠΊΠΎΠ½Ρ‚Ρ€ΠΎΠ»Π»Π΅Ρ€ΠΎΠ² k8s - ΠΎΠ½ Π½Π΅ стартуСт вмСстС с кластСром.

Ingress Controller - это скорСС ΠΏΠ»Π°Π³ΠΈΠ½ (Π° Π·Π½Π°Ρ‡ΠΈΡ‚ ΠΈ ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½Ρ‹ΠΉ POD), ΠΊΠΎΡ‚ΠΎΡ€Ρ‹ΠΉ состоит ΠΈΠ· 2-Ρ… Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΎΠ½Π°Π»ΡŒΠ½Ρ‹Ρ… частСй:

  • ΠŸΡ€ΠΈΠ»ΠΎΠΆΠ΅Π½ΠΈΠ΅, ΠΊΠΎΡ‚ΠΎΡ€ΠΎΠ΅ отслСТиваСт Ρ‡Π΅Ρ€Π΅Π· k8s API Π½ΠΎΠ²Ρ‹Π΅ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚Ρ‹ Ingress ΠΈ обновляСт ΠΊΠΎΠ½Ρ„ΠΈΠ³ΡƒΡ€Π°Ρ†ΠΈΡŽ балансировщика
  • Балансировщик (Nginx, haproxy, traefik,…), ΠΊΠΎΡ‚ΠΎΡ€Ρ‹ΠΉ ΠΈ занимаСтся ΡƒΠΏΡ€Π°Π²Π»Π΅Π½ΠΈΠ΅ΠΌ сСтСвым Ρ‚Ρ€Π°Ρ„ΠΈΠΊΠΎΠΌ

ΠžΡΠ½ΠΎΠ²Π½Ρ‹Π΅ Π·Π°Π΄Π°Ρ‡ΠΈ, Ρ€Π΅ΡˆΠ°Π΅ΠΌΡ‹Π΅ с ΠΏΠΎΠΌΠΎΡ‰ΡŒΡŽ Ingress’ов:

  • ΠžΡ€Π³Π°Π½ΠΈΠ·Π°Ρ†ΠΈΡ Π΅Π΄ΠΈΠ½ΠΎΠΉ Ρ‚ΠΎΡ‡ΠΊΠΈ Π²Ρ…ΠΎΠ΄Π° Π² прилоТСния снаруТи
  • ΠžΠ±Π΅ΡΠΏΠ΅Ρ‡Π΅Π½ΠΈΠ΅ балансировки Ρ‚Ρ€Π°Ρ„ΠΈΠΊΠ°
  • ВСрминация SSL
  • Π’ΠΈΡ€Ρ‚ΡƒΠ°Π»ΡŒΠ½Ρ‹ΠΉ хостинг Π½Π° основС ΠΈΠΌΠ΅Π½ ΠΈ Ρ‚.Π΄

Since ours is a web application, it makes sense to use an L7 balancer instead of a Service LoadBalancer.

In GKE, Google already lets us use its own load-balancing solutions as Ingress controllers.

Make sure the built-in Ingress is enabled: Service interaction scheme using kubenet

Create an Ingress for the UI service:

ui-ingress.yml

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ui
spec:
  backend:
    serviceName: ui
    servicePort: 80

This is a Single Service Ingress, which means the whole ingress controller will simply balance load across the Nodes for a single service (very similar to a Service LoadBalancer).

$ kubectl apply -f ui-ingress.yml -n dev

New load balancer rule

ΠŸΠΎΡΠΌΠΎΡ‚Ρ€ΠΈΠΌ Π² кластСр: kubectl get ingress -n dev

output

NAME   HOSTS   ADDRESS         PORTS   AGE
ui     *       35.201.126.XXX   80      2m

The scheme now looks like this:

Service interaction scheme using kubenet

Π’ Ρ‚Π΅ΠΊΡƒΡ‰Π΅ΠΉ схСмС Π΅ΡΡ‚ΡŒ нСсколько нСдостатков:

  • Ρƒ нас 2 балансировщика для 1 сСрвиса
  • ΠœΡ‹ Π½Π΅ ΡƒΠΌΠ΅Π΅ΠΌ ΡƒΠΏΡ€Π°Π²Π»ΡΡ‚ΡŒ Ρ‚Ρ€Π°Ρ„ΠΈΠΊΠΎΠΌ Π½Π° ΡƒΡ€ΠΎΠ²Π½Π΅ HTTP

Один ΠΈΠ· балансировщиков ΠΌΠΎΠΆΠ½ΠΎ ΡƒΠ±Ρ€Π°Ρ‚ΡŒ. Обновим сСрвис UI:

ui-service.yml

---
apiVersion: v1
kind: Service
metadata:
  ...
spec:
  type: NodePort #Π·Π°ΠΌΠ΅Π½ΠΈΠΌ Π½Π° NodePort
  ports:
  - port: 9292
    protocol: TCP
    targetPort: 9292
  selector:
    app: reddit
    component: ui

$ kubectl apply -f … -n dev для примСнСния настроСк.
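
УбСдимся, Ρ‡Ρ‚ΠΎ сСрвис Π΄Π΅ΠΉΡΡ‚Π²ΠΈΡ‚Π΅Π»ΡŒΠ½ΠΎ смСнил Ρ‚ΠΈΠΏ Π½Π° NodePort (ΠΏΡ€ΠΈΠΌΠ΅Ρ€Π½Π°Ρ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠ°):

$ kubectl get svc ui -n dev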

Заставим Ingress Controller Ρ€Π°Π±ΠΎΡ‚Π°Ρ‚ΡŒ ΠΊΠ°ΠΊ классичСский Π²Π΅Π±-балансировщик:

ui-ingress.yml

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ui
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: ui
          servicePort: 9292

$ kubectl apply -f ui-ingress.yml -n dev для примСнСния

Reddit app started using ingress balancer based on HTTP proto

  • Secret

Π—Π°Ρ‰ΠΈΡ‚ΠΈΠΌ наш сСрвис с ΠΏΠΎΠΌΠΎΡ‰ΡŒΡŽ TLS. НайдСм наш IP: $ kubectl get ingress -n dev

output

NAME   HOSTS   ADDRESS         PORTS   AGE
ui     *       35.201.126.86   80      1d

ΠŸΠΎΠ΄Π³ΠΎΡ‚ΠΎΠ²ΠΈΠΌ сСртификат ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΡ IP ΠΊΠ°ΠΊ CN:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=35.201.126.86"
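
ΠŸΡ€ΠΈ ΠΆΠ΅Π»Π°Π½ΠΈΠΈ ΠΌΠΎΠΆΠ½ΠΎ ΡƒΠ±Π΅Π΄ΠΈΡ‚ΡŒΡΡ, Ρ‡Ρ‚ΠΎ Π² сСртификатС ΡƒΠΊΠ°Π·Π°Π½ наш IP Π² качСствС CN ΠΈ Π·Π°Π΄Π°Π½ срок дСйствия (ΠΏΡ€ΠΈΠΌΠ΅Ρ€ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠΈ):

$ openssl x509 -in tls.crt -noout -subject -dates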

Π—Π°Π³Ρ€ΡƒΠ·ΠΊΠ° сСртификата Π² кластСр:

$ kubectl create secret tls ui-ingress --key tls.key --cert tls.crt -n dev

ΠŸΡ€ΠΎΠ²Π΅Ρ€ΠΊΠ° наличия сСртификата:

$ kubectl describe secret ui-ingress -n dev

output

Name:         ui-ingress
Namespace:    dev
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  989 bytes
tls.key:  1704 bytes

Настроим Ingress Π½Π° ΠΏΡ€ΠΈΠ΅ΠΌ Ρ‚ΠΎΠ»ΡŒΠΊΠΎ HTTPS Ρ‚Ρ€Π°Ρ„ΠΈΠΊΠ°:

ui-ingress.yml

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ui
  annotations:
    kubernetes.io/ingress.allow-http: "false" #ΠΎΡ‚ΠΊΠ»ΡŽΡ‡Π°Π΅ΠΌ проброс http
spec:
  tls:
  - secretName: ui-ingress #ΠΏΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π°Π΅ΠΌ сСртификат
  backend:
    serviceName: ui
    servicePort: 9292

$ kubectl apply -f ui-ingress.yml -n dev для примСнСния.

ΠŸΠ΅Ρ€Π΅ΠΉΠ΄Π΅ΠΌ Π½Π° страницу load-balancer'Π°:

Service interaction scheme using kubenet

Π’ΠΈΠ΄ΠΈΠΌ, Ρ‡Ρ‚ΠΎ Ρƒ нас всС Π΅Ρ‰Π΅ HTTP load balancer. Π’Ρ€ΡƒΡ‡Π½ΡƒΡŽ ΡƒΠ΄Π°Π»ΠΈΠΌ Ingress ΠΈ создадим Π΅Π³ΠΎ Π·Π°Π½ΠΎΠ²ΠΎ, Ρ‡Ρ‚ΠΎΠ±Ρ‹ load balancer пСрСсоздался:

$ kubectl delete ingress ui -n dev
$ kubectl apply -f ui-ingress.yml -n dev
  • Network Policy

Π Π°Π½Π΅Π΅ ΠΌΡ‹ приняли ΡΠ»Π΅Π΄ΡƒΡŽΡ‰ΡƒΡŽ схСму сСтСй сСрвисов:

Service network scheme

Π’ Kubernetes Ρ‚Π°ΠΊ ΡΠ΄Π΅Π»Π°Ρ‚ΡŒ с ΠΏΠΎΠΌΠΎΡ‰ΡŒΡŽ ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½Ρ‹Ρ… сСтСй Π½Π΅ получится, Ρ‚Π°ΠΊ ΠΊΠ°ΠΊ всС POD-Ρ‹ ΠΏΠΎ ΡƒΠΌΠΎΠ»Ρ‡Π°Π½ΠΈΡŽ ΠΌΠΎΠ³ΡƒΡ‚ Π΄ΠΎΡΡ‚ΡƒΡ‡Π°Ρ‚ΡŒΡΡ Π΄Ρ€ΡƒΠ³ Π΄ΠΎ Π΄Ρ€ΡƒΠ³Π°.

NetworkPolicy - инструмСнт для Π΄Π΅ΠΊΠ»Π°Ρ€Π°Ρ‚ΠΈΠ²Π½ΠΎΠ³ΠΎ описания ΠΏΠΎΡ‚ΠΎΠΊΠΎΠ² Ρ‚Ρ€Π°Ρ„ΠΈΠΊΠ°. НС всС сСтСвыС ΠΏΠ»Π°Π³ΠΈΠ½Ρ‹ ΠΏΠΎΠ΄Π΄Π΅Ρ€ΠΆΠΈΠ²Π°ΡŽΡ‚ ΠΏΠΎΠ»ΠΈΡ‚ΠΈΠΊΠΈ сСти. Π’ частности, Ρƒ GKE эта функция ΠΏΠΎΠΊΠ° Π² Beta-тСстС ΠΈ для Π΅Ρ‘ Ρ€Π°Π±ΠΎΡ‚Ρ‹ ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½ΠΎ Π±ΡƒΠ΄Π΅Ρ‚ Π²ΠΊΠ»ΡŽΡ‡Π΅Π½ сСтСвой ΠΏΠ»Π°Π³ΠΈΠ½ Calico (вмСсто Kubenet).

Π”Π°Π²Π°ΠΉΡ‚Π΅ Π΅Π΅ протСструСм. Наша Π·Π°Π΄Π°Ρ‡Π° - ΠΎΠ³Ρ€Π°Π½ΠΈΡ‡ΠΈΡ‚ΡŒ Ρ‚Ρ€Π°Ρ„ΠΈΠΊ, ΠΏΠΎΡΡ‚ΡƒΠΏΠ°ΡŽΡ‰ΠΈΠΉ Π½Π° mongodb ΠΎΡ‚ΠΎΠ²ΡΡŽΠ΄Ρƒ, ΠΊΡ€ΠΎΠΌΠ΅ сСрвисов post ΠΈ comment.

НайдСм имя кластСра:

$ gcloud beta container clusters list

output

NAME                LOCATION       MASTER_VERSION  MASTER_IP     MACHINE_TYPE  NODE_VERSION    NUM_NODES  STATUS
standard-cluster-1  us-central1-a  1.10.11-gke.1   35.202.73.52  g1-small      1.10.9-gke.5 *  2          RUNNING

Π’ΠΊΠ»ΡŽΡ‡ΠΈΠΌ network-policy для GKE:

gcloud beta container clusters update standard-cluster-1 --zone=us-central1-a --update-addons=NetworkPolicy=ENABLED
gcloud beta container clusters update standard-cluster-1 --zone=us-central1-a  --enable-network-policy

Боздадим network policy для mongo:

mongo-network-policy.yml

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-db-traffic
  labels:
    app: reddit
spec:
  podSelector:    # Π²Ρ‹Π±ΠΈΡ€Π°Π΅ΠΌ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚Ρ‹ ΠΏΠΎΠ»ΠΈΡ‚ΠΈΠΊΠΈ (POD'Ρ‹ с mongodb)
    matchLabels:
      app: reddit
      component: mongo
  policyTypes:    # Π±Π»ΠΎΠΊ Π·Π°ΠΏΡ€Π΅Ρ‰Π°ΡŽΡ‰ΠΈΡ… Π½Π°ΠΏΡ€Π°Π²Π»Π΅Π½ΠΈΠΉ: Π—Π°ΠΏΡ€Π΅Ρ‰Π°Π΅ΠΌ всС входящиС ΠΏΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΡ
  - Ingress       # Π˜ΡΡ…ΠΎΠ΄ΡΡ‰ΠΈΠ΅ Ρ€Π°Π·Ρ€Π΅ΡˆΠ΅Π½Ρ‹
  ingress:        # Π±Π»ΠΎΠΊ Ρ€Π°Π·Ρ€Π΅ΡˆΠ°ΡŽΡ‰ΠΈΡ… Π½Π°ΠΏΡ€Π°Π²Π»Π΅Π½ΠΈΠΉ
  - from:         # (Π±Π΅Π»Ρ‹ΠΉ список)
    - podSelector:    
        matchLabels:
          app: reddit         # Π Π°Π·Ρ€Π΅ΡˆΠ°Π΅ΠΌ всС входящиС ΠΏΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΡ ΠΎΡ‚
          component: comment  # POD-ов с label-ами comment

ΠŸΡ€ΠΈΠΌΠ΅Π½ΡΠ΅ΠΌ ΠΏΠΎΠ»ΠΈΡ‚ΠΈΠΊΡƒ: $ kubectl apply -f mongo-network-policy.yml -n dev

Для доступа post-сСрвиса Π² Π±Π°Π·Ρƒ Π΄Π°Π½Π½Ρ‹Ρ… Π΄ΠΎΠ±Π°Π²ΠΈΠΌ:

mongo-network-policy.yml
...
- podSelector:
    matchLabels:
      app: reddit
      component: post
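
УбСдимся, Ρ‡Ρ‚ΠΎ ΠΏΠΎΠ»ΠΈΡ‚ΠΈΠΊΠ° ΠΏΡ€ΠΈΠΌΠ΅Π½ΠΈΠ»Π°ΡΡŒ ΠΈ Π² Π±Π΅Π»ΠΎΠΌ спискС Ρ‚Π΅ΠΏΠ΅Ρ€ΡŒ ΠΎΠ±Π° podSelector'Π° (comment ΠΈ post); ΠΏΡ€ΠΈΠΌΠ΅Ρ€ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠΈ:

$ kubectl describe networkpolicy deny-db-traffic -n dev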
  • Π₯Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π΅ для Π±Π°Π· Π΄Π°Π½Π½Ρ‹Ρ…

Основной Stateful-сСрвис Π² нашСм ΠΏΡ€ΠΈΠ»ΠΎΠΆΠ΅Π½ΠΈΠΈ - это Π±Π°Π·Π° Π΄Π°Π½Π½Ρ‹Ρ… MongoDB. Π’ Π½Π°ΡΡ‚ΠΎΡΡ‰ΠΈΠΉ ΠΌΠΎΠΌΠ΅Π½Ρ‚ ΠΎΠ½Π° запускаСтся Π² Π²ΠΈΠ΄Π΅ Deployment ΠΈ Ρ…Ρ€Π°Π½ΠΈΡ‚ Π΄Π°Π½Π½Ρ‹Π΅ Π² стандартных Docker Volume-Π°Ρ…. Π£ Ρ‚Π°ΠΊΠΎΠ³ΠΎ ΠΏΠΎΠ΄Ρ…ΠΎΠ΄Π° Π΅ΡΡ‚ΡŒ нСсколько ΠΏΡ€ΠΎΠ±Π»Π΅ΠΌ:

  • ΠΏΡ€ΠΈ ΡƒΠ΄Π°Π»Π΅Π½ΠΈΠΈ POD-Π° удаляСтся ΠΈ Volume
  • потСря Nod’ы с mongo Π³Ρ€ΠΎΠ·ΠΈΡ‚ ΠΏΠΎΡ‚Π΅Ρ€Π΅ΠΉ Π΄Π°Π½Π½Ρ‹Ρ…
  • запуск Π±Π°Π·Ρ‹ Π½Π° Π΄Ρ€ΡƒΠ³ΠΎΠΉ Π½ΠΎΠ΄Π΅ запускаСт Π½ΠΎΠ²Ρ‹ΠΉ экзСмпляр Π΄Π°Π½Π½Ρ‹Ρ…
mongo-deployment.yml

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mongo
...
    spec:
      containers:
      - image: mongo:3.2
        name: mongo
        volumeMounts:   # ΠΏΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π°Π΅ΠΌ Volume
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage # объявляСм Volume
        emptyDir: {}

БСйчас ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅Ρ‚ΡΡ Ρ‚ΠΈΠΏ Volume emptyDir. ΠŸΡ€ΠΈ создании ΠΏΠΎΠ΄Π° с Ρ‚Π°ΠΊΠΈΠΌ Ρ‚ΠΈΠΏΠΎΠΌ просто создаСтся пустой docker volume. ΠŸΡ€ΠΈ остановкС POD’a содСрТимоС emtpyDir удалится навсСгда. Π₯отя Π² ΠΎΠ±Ρ‰Π΅ΠΌ случаС ΠΏΠ°Π΄Π΅Π½ΠΈΠ΅ POD’a Π½Π΅ Π²Ρ‹Π·Ρ‹Π²Π°Π΅Ρ‚ удалСния Volume’a. ВмСсто Ρ‚ΠΎΠ³ΠΎ, Ρ‡Ρ‚ΠΎΠ±Ρ‹ Ρ…Ρ€Π°Π½ΠΈΡ‚ΡŒ Π΄Π°Π½Π½Ρ‹Π΅ локально Π½Π° Π½ΠΎΠ΄Π΅, ΠΈΠΌΠ΅Π΅Ρ‚ смысл ΠΏΠΎΠ΄ΠΊΠ»ΡŽΡ‡ΠΈΡ‚ΡŒ ΡƒΠ΄Π°Π»Π΅Π½Π½ΠΎΠ΅ Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π΅. Π’ нашСм случаС ΠΌΠΎΠΆΠ΅ΠΌ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ Volume gcePersistentDisk, ΠΊΠΎΡ‚ΠΎΡ€Ρ‹ΠΉ Π±ΡƒΠ΄Π΅Ρ‚ ΡΠΊΠ»Π°Π΄Ρ‹Π²Π°Ρ‚ΡŒ Π΄Π°Π½Π½Ρ‹Π΅ Π² Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π΅ GCE.

Боздадим диск в Google Cloud:

$ gcloud compute disks create --size=25GB --zone=us-central1-a reddit-mongo-disk
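
УбСдимся, Ρ‡Ρ‚ΠΎ диск создан (ΠΏΡ€ΠΈΠΌΠ΅Ρ€ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠΈ):

$ gcloud compute disks list --filter="name=reddit-mongo-disk"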

Π”ΠΎΠ±Π°Π²ΠΈΠΌ Π½ΠΎΠ²Ρ‹ΠΉ Volume POD-Ρƒ Π±Π°Π·Ρ‹:

mongo-deployment.yml

---
apiVersion: apps/v1beta1
kind: Deployment
    ...
    spec:
      containers:
      - image: mongo:3.2
        name: mongo
        volumeMounts:
        - name: mongo-gce-pd-storage
          mountPath: /data/db
      volumes:
      - name: mongo-gce-pd-storage # мСняСм Volume Π½Π° Π΄Ρ€ΡƒΠ³ΠΎΠΉ Ρ‚ΠΈΠΏ
        gcePersistentDisk:
          pdName: reddit-mongo-disk # имя диска Π² GCE
          fsType: ext4

ΠœΠΎΠ½Ρ‚ΠΈΡ€ΡƒΠ΅ΠΌ Π²Ρ‹Π΄Π΅Π»Π΅Π½Π½Ρ‹ΠΉ диск ΠΊ POD'Ρƒ mongo:

Scheme of mounting dedicated disk to mongo's pod

kubectl apply -f mongo-deployment.yml -n dev

ДоТдСмся пСрСсоздания POD'a (ΠΌΠΎΠΆΠ΅Ρ‚ Π·Π°Π½ΡΡ‚ΡŒ Π΄ΠΎ 10 ΠΌΠΈΠ½ΡƒΡ‚).

Π‘ΠΎΠ·Π΄Π°Π΄ΠΈΠΌ пост:

New posts in reddit app using dedicated disk in mongodb

ΠŸΠ΅Ρ€Π΅ΡΠΎΠ·Π΄Π°Π΄ΠΈΠΌ mongo-deployment:

$ kubectl delete deploy mongo -n dev
$ kubectl apply -f mongo-deployment.yml -n dev

ΠŸΠΎΡΡ‚Ρ‹ останутся Π½Π° мСстС.

  • PersistentVolume

Π˜ΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅ΠΌΡ‹ΠΉ ΠΌΠ΅Ρ…Π°Π½ΠΈΠ·ΠΌ Volume-ΠΎΠ² ΠΌΠΎΠΆΠ½ΠΎ ΡΠ΄Π΅Π»Π°Ρ‚ΡŒ ΡƒΠ΄ΠΎΠ±Π½Π΅Π΅. ΠœΡ‹ ΠΌΠΎΠΆΠ΅ΠΌ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ Π½Π΅ Ρ†Π΅Π»Ρ‹ΠΉ Π²Ρ‹Π΄Π΅Π»Π΅Π½Π½Ρ‹ΠΉ диск для ΠΊΠ°ΠΆΠ΄ΠΎΠ³ΠΎ ΠΏΠΎΠ΄Π°, Π° Ρ†Π΅Π»Ρ‹ΠΉ рСсурс Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π°, ΠΎΠ±Ρ‰ΠΈΠΉ для всСго кластСра. Π’ΠΎΠ³Π΄Π° ΠΏΡ€ΠΈ запускС Stateful-Π·Π°Π΄Π°Ρ‡ Π² кластСрС, ΠΌΡ‹ смоТСм Π·Π°ΠΏΡ€ΠΎΡΠΈΡ‚ΡŒ Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π΅ Π² Π²ΠΈΠ΄Π΅ Ρ‚Π°ΠΊΠΎΠ³ΠΎ ΠΆΠ΅ рСсурса, ΠΊΠ°ΠΊ CPU ΠΈΠ»ΠΈ опСративная ΠΏΠ°ΠΌΡΡ‚ΡŒ. Для этого Π±ΡƒΠ΄Π΅ΠΌ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ ΠΌΠ΅Ρ…Π°Π½ΠΈΠ·ΠΌ PersistentVolume.

ОписаниС PersistentVolume:

mongo-volume.yml

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: reddit-mongo-disk # Имя PersistentVolume'а
spec:
  capacity:
    storage: 25Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    fsType: "ext4" 
    pdName: "reddit-mongo-disk" # Имя диска в GCE

Π”ΠΎΠ±Π°Π²ΠΈΠΌ PersistentVolume Π² кластСр: $ kubectl apply -f mongo-volume.yml -n dev
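
ΠŸΡ€ΠΎΠ²Π΅Ρ€ΠΈΠΌ, Ρ‡Ρ‚ΠΎ PV появился Π² кластСрС ΠΈ ΠΏΠΎΠΊΠ° находится Π² статусС Available (ΠΏΡ€ΠΈΠΌΠ΅Ρ€ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠΈ):

$ kubectl get persistentvolume reddit-mongo-disk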

ΠœΡ‹ создали PersistentVolume Π² Π²ΠΈΠ΄Π΅ диска Π² GCP:

Persistent Volume as a disk in GCP

  • PersistentVolumeClaim

ΠœΡ‹ создали рСсурс дискового Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π°, распространСнный Π½Π° вСсь кластСр, Π² Π²ΠΈΠ΄Π΅ PersistentVolume. Π§Ρ‚ΠΎΠ±Ρ‹ Π²Ρ‹Π΄Π΅Π»ΠΈΡ‚ΡŒ ΠΏΡ€ΠΈΠ»ΠΎΠΆΠ΅Π½ΠΈΡŽ Ρ‡Π°ΡΡ‚ΡŒ Ρ‚Π°ΠΊΠΎΠ³ΠΎ рСсурса - Π½ΡƒΠΆΠ½ΠΎ ΡΠΎΠ·Π΄Π°Ρ‚ΡŒ запрос Π½Π° Π²Ρ‹Π΄Π°Ρ‡Ρƒ - PersistentVolumeClaim. Claim - это ΠΈΠΌΠ΅Π½Π½ΠΎ запрос, Π° Π½Π΅ само Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π΅.

Π‘ ΠΏΠΎΠΌΠΎΡ‰ΡŒΡŽ запроса ΠΌΠΎΠΆΠ½ΠΎ Π²Ρ‹Π΄Π΅Π»ΠΈΡ‚ΡŒ мСсто ΠΊΠ°ΠΊ ΠΈΠ· ΠΊΠΎΠ½ΠΊΡ€Π΅Ρ‚Π½ΠΎΠ³ΠΎ PersistentVolume (Ρ‚ΠΎΠ³Π΄Π° ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Ρ‹ accessModes ΠΈ StorageClass Π΄ΠΎΠ»ΠΆΠ½Ρ‹ ΡΠΎΠΎΡ‚Π²Π΅Ρ‚ΡΡ‚Π²ΠΎΠ²Π°Ρ‚ΡŒ, Π° мСста Π΄ΠΎΠ»ΠΆΠ½ΠΎ Ρ…Π²Π°Ρ‚Π°Ρ‚ΡŒ), Ρ‚Π°ΠΊ ΠΈ просто ΡΠΎΠ·Π΄Π°Ρ‚ΡŒ ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½Ρ‹ΠΉ PersistentVolume ΠΏΠΎΠ΄ ΠΊΠΎΠ½ΠΊΡ€Π΅Ρ‚Π½Ρ‹ΠΉ запрос.

Боздадим описаниС PersistentVolumeClaim (PVC):

mongo-claim.yml

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-pvc  # Имя PersistentVolumeClaim'Π°
spec:
  accessModes:
    - ReadWriteOnce # accessMode Ρƒ PVC ΠΈ Ρƒ PV Π΄ΠΎΠ»ΠΆΠ΅Π½ ΡΠΎΠ²ΠΏΠ°Π΄Π°Ρ‚ΡŒ
  resources:
    requests:
      storage: 15Gi

ΠŸΡ€ΠΈΠΌΠ΅Π½ΠΈΠΌ: kubectl apply -f mongo-claim.yml -n dev
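
ΠŸΡ€ΠΎΠ²Π΅Ρ€ΠΈΠΌ статус созданного Claim'Π° (ΠΏΡ€ΠΈΠΌΠ΅Ρ€ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠΈ):

$ kubectl get persistentvolumeclaim mongo-pvc -n dev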

ΠœΡ‹ Π²Ρ‹Π΄Π΅Π»ΠΈΠ»ΠΈ мСсто Π² PV ΠΏΠΎ запросу для нашСй Π±Π°Π·Ρ‹. ΠžΠ΄Π½ΠΎΠ²Ρ€Π΅ΠΌΠ΅Π½Π½ΠΎ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ ΠΎΠ΄ΠΈΠ½ PV ΠΌΠΎΠΆΠ½ΠΎ Ρ‚ΠΎΠ»ΡŒΠΊΠΎ ΠΏΠΎ ΠΎΠ΄Π½ΠΎΠΌΡƒ Claim’у.

Persistent Volume Claim inside Persistent Volume

Если Claim Π½Π΅ Π½Π°ΠΉΠ΄Π΅Ρ‚ подходящСго ΠΏΠΎ Π·Π°Π΄Π°Π½Π½Ρ‹ΠΌ ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Π°ΠΌ PV Π²Π½ΡƒΡ‚Ρ€ΠΈ кластСра (Π»ΠΈΠ±ΠΎ Ρ‚ΠΎΡ‚ Π±ΡƒΠ΄Π΅Ρ‚ занят Π΄Ρ€ΡƒΠ³ΠΈΠΌ Claim’ом), Ρ‚ΠΎ ΠΎΠ½ сам создаст Π½ΡƒΠΆΠ½Ρ‹ΠΉ Π΅ΠΌΡƒ PV, воспользовавшись стандартным StorageClass.

$ kubectl describe storageclass standard -n dev 

output

Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/gce-pd
Parameters:            type=pd-standard
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Π’ нашСм случаС это ΠΎΠ±Ρ‹Ρ‡Π½Ρ‹ΠΉ ΠΌΠ΅Π΄Π»Π΅Π½Π½Ρ‹ΠΉ Google Cloud Persistent Disk: Persistent Volume Claim created standard (slow hdd) volume

ΠŸΠΎΠ΄ΠΊΠ»ΡŽΡ‡ΠΈΠΌ PVC ΠΊ нашим Pod'Π°ΠΌ:

mongo-deployment.yml 

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mongo
...
    spec:
      containers:
      - image: mongo:3.2
        name: mongo
        volumeMounts:
        - name: mongo-gce-pd-storage
          mountPath: /data/db
      volumes:
      - name: mongo-gce-pd-storage # имя Volume'Π°; PVC подключаСтся Π½ΠΈΠΆΠ΅ Ρ‡Π΅Ρ€Π΅Π· claimName
        persistentVolumeClaim:
          claimName: mongo-pvc

ΠŸΡ€ΠΈΠΌΠ΅Π½ΠΈΠΌ: $ kubectl apply -f mongo-deployment.yml -n dev

ΠœΠΎΠ½Ρ‚ΠΈΡ€ΡƒΠ΅ΠΌ Π²Ρ‹Π΄Π΅Π»Π΅Π½Π½ΠΎΠ΅ ΠΏΠΎ PVC Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π΅ ΠΊ POD’у mongo: Mounting PVC dedicated storage to mongo's POD

  • ДинамичСскоС Π²Ρ‹Π΄Π΅Π»Π΅Π½ΠΈΠ΅ Volume'ΠΎΠ²

Π‘ΠΎΠ·Π΄Π°Π² PersistentVolume ΠΌΡ‹ ΠΎΡ‚Π΄Π΅Π»ΠΈΠ»ΠΈ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ "Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π°" ΠΎΡ‚ Π½Π°ΡˆΠΈΡ… Service'ΠΎΠ² ΠΈ Pod'ΠΎΠ². Π’Π΅ΠΏΠ΅Ρ€ΡŒ ΠΌΡ‹ ΠΌΠΎΠΆΠ΅ΠΌ Π΅Π³ΠΎ ΠΏΡ€ΠΈ нСобходимости ΠΏΠ΅Ρ€Π΅ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ.

Но Π½Π°ΠΌ Π³ΠΎΡ€Π°Π·Π΄ΠΎ интСрСснСС ΡΠΎΠ·Π΄Π°Π²Π°Ρ‚ΡŒ Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π° ΠΏΡ€ΠΈ нСобходимости ΠΈ Π² автоматичСском Ρ€Π΅ΠΆΠΈΠΌΠ΅. Π’ этом Π½Π°ΠΌ ΠΏΠΎΠΌΠΎΠ³ΡƒΡ‚ StorageClass’ы. Они ΠΎΠΏΠΈΡΡ‹Π²Π°ΡŽΡ‚ Π³Π΄Π΅ (ΠΊΠ°ΠΊΠΎΠΉ ΠΏΡ€ΠΎΠ²Π°ΠΉΠ΄Π΅Ρ€) ΠΈ ΠΊΠ°ΠΊΠΈΠ΅ Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π° ΡΠΎΠ·Π΄Π°ΡŽΡ‚ΡΡ.

CΠΎΠ·Π΄Π°Π΄ΠΈΠΌ StorageClass Fast Ρ‚Π°ΠΊ, Ρ‡Ρ‚ΠΎΠ±Ρ‹ ΠΌΠΎΠ½Ρ‚ΠΈΡ€ΠΎΠ²Π°Π»ΠΈΡΡŒ SSD-диски для Ρ€Π°Π±ΠΎΡ‚Ρ‹ нашСго Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π°:

storage-fast.yml

---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast # Имя StorageClass'а
provisioner: kubernetes.io/gce-pd # ΠŸΡ€ΠΎΠ²Π°ΠΉΠ΄Π΅Ρ€ Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π°
parameters:
  type: pd-ssd # Π’ΠΈΠΏ прСдоставляСмого Ρ…Ρ€Π°Π½ΠΈΠ»ΠΈΡ‰Π°

Π”ΠΎΠ±Π°Π²ΠΈΠΌ StorageClass Π² кластСр: $ kubectl apply -f storage-fast.yml -n dev
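
Бписок доступных StorageClass'ΠΎΠ² ΠΌΠΎΠΆΠ½ΠΎ ΠΏΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ‚Π°ΠΊ (ΠΏΡ€ΠΈΠΌΠ΅Ρ€ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠΈ):

$ kubectl get storageclass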

  • PVC + StorageClass

Боздадим описаниС PersistentVolumeClaim:

mongo-claim-dynamic.yml 

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-pvc-dynamic
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast # ВмСсто ссылки Π½Π° созданный диск, Ρ‚Π΅ΠΏΠ΅Ρ€ΡŒ 
  resources:               # ΠΌΡ‹ ссылаСмся Π½Π° StorageClass
    requests:
      storage: 10Gi

Π”ΠΎΠ±Π°Π²ΠΈΠΌ PersistentVolumeClaim Π² кластСр:

$ kubectl apply -f mongo-claim-dynamic.yml -n dev

ΠŸΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΠ΅ динамичСского PVC:

mongo-deployment.yml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mongo
...
spec:
      containers:
      - image: mongo:3.2
        name: mongo
        volumeMounts:
        - name: mongo-gce-pd-storage
          mountPath: /data/db
      volumes:
      - name: mongo-gce-pd-storage
        persistentVolumeClaim:
          claimName: mongo-pvc-dynamic #Обновим PersistentVolumeClaim

Обновим описаниС нашСго Deployment'а: $ kubectl apply -f mongo-deployment.yml -n dev

Бписок ΠΏΠΎΠ»ΡƒΡ‡Π΅Π½Π½Ρ‹Ρ… PersistentVolume'ΠΎΠ²:

$ kubectl get persistentvolume -n dev

output

NAME                     CAPACITY  ACCESS MODES  RECLAIM POLICY   STATUS      CLAIM                   STORAGECLASS  AGE
pvc-4aa55cd3-2256-1..a   15Gi      RWO           Delete           Bound       dev/mongo-pvc           standard      23m
pvc-dcb0edd0-2258-1..a   10Gi      RWO           Delete           Bound       dev/mongo-pvc-dynamic   fast          5m
reddit-mongo-disk        25Gi      RWO           Retain           Available                                         28m
  • Status - статус PV ΠΏΠΎ ΠΎΡ‚Π½ΠΎΡˆΠ΅Π½ΠΈΡŽ ΠΊ Pod'Π°ΠΌ ΠΈ Claim'Π°ΠΌ (Bound - связанный, Availible - доступный)
  • Claim - ΠΊ ΠΊΠ°ΠΊΠΎΠΌΡƒ Claim'Ρƒ привязан Π΄Π°Π½Π½Ρ‹ΠΉ PV
  • StorageClass - StorageClass Π΄Π°Π½Π½ΠΎΠ³ΠΎ PV

Connecting dynamic PVC

Homework 22 (kubernetes-2)

Build Status

  • Для локальной Ρ€Π°Π·Ρ€Π°Π±ΠΎΡ‚ΠΊΠΈ Π½Π΅ΠΎΠ±Ρ…ΠΎΠ΄ΠΈΠΌΠΎ:
  • kubectl
  • дирСктория ~/.kube
  • minikube:
brew cask install minikube

ΠΈΠ»ΠΈ

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.27.0/minikube-darwin-amd64 \
  && chmod +x minikube && sudo mv minikube /usr/local/bin/

Для OS X понадобится Π³ΠΈΠΏΠ΅Ρ€Π²ΠΈΠ·ΠΎΡ€ xhyve driver, VirtualBox, ΠΈΠ»ΠΈ VMware Fusion.

  • Запуск Minicube-кластСра: minikube start

Если Π½ΡƒΠΆΠ½Π° конкрСтная вСрсия kubernetes, слСдуСт ΡƒΠΊΠ°Π·Ρ‹Π²Π°Ρ‚ΡŒ Ρ„Π»Π°Π³ --kubernetes-version <version> (v1.8.0)

По-ΡƒΠΌΠΎΠ»Ρ‡Π°Π½ΠΈΡŽ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅Ρ‚ΡΡ VirtualBox. Если ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅Ρ‚ΡΡ Π΄Ρ€ΡƒΠ³ΠΎΠΉ Π³ΠΈΠΏΠ΅Ρ€Π²ΠΈΠ·ΠΎΡ€, Ρ‚ΠΎ Π½Π΅ΠΎΠ±Ρ…ΠΎΠ΄ΠΈΠΌ Ρ„Π»Π°Π³ --vm-driver=<hypervisor

  • Minikube-кластСр Ρ€Π°Π·Π²Π΅Ρ€Π½ΡƒΡ‚. АвтоматичСски Π±Ρ‹Π» настроСн ΠΊΠΎΠ½Ρ„ΠΈΠ³ kubectl.

ΠŸΡ€ΠΎΠ²Π΅Ρ€ΠΈΠΌ: kubectl get nodes

output:

NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   25s   v1.13.2
  • ΠœΠ°Π½ΠΈΡ„Π΅ΡΡ‚ kubernetes Π² Ρ„ΠΎΡ€ΠΌΠ°Ρ‚Π΅ yml:
~/.kube/config

apiVersion: v1
clusters: ## список кластСров
- cluster:
    certificate-authority: ~/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts: ## список контСкстов
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users: ## список ΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚Π΅Π»Π΅ΠΉ
- name: minikube
  user:
    client-certificate: ~/.minikube/client.crt
    client-key: ~/.minikube/client.key
  • ΠžΠ±Ρ‹Ρ‡Π½Ρ‹ΠΉ порядок конфигурирования kubectl:
  1. Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ cluster'a:
$ kubectl config set-cluster … cluster_name
  2. Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ Π΄Π°Π½Π½Ρ‹Ρ… ΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚Π΅Π»Ρ (credentials):
$ kubectl config set-credentials … user_name
  3. Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ контСкста:
$ kubectl config set-context context_name \
  --cluster=cluster_name \
  --user=user_name
  4. ИспользованиС контСкста:
$ kubectl config use-context context_name

Π’Π°ΠΊΠΈΠΌ ΠΎΠ±Ρ€Π°Π·ΠΎΠΌ, kubectl конфигурируСтся для ΠΏΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΡ ΠΊ Ρ€Π°Π·Π½Ρ‹ΠΌ кластСрам, ΠΏΠΎΠ΄ Ρ€Π°Π·Π½Ρ‹ΠΌΠΈ ΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚Π΅Π»ΡΠΌΠΈ.
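
НапримСр, для гипотСтичСского кластСра my-cluster (всС ΠΈΠΌΠ΅Π½Π°, адрСса ΠΈ ΠΏΡƒΡ‚ΠΈ ΠΊ сСртификатам здСсь условныС) ΠΏΠΎΡΠ»Π΅Π΄ΠΎΠ²Π°Ρ‚Π΅Π»ΡŒΠ½ΠΎΡΡ‚ΡŒ ΠΊΠΎΠΌΠ°Π½Π΄ ΠΌΠΎΠ³Π»Π° Π±Ρ‹ Π²Ρ‹Π³Π»ΡΠ΄Π΅Ρ‚ΡŒ Ρ‚Π°ΠΊ:

$ kubectl config set-cluster my-cluster \
    --server=https://203.0.113.10:6443 \
    --certificate-authority=ca.pem
$ kubectl config set-credentials my-user \
    --client-certificate=my-user.crt \
    --client-key=my-user.key
$ kubectl config set-context my-context --cluster=my-cluster --user=my-user
$ kubectl config use-context my-context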

Π’Π΅ΠΊΡƒΡ‰ΠΈΠΉ контСкст: $ kubectl config current-context

output

minikube

Бписок всСх контСкстов: $ kubectl config get-contexts

  • ΠžΡΠ½ΠΎΠ²Π½Ρ‹Π΅ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚Ρ‹ - это рСсурсы Deployment

ΠžΡΠ½ΠΎΠ²Π½Ρ‹Π΅ Π·Π°Π΄Π°Ρ‡ΠΈ Deployment:

  1. Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ ReplicaSet (слСдит, Ρ‡Ρ‚ΠΎΠ±Ρ‹ число Π·Π°ΠΏΡƒΡ‰Π΅Π½Π½Ρ‹Ρ… Pod-ΠΎΠ² соотвСтствовало описанному)
  2. Π’Π΅Π΄Π΅Π½ΠΈΠ΅ истории вСрсий Π·Π°ΠΏΡƒΡ‰Π΅Π½Π½Ρ‹Ρ… Pod-ΠΎΠ² (для Ρ€Π°Π·Π»ΠΈΡ‡Π½Ρ‹Ρ… стратСгий дСплоя, для возмоТностСй ΠΎΡ‚ΠΊΠ°Ρ‚Π°)
  3. ОписаниС процСсса дСплоя (стратСгия, ΠΏΠ°Ρ€Π°ΠΌΠ΅Ρ‚Ρ€Ρ‹ стратСгий)
  • Π€Π°ΠΉΠ» kubernetes/reddit/ui-deployment.yml:
kubernetes/reddit/ui-deployment.yml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: ui
  labels:
    app: reddit
    component: ui
spec:
  replicas: 3
  selector:         ## selector описываСт, ΠΊΠ°ΠΊ Π΅ΠΌΡƒ ΠΎΡ‚ΡΠ»Π΅ΠΆΠΈΠ²Π°Ρ‚ΡŒ POD-Ρ‹.
    matchLabels:    ## Π’ Π΄Π°Π½Π½ΠΎΠΌ случаС - ΠΊΠΎΠ½Ρ‚Ρ€ΠΎΠ»Π»Π΅Ρ€ Π±ΡƒΠ΄Π΅Ρ‚ ΡΡ‡ΠΈΡ‚Π°Ρ‚ΡŒ POD-Ρ‹ с ΠΌΠ΅Ρ‚ΠΊΠ°ΠΌΠΈ:
      app: reddit   ## app=reddit ΠΈ component=ui
      component: ui
  template:
    metadata:
      name: ui-pod
      labels:       ## ΠŸΠΎΡΡ‚ΠΎΠΌΡƒ Π²Π°ΠΆΠ½ΠΎ Π² описании POD-Π° Π·Π°Π΄Π°Ρ‚ΡŒ
        app: reddit ## Π½ΡƒΠΆΠ½Ρ‹Π΅ ΠΌΠ΅Ρ‚ΠΊΠΈ (labels) 
        component: ui
    spec:
      containers:
      - image: ozyab/ui
        name: ui
  • Запуск Π² Minikube ui-ΠΊΠΎΠΌΠΏΠΎΠ½Π΅Π½Ρ‚Ρ‹:
$ kubectl apply -f ui-deployment.yml

output

deployment "ui" created

ΠŸΡ€ΠΎΠ²Π΅Ρ€ΠΊΠ° Π·Π°ΠΏΡƒΡ‰Π΅Π½Π½Ρ‹Ρ… deployment'ΠΎΠ²:

$ kubectl get deployment 

output

NAME   READY   UP-TO-DATE   AVAILABLE   AGE
ui     3/3     3            3           2m27s

kubectl apply -f <filename> ΠΌΠΎΠΆΠ΅Ρ‚ ΠΏΡ€ΠΈΠ½ΠΈΠΌΠ°Ρ‚ΡŒ Π½Π΅ Ρ‚ΠΎΠ»ΡŒΠΊΠΎ ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½Ρ‹ΠΉ Ρ„Π°ΠΉΠ», Π½ΠΎ ΠΈ ΠΏΠ°ΠΏΠΊΡƒ с Π½ΠΈΠΌΠΈ. НапримСр:

$ kubectl apply -f ./kubernetes/reddit 
  • Π˜ΡΠΏΠΎΠ»ΡŒΠ·ΡƒΡ selector, Π½Π°ΠΉΠ΄Π΅ΠΌ POD-Ρ‹ прилоТСния:
$ kubectl get pods --selector component=ui 

output

NAME                  READY   STATUS    RESTARTS   AGE
ui-84994b4554-5m4cb   1/1     Running   0          3m20s
ui-84994b4554-7gnqf   1/1     Running   0          3m25s
ui-84994b4554-zhfr6   1/1     Running   0          4m48s

ΠŸΡ€ΠΎΠ±Ρ€ΠΎΡ ΠΏΠΎΡ€Ρ‚Π° Π½Π° pod:

$ kubectl port-forward <pod-name> 8080:9292

output

Forwarding from 127.0.0.1:8080 -> 9292
Forwarding from [::1]:8080 -> 9292

ПослС этого ΠΌΠΎΠΆΠ½ΠΎ ΠΏΠ΅Ρ€Π΅ΠΉΡ‚ΠΈ ΠΏΠΎ адрСсу http://127.0.0.1:8080 Microservices Reddit in ui container
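
Π’ΠΎ ΠΆΠ΅ самоС ΠΌΠΎΠΆΠ½ΠΎ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΈΡ‚ΡŒ ΠΈ Π±Π΅Π· Π±Ρ€Π°ΡƒΠ·Π΅Ρ€Π° (ΠΏΡ€ΠΈΠΌΠ΅Ρ€ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠΈ Ρ‡Π΅Ρ€Π΅Π· curl):

$ curl -I http://127.0.0.1:8080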

  • Π€Π°ΠΉΠ» kubernetes/reddit/comment-deployment.yml:
kubernetes/reddit/comment-deployment.yml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: comment
  labels:
    app: reddit
    component: comment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reddit
      component: comment
  template:
    metadata:
      name: comment
      labels:
        app: reddit
        component: comment
    spec:
      containers:
      - image: ozyab/comment # ΠœΠ΅Π½ΡΠ΅Ρ‚ΡΡ Ρ‚ΠΎΠ»ΡŒΠΊΠΎ имя ΠΎΠ±Ρ€Π°Π·Π°
        name: comment
  • Запрос созданных ΠΏΠΎΠ΄ΠΎΠ²:
$ kubectl get pods --selector component=comment

Π’Ρ‹ΠΏΠΎΠ»Π½ΠΈΠ² проброс ΠΏΠΎΡ€Ρ‚Π° Π² pod, ΠΈ пСрСйдя ΠΏΠΎ адрСсу http://127.0.0.1:8080/healthcheck ΡƒΠ²ΠΈΠ΄ΠΈΠΌ:

Comment component healthcheck

  • Π€Π°ΠΉΠ» kubernetes/reddit/post-deployment.yml:
kubernetes/reddit/post-deployment.yml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: post
  labels:
    app: reddit
    component: post
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reddit
      component: post
  template:
    metadata:
      name: post-pod
      labels:
        app: reddit
        component: post
    spec:
      containers:
      - image: ozyab/post
        name: post

ΠŸΡ€ΠΈΠΌΠ΅Π½Π΅Π½ΠΈΠ΅ deployment: kubectl apply -f post-deployment.yml

Post component healthcheck

  • Π€Π°ΠΉΠ» kubernetes/reddit/mongo-deployment.yml:
kubernetes/reddit/mongo-deployment.yml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    app: reddit
    component: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reddit
      component: mongo
  template:
    metadata:
      name: mongo
      labels:
        app: reddit
        component: mongo
    spec:
      containers:
      - image: mongo:3.2
        name: mongo
        volumeMounts:   #Ρ‚ΠΎΡ‡ΠΊΠ° монтирования Π² ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π΅ (Π½Π΅ Π² POD-Π΅)
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:   #АссоциированныС с POD-ΠΎΠΌ Volume-Ρ‹
      - name: mongo-persistent-storage
        emptyDir: {}
  • Для связи ΠΊΠΎΠΌΠΏΠΎΠ½Π΅Π½Ρ‚ ΠΌΠ΅ΠΆΠ΄Ρƒ собой ΠΈ с внСшним ΠΌΠΈΡ€ΠΎΠΌ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅Ρ‚ΡΡ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ Service - абстракция, которая опрСдСляСт Π½Π°Π±ΠΎΡ€ POD-ΠΎΠ² (Endpoints) ΠΈ способ доступа ΠΊ Π½ΠΈΠΌ.

Для связи ui с post ΠΈ comment Π½ΡƒΠΆΠ½ΠΎ ΡΠΎΠ·Π΄Π°Ρ‚ΡŒ ΠΈΠΌ ΠΏΠΎ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚Ρƒ Service.

Π€Π°ΠΉΠ» kubernetes/reddit/comment-service.yml:

kubernetes/reddit/comment-service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: comment  # Π² DNS появится запись для comment
  labels:
    app: reddit
    component: comment
spec:
  ports:           # ΠŸΡ€ΠΈ ΠΎΠ±Ρ€Π°Ρ‰Π΅Π½ΠΈΠΈ Π½Π° адрСс comment:9292 ΠΈΠ·Π½ΡƒΡ‚Ρ€ΠΈ любого ΠΈΠ· POD-ΠΎΠ²
  - port: 9292      # Ρ‚Π΅ΠΊΡƒΡ‰Π΅Π³ΠΎ namespace нас ΠΏΠ΅Ρ€Π΅ΠΏΡ€Π°Π²ΠΈΡ‚ Π½Π° 9292-ΠΉ
    protocol: TCP    # ΠΏΠΎΡ€Ρ‚ ΠΎΠ΄Π½ΠΎΠ³ΠΎ ΠΈΠ· POD-ΠΎΠ² прилоТСния comment,
    targetPort: 9292  # Π²Ρ‹Π±Ρ€Π°Π½Π½Ρ‹Ρ… ΠΏΠΎ label-Π°ΠΌ
  selector:
    app: reddit
    component: comment

ПослС примСнСния comment-service.yml Π½Π°ΠΉΠ΄Π΅ΠΌ ΠΏΠΎ label'Π°ΠΌ ΡΠΎΠΎΡ‚Π²Π΅Ρ‚ΡΡ‚Π²ΡƒΡŽΡ‰ΠΈΠ΅ PODΡ‹:

$ kubectl describe service comment | grep Endpoints

output

Endpoints:   172.17.0.4:9292,172.17.0.6:9292,172.17.0.9:9292

Π’Ρ‹ΠΏΠΎΠ»Π½ΠΈΠΌ ΠΊΠΎΠΌΠ°Π½Π΄Ρƒ nslookup comment ΠΈΠ· ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π° post:

$ kubectl get pods --selector component=post
NAME                    READY   STATUS    RESTARTS   AGE
post-5c45f6d5c8-5dpx7   1/1     Running   0          17m
post-5c45f6d5c8-cb8fv   1/1     Running   0          17m
post-5c45f6d5c8-k9s5h   1/1     Running   0          17m
$ kubectl exec -ti post-5c45f6d5c8-5dpx7 nslookup comment
nslookup: can't resolve '(null)': Name does not resolve
Name:      comment
Address 1: 10.105.95.41 comment.default.svc.cluster.local

Π’ΠΈΠ΄ΠΈΠΌ, Ρ‡Ρ‚ΠΎ ΠΏΠΎΠ»ΡƒΡ‡Π΅Π½ ΠΎΡ‚Π²Π΅Ρ‚ ΠΎΡ‚ DNS.

  • Аналогичным ΠΎΠ±Ρ€Π°Π·ΠΎΠΌ Ρ€Π°Π·Π²Π΅Ρ€Π½Π΅ΠΌ service для post:
kubernetes/reddit/post-service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: post 
  labels:
    app: reddit
    component: post
spec:
  ports:
  - port: 9292
    protocol: TCP
    targetPort: 9292
  selector:
    app: reddit
    component: post
  • Post ΠΈ Comment Ρ‚Π°ΠΊΠΆΠ΅ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΡŽΡ‚ mongodb, ΡΠ»Π΅Π΄ΠΎΠ²Π°Ρ‚Π΅Π»ΡŒΠ½ΠΎ Π΅ΠΉ Ρ‚ΠΎΠΆΠ΅ Π½ΡƒΠΆΠ΅Π½ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ Service:
kubernetes/reddit/mongodb-service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: reddit
    component: mongo
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    app: reddit
    component: mongo

Π”Π΅ΠΏΠ»ΠΎΠΉ:

kubectl apply -f mongodb-service.yml
  • ΠŸΡ€ΠΈΠ»ΠΎΠΆΠ΅Π½ΠΈΠ΅ ΠΈΡ‰Π΅Ρ‚ адрСс comment_db, Π° Π½Π΅ mongodb. Аналогично ΠΈ сСрвис comment ΠΈΡ‰Π΅Ρ‚ post_db.

Π­Ρ‚ΠΈ адрСса Π·Π°Π΄Π°Π½Ρ‹ Π² ΠΈΡ… Dockerfile-Π°Ρ… Π² Π²ΠΈΠ΄Π΅ ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Ρ… окруТСния:

post/Dockerfile
…
ENV POST_DATABASE_HOST=post_db
comment/Dockerfile
…
ENV COMMENT_DATABASE_HOST=comment_db

Π‘ΠΎΠ·Π΄Π°Π΄ΠΈΠΌ сСрвис:

comment-mongodb-service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: comment-db
  labels:
    app: reddit
    component: mongo
    comment-db: "true" # ΠΌΠ΅Ρ‚ΠΊΠ°, Ρ‡Ρ‚ΠΎΠ±Ρ‹ Ρ€Π°Π·Π»ΠΈΡ‡Π°Ρ‚ΡŒ сСрвисы
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    app: reddit
    component: mongo
    comment-db: "true" # ΠΎΡ‚Π΄Π΅Π»ΡŒΠ½Ρ‹ΠΉ Π»Π΅ΠΉΠ±Π» для comment-db

Π€Π°ΠΉΠ» mongo-deployment.yml:

kubernetes/reddit/mongo-deployment.yml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
  ...
    comment-db: "true"
...
  template:
    metadata:
      name: mongo
      labels:
        ...
        comment-db: "true"

Π€Π°ΠΉΠ» comment-deployment.yml:

kubernetes/reddit/comment-deployment.yml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: comment
  labels:
    app: reddit
    component: comment
...
    spec:
      containers:
      - image: ozyab/comment
        name: comment
        env:
        - name: COMMENT_DATABASE_HOST
          value: comment-db

Π’Π°ΠΊΠΆΠ΅ Π½Π΅ΠΎΠ±Ρ…ΠΎΠ΄ΠΈΠΌΠΎ ΠΎΠ±Π½ΠΎΠ²ΠΈΡ‚ΡŒ Ρ„Π°ΠΉΠ» mongo-deployment.yml, Ρ‡Ρ‚ΠΎΠ±Ρ‹ Π½ΠΎΠ²Ρ‹ΠΉ Service смог Π½Π°ΠΉΡ‚ΠΈ Π½ΡƒΠΆΠ½Ρ‹ΠΉ POD:

kubernetes/reddit/mongo-deployment.yml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    ..
    comment-db: "true"
  template:
    metadata:
      name: mongo
      labels:
        ..
        comment-db: "true"
  • НСобходимо ΠΎΠ±Π΅ΡΠΏΠ΅Ρ‡ΠΈΡ‚ΡŒ доступ ΠΊ ui-сСрвису снаруТи. Для этого Π½Π°ΠΌ понадобится Service для UI-ΠΊΠΎΠΌΠΏΠΎΠ½Π΅Π½Ρ‚Ρ‹ ui-service.yml:
kubernetes/reddit/ui-service.yml
...
  spec:
    type: NodePort
    ports:
    - port: 9292
      nodePort: 32092 #ΠΌΠΎΠΆΠ½ΠΎ Π·Π°Π΄Π°Ρ‚ΡŒ свой ΠΏΠΎΡ€Ρ‚ ΠΈΠ· Π΄ΠΈΠ°ΠΏΠ°Π·ΠΎΠ½Π° 30000-32767

Π’ описании service:

  • NodePort - для доступа снаруТи кластСра
  • port - для доступа ΠΊ сСрвису ΠΈΠ·Π½ΡƒΡ‚Ρ€ΠΈ кластСра

Команда minikube service ui ΠΎΡ‚ΠΊΡ€ΠΎΠ΅Ρ‚ Π² Π±Ρ€Π°ΡƒΠ·Π΅Ρ€Π΅ страницу сСрвиса.

Бписок всСх сСрвисов с URL: minikube service list

Namespace

Namespace - это "Π²ΠΈΡ€Ρ‚ΡƒΠ°Π»ΡŒΠ½Ρ‹ΠΉ" кластСр Kubernetes Π²Π½ΡƒΡ‚Ρ€ΠΈ самого Kubernetes. Π’Π½ΡƒΡ‚Ρ€ΠΈ ΠΊΠ°ΠΆΠ΄ΠΎΠ³ΠΎ Ρ‚Π°ΠΊΠΎΠ³ΠΎ кластСра находятся свои ΠΎΠ±ΡŠΠ΅ΠΊΡ‚Ρ‹ (POD-Ρ‹, Service-Ρ‹, Deployment-Ρ‹ ΠΈ Ρ‚.Π΄.), ΠΊΡ€ΠΎΠΌΠ΅ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ΠΎΠ², ΠΎΠ±Ρ‰ΠΈΡ… Π½Π° всС namespace-Ρ‹ (nodes, ClusterRoles, PersistentVolumes).

Π’ Ρ€Π°Π·Π½Ρ‹Ρ… namespace-Π°Ρ… ΠΌΠΎΠ³ΡƒΡ‚ Π½Π°Ρ…ΠΎΠ΄ΠΈΡ‚ΡŒΡΡ ΠΎΠ±ΡŠΠ΅ΠΊΡ‚Ρ‹ с ΠΎΠ΄ΠΈΠ½Π°ΠΊΠΎΠ²Ρ‹ΠΌ ΠΈΠΌΠ΅Π½Π΅ΠΌ, Π½ΠΎ Π² Ρ€Π°ΠΌΠΊΠ°Ρ… ΠΎΠ΄Π½ΠΎΠ³ΠΎ namespace ΠΈΠΌΠ΅Π½Π° ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ΠΎΠ² Π΄ΠΎΠ»ΠΆΠ½Ρ‹ Π±Ρ‹Ρ‚ΡŒ ΡƒΠ½ΠΈΠΊΠ°Π»ΡŒΠ½Ρ‹.

ΠŸΡ€ΠΈ стартС Kubernetes кластСр ΡƒΠΆΠ΅ ΠΈΠΌΠ΅Π΅Ρ‚ 3 namespace:

  • default - для ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ΠΎΠ² для ΠΊΠΎΡ‚ΠΎΡ€Ρ‹Ρ… Π½Π΅ ΠΎΠΏΡ€Π΅Π΄Π΅Π»Π΅Π½ Π΄Ρ€ΡƒΠ³ΠΎΠΉ Namespace (Π² Π½Π΅ΠΌ ΠΌΡ‹ Ρ€Π°Π±ΠΎΡ‚Π°Π»ΠΈ всС это врСмя)
  • kube-system - для ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ΠΎΠ² созданных Kubernetes’ом ΠΈ для управлСния ΠΈΠΌ
  • kube-public - для ΠΎΠ±ΡŠΠ΅ΠΊΡ‚ΠΎΠ² ΠΊ ΠΊΠΎΡ‚ΠΎΡ€Ρ‹ΠΌ Π½ΡƒΠΆΠ΅Π½ доступ ΠΈΠ· любой Ρ‚ΠΎΡ‡ΠΊΠΈ кластСра Для Ρ‚ΠΎΠ³ΠΎ, Ρ‡Ρ‚ΠΎΠ±Ρ‹ Π²Ρ‹Π±Ρ€Π°Ρ‚ΡŒ ΠΊΠΎΠ½ΠΊΡ€Π΅Ρ‚Π½ΠΎΠ΅ пространство ΠΈΠΌΠ΅Π½, Π½ΡƒΠΆΠ½ΠΎ ΡƒΠΊΠ°Π·Π°Ρ‚ΡŒ Ρ„Π»Π°Π³ -n <namespace> ΠΈΠ»ΠΈ --namespace <namespace> ΠΏΡ€ΠΈ запускС kubectl
  • ΠžΡ‚Π΄Π΅Π»ΠΈΠΌ срСду для Ρ€Π°Π·Ρ€Π°Π±ΠΎΡ‚ΠΊΠΈ прилоТСния ΠΎΡ‚ всСго ΠΎΡΡ‚Π°Π»ΡŒΠ½ΠΎΠ³ΠΎ кластСра, для Ρ‡Π΅Π³ΠΎ создадим свой Namespace dev:
dev-namespace.yml: 

---
apiVersion: v1
kind: Namespace
metadata:
  name: dev

Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ namespace dev: $ kubectl apply -f dev-namespace.yml

  • Π”ΠΎΠ±Π°Π²ΠΈΠΌ ΠΈΠ½Ρ„Ρƒ ΠΎΠ± ΠΎΠΊΡ€ΡƒΠΆΠ΅Π½ΠΈΠΈ Π²Π½ΡƒΡ‚Ρ€ΡŒ ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π° UI:
kubernetes/reddit/ui-deployment.yml

---
apiVersion: apps/v1beta2
kind: Deployment
...
    spec:
      containers:
      ...
        env:
        - name: ENV #ИзвлСкаСм значСния ΠΈΠ· контСкста запуска
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

ПослС этого: $ kubectl apply -f ui-deployment.yml -n dev

Π Π°Π·Π²ΠΎΡ€Π°Ρ‡ΠΈΠ²Π°Π΅ΠΌ Kubernetes Π² GKE (Google Kubernetes Engine)

ΠŸΠ΅Ρ€Π΅Ρ…ΠΎΠ΄ΠΈΠΌ Π½Π° страницу Kubernetes Engine: https://console.cloud.google.com/kubernetes/list?project=${PROJECT_NAME} ΠΈ создаСм кластСр.

ΠšΠΎΠΌΠΏΠΎΠ½Π΅Π½Ρ‚Ρ‹ управлСния кластСром Π·Π°ΠΏΡƒΡΠΊΠ°ΡŽΡ‚ΡΡ Π² container engine ΠΈ ΡƒΠΏΡ€Π°Π²Π»ΡΡŽΡ‚ΡΡ Google:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
  • etcd Рабочая Π½Π°Π³Ρ€ΡƒΠ·ΠΊΠ° (собствСнныС POD-Ρ‹), Π°Π΄Π΄ΠΎΠ½Ρ‹, ΠΌΠΎΠ½ΠΈΡ‚ΠΎΡ€ΠΈΠ½Π³, Π»ΠΎΠ³ΠΈΡ€ΠΎΠ²Π°Π½ΠΈΠ΅ ΠΈ Ρ‚.Π΄. Π·Π°ΠΏΡƒΡΠΊΠ°ΡŽΡ‚ΡΡ Π½Π° Ρ€Π°Π±ΠΎΡ‡ΠΈΡ… Π½ΠΎΠ΄Π°Ρ…. Π Π°Π±ΠΎΡ‡ΠΈΠ΅ Π½ΠΎΠ΄Ρ‹ - стандартныС Π½ΠΎΠ΄Ρ‹ Google compute engine. Π˜Ρ… ΠΌΠΎΠΆΠ½ΠΎ ΡƒΠ²ΠΈΠ΄Π΅Ρ‚ΡŒ Π² спискС Π·Π°ΠΏΡƒΡ‰Π΅Π½Π½Ρ‹Ρ… ΡƒΠ·Π»ΠΎΠ².

ΠŸΠΎΠ΄ΠΊΠ»ΡŽΡ‡ΠΈΠΌΡΡ ΠΊ GKE для запуска нашСго прилоТСния. Для этого ΠΆΠΌΠ΅ΠΌ ΠŸΠΎΠ΄ΠΊΠ»ΡŽΡ‡ΠΈΡ‚ΡŒΡΡ Π½Π° страницС кластСров. Π‘ΡƒΠ΄Π΅Ρ‚ Π²Ρ‹Π΄Π°Π½Π° ΠΊΠΎΠΌΠ°Π½Π΄Π°

gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project ${PROJECT_NAME}

Π’ Ρ„Π°ΠΉΠ» ~/.kube/config Π±ΡƒΠ΄ΡƒΡ‚ Π΄ΠΎΠ±Π°Π²Π»Π΅Π½Ρ‹ user, cluster ΠΈ context для ΠΏΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΡ ΠΊ кластСру Π² GKE. Π’Π°ΠΊΠΆΠ΅ Ρ‚Π΅ΠΊΡƒΡ‰ΠΈΠΉ контСкст Π±ΡƒΠ΄Π΅Ρ‚ выставлСн для ΠΏΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΡ ΠΊ этому кластСру.

ΠŸΡ€ΠΎΠ²Π΅Ρ€ΠΈΠΌ Ρ‚Π΅ΠΊΡƒΡ‰ΠΈΠΉ контСкст: $ kubectl config current-context

output:

gke_keen-${PROJECT_NAME}_us-central1-a_standard-cluster-1

Π‘ΠΎΠ·Π΄Π°Π΄ΠΈΠΌ dev namespace: $ kubectl apply -f ./kubernetes/reddit/dev-namespace.yml

Π”Π΅ΠΏΠ»ΠΎΠΉ всСх ΠΏΡ€ΠΈΠ»ΠΎΠΆΠ΅Π½ΠΈΠΉ Π² namespace dev: $ kubectl apply -f ./kubernetes/reddit/ -n dev

ΠžΡ‚ΠΊΡ€Ρ‹Ρ‚ΠΈΠ΅ Π΄ΠΈΠ°ΠΏΠ°Π·ΠΎΠ½Π° kubernetes ΠΏΠΎΡ€Ρ‚ΠΎΠ² Π² firewall:

     gcloud compute firewall-rules create kubernetes-nodeports \
     --direction=INGRESS \
     --priority=1000 \
     --network=default \
     --action=ALLOW \
     --rules=tcp:30000-32767 \
     --source-ranges=0.0.0.0/0
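
ΠŸΡ€ΠΎΠ²Π΅Ρ€ΠΈΡ‚ΡŒ созданноС ΠΏΡ€Π°Π²ΠΈΠ»ΠΎ ΠΌΠΎΠΆΠ½ΠΎ Ρ‚Π°ΠΊ (ΠΏΡ€ΠΈΠΌΠ΅Ρ€ ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΊΠΈ):

$ gcloud compute firewall-rules describe kubernetes-nodeports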
  • НайдСм внСшний IP-адрСс любой Π½ΠΎΠ΄Ρ‹ ΠΈΠ· кластСра: $ kubectl get nodes -o wide
NAME                                                STATUS   ROLES    AGE   VERSION         EXTERNAL-IP
gke-standard-cluster-1-default-pool-2dd96181-qt7q   Ready    <none>   11h   v1.10.9-gke.5   XX.XXX.XX.XXX
gke-standard-cluster-1-default-pool-d3cd4782-7j9s   Ready    <none>   11h   v1.10.9-gke.5   XX.XXX.XX.XXX

ΠŸΠΎΡ€Ρ‚ ΠΏΡƒΠ±Π»ΠΈΠΊΠ°Ρ†ΠΈΠΈ сСрвиса ui: $ kubectl describe service ui -n dev | grep NodePort

Type:                     NodePort
NodePort:                 <unset>  32092/TCP

МоТно ΠΏΠ΅Ρ€Π΅ΠΉΡ‚ΠΈ Π½Π° любой ΠΈΠ· Π²Π½Π΅ΡˆΠ½ΠΈΡ… IP-адрСсов для открытия страницы http://XX.XXX.XX.XXX:32092

Reddit app deployed in GKE

Homework 21 (kubernetes-1)

Build Status

  • Creted new Deployment manifests in kubernetes/reddit folder:
comment-deployment.yml
mongo-deployment.yml
post-deployment.yml
ui-deployment.yml

This lab assumes you have access to the Google Cloud Platform. In this lab we use macOS.

Prerequisites

  • Install the Google Cloud SDK

Follow the Google Cloud SDK documentation to install and configure the gcloud command line utility.

Verify the Google Cloud SDK version is 218.0.0 or higher: gcloud version

  • Default Compute Region and Zone The easiest way to set default compute region: gcloud init.

Otherwise set a default compute region: gcloud config set compute/region us-west1.

Set a default compute zone: gcloud config set compute/zone us-west1-c.

Installing the Client Tools

  • Install CFSSL

The cfssl and cfssljson command line utilities will be used to provision a PKI Infrastructure and generate TLS certificates.

Install cfssl and cfssljson using the brew package manager: brew install cfssl.

  • Verification Installing
cfssl version
  • Install kubectl

The kubectl command line utility is used to interact with the Kubernetes API Server.

  • Download and install kubectl from the official release binaries:
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
  • Verify kubectl version 1.12.0 or higher is installed:
kubectl version --client

Provisioning Compute Resources

  • Virtual Private Cloud Network Create the kubernetes-the-hard-way custom VPC network:
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom

A subnet must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.

Create the kubernetes subnet in the kubernetes-the-hard-way VPC network:

gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24

The 10.240.0.0/24 IP address range can host up to 254 compute instances.
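
To double-check that the subnet was created with the expected range, you can describe it (an optional verification; the region is taken from your gcloud config):

gcloud compute networks subnets describe kubernetes \
  --region $(gcloud config get-value compute/region)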

  • Firewall

Create a firewall rule that allows internal communication across all protocols:

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 10.240.0.0/24,10.200.0.0/16

Create a firewall rule that allows external SSH, ICMP, and HTTPS:

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 0.0.0.0/0

List the firewall rules in the kubernetes-the-hard-way VPC network:

gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"

output

NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY  DISABLED
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp        False
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp                False
  • Kubernetes Public IP Address

Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:

gcloud compute addresses create kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region)

Verify the kubernetes-the-hard-way static IP address was created in your default compute region:

gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
  • Compute Instances The compute instances in this lab will be provisioned using Ubuntu Server 18.04, which has good support for the containerd container runtime. Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.

  • Kubernetes Controllers Create three compute instances which will host the Kubernetes control plane:

for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done
  • Kubernetes Workers Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The pod-cidr instance metadata will be used to expose pod subnet allocations to compute instances at runtime.

The Kubernetes cluster CIDR range is defined by the Controller Manager's --cluster-cidr flag. In this tutorial the cluster CIDR range will be set to 10.200.0.0/16, which supports 254 subnets.

Create three compute instances which will host the Kubernetes worker nodes:

for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done
  • Verification List the compute instances in your default compute zone:
gcloud compute instances list

output

NAME          ZONE            MACHINE_TYPE               PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
controller-0  europe-west4-a  n1-standard-1                           10.240.0.10  X.X.X.X         RUNNING
controller-1  europe-west4-a  n1-standard-1                           10.240.0.11  X.X.X.X         RUNNING
controller-2  europe-west4-a  n1-standard-1                           10.240.0.12  X.X.X.X         RUNNING
worker-0      europe-west4-a  n1-standard-1                           10.240.0.20  X.X.X.X         RUNNING
worker-1      europe-west4-a  n1-standard-1                           10.240.0.21  X.X.X.X         RUNNING
worker-2      europe-west4-a  n1-standard-1                           10.240.0.22  X.X.X.X         RUNNING
  • Configuring SSH Access SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as describe in the connecting to instances documentation.

Test SSH access to the controller-0 compute instances:

gcloud compute ssh controller-0

If this is your first time connecting to a compute instance SSH keys will be generated for you.

Provisioning a CA and Generating TLS Certificates

  • Certificate Authority

Generate the CA configuration file, certificate, and private key:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
  • Client and Server Certificates

Generate the admin client certificate and private key:

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
  • The Kubelet Client Certificates

Kubernetes uses a special-purpose authorization mode called Node Authorizer, that specifically authorizes API requests made by Kubelets.

Generate a certificate and private key for each Kubernetes worker node:

for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].networkIP)')

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done
  • The Controller Manager Client Certificate

Generate the kube-controller-manager client certificate and private key:

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

  • The Kube Proxy Client Certificate

Generate the kube-proxy client certificate and private key:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
  • The Scheduler Client Certificate

Generate the kube-scheduler client certificate and private key:

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
  • The Kubernetes API Server Certificate

The kubernetes-the-hard-way static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.

Generate the Kubernetes API Server certificate and private key:

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
  • The Service Account Key Pair

The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.

Generate the service-account certificate and private key:

cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
  • Distribute the Client and Server Certificates

Copy the appropriate certificates and private keys to each worker instance:

for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done

Copy the appropriate certificates and private keys to each controller instance:

for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done

Generating Kubernetes Configuration Files for Authentication

In this lab you will generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.

In this section you will generate kubeconfig files for the controller manager, kubelet, kube-proxy, and scheduler clients and the admin user.

  • Kubernetes Public IP Address Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.

Retrieve the kubernetes-the-hard-way static IP address:

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
  • The kubelet Kubernetes Configuration File

When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes Node Authorizer.

Generate a kubeconfig file for each worker node:

for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
  • The kube-proxy Kubernetes Configuration File

Generate a kubeconfig file for the kube-proxy service:

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • The kube-controller-manager Kubernetes Configuration File

Generate a kubeconfig file for the kube-controller-manager service:

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
  • The kube-scheduler Kubernetes Configuration File

Generate a kubeconfig file for the kube-scheduler service:

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
  • The admin Kubernetes Configuration File

Generate a kubeconfig file for the admin user:

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig
  • Distribute the Kubernetes Configuration Files Copy the appropriate kubelet and kube-proxy kubeconfig files to each worker instance:
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done

Copy the appropriate kube-controller-manager and kube-scheduler kubeconfig files to each controller instance:

for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done

Generating the Data Encryption Config and Key

Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.

In this lab you will generate an encryption key and an encryption config suitable for encrypting Kubernetes Secrets.

  • The Encryption Key

Generate an encryption key:

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
  • The Encryption Config File Create the encryption-config.yaml encryption config file:
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Copy the encryption-config.yaml encryption config file to each controller instance:

for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp encryption-config.yaml ${instance}:~/
done

Bootstrapping the etcd Cluster

Kubernetes components are stateless and store cluster state in etcd. In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.

  • Prerequisites

The commands in this lab must be run on each controller instance: controller-0, controller-1, and controller-2. Login to each controller instance using the gcloud command. Example: gcloud compute ssh controller-0

  • Bootstrapping an etcd Cluster Member

Download and Install the etcd Binaries from the coreos/etcd GitHub project:

wget -q --show-progress --https-only --timestamping \
  "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"

Extract and install the etcd server and the etcdctl command line utility:

  tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
  sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
  • Configure the etcd Server
  sudo mkdir -p /etc/etcd /var/lib/etcd
  sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/

The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:

INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)

Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:

ETCD_NAME=$(hostname -s)

Create the etcd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • Start the etcd Server
  sudo systemctl daemon-reload
  sudo systemctl enable etcd
  sudo systemctl start etcd
  • Verification

List the etcd cluster members:

sudo ETCDCTL_API=3 etcdctl member list \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/etcd/ca.pem \
   --cert=/etc/etcd/kubernetes.pem \
   --key=/etc/etcd/kubernetes-key.pem

output:

3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379

Bootstrapping the Kubernetes Control Plane

In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.

The commands in this lab must be run on each controller instance: controller-0, controller-1, and controller-2. Login to each controller instance using the gcloud command. Example: gcloud compute ssh controller-0

  • Provision the Kubernetes Control Plane

Create the Kubernetes configuration directory:

sudo mkdir -p /etc/kubernetes/config
  • Download and Install the Kubernetes Controller Binaries

Download the official Kubernetes release binaries:

wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
  • Install the Kubernetes binaries:
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
  • Configure the Kubernetes API Server
  sudo mkdir -p /var/lib/kubernetes/

  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/

The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:

INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)

Create the kube-apiserver.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • Configure the Kubernetes Controller Manager

Move the kube-controller-manager kubeconfig into place:

sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/

Create the kube-controller-manager.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • Configure the Kubernetes Scheduler

Move the kube-scheduler kubeconfig into place:

sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/

Create the kube-scheduler.yaml configuration file:

cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF

Create the kube-scheduler.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • Start the Controller Services
  sudo systemctl daemon-reload
  sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
  • Enable HTTP Health Checks A Google Network Load Balancer will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port 80 and proxy the connections to the API server on https://127.0.0.1:6443/healthz.

Install a basic web server to handle HTTP health checks:

sudo apt-get install -y nginx
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
  sudo mv kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-available/kubernetes.default.svc.cluster.local

  sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
sudo systemctl restart nginx
sudo systemctl enable nginx
  • Verification
kubectl get componentstatuses --kubeconfig admin.kubeconfig

output:

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

Test the nginx HTTP health check proxy:

curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz

output:

HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 20 Jan 2019 19:54:16 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
  • RBAC for Kubelet Authorization

In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.

Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

The Kubernetes API Server authenticates to the Kubelet as the kubernetes user using the client certificate as defined by the --kubelet-client-certificate flag.

Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
  • The Kubernetes Frontend Load Balancer

In this section you will provision an external load balancer to front the Kubernetes API Servers. The kubernetes-the-hard-way static IP address will be attached to the resulting load balancer.

Provision a Network Load Balancer. Create the external load balancer network resources:

  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  gcloud compute http-health-checks create kubernetes \
    --description "Kubernetes Health Check" \
    --host "kubernetes.default.svc.cluster.local" \
    --request-path "/healthz"

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
    --network kubernetes-the-hard-way \
    --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
    --allow tcp

  gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check kubernetes

  gcloud compute target-pools add-instances kubernetes-target-pool \
   --instances controller-0,controller-1,controller-2

  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address ${KUBERNETES_PUBLIC_ADDRESS} \
    --ports 6443 \
    --region $(gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool
  • Verification Make a HTTP request for the Kubernetes version info:
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version

output:

{
  "major": "1",
  "minor": "12",
  "gitVersion": "v1.12.0",
  "gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
  "gitTreeState": "clean",
  "buildDate": "2018-09-27T16:55:41Z",
  "goVersion": "go1.10.4",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, gVisor, container networking plugins, containerd, kubelet, and kube-proxy.

  • Provisioning a Kubernetes Worker Node

Install the OS dependencies:

sudo apt-get update
sudo apt-get -y install socat conntrack ipset

Download and Install Worker Binaries

wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet

Create the installation directories:

sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

Install the worker binaries:

  sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
  sudo mv runc.amd64 runc
  chmod +x kubectl kube-proxy kubelet runc runsc
  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
  sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
  sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
  sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
  • Configure CNI Networking

Retrieve the Pod CIDR range for the current compute instance:

POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)

Create the bridge network configuration file:

cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

Create the loopback network configuration file:

cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF

Create the containerd configuration file:

sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
    [plugins.cri.containerd.gvisor]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
EOF

Create the containerd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
  • Configure the Kubelet
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/

Create the kubelet-config.yaml configuration file:

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF

Create the kubelet.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • Configure the Kubernetes Proxy
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

Create the kube-proxy-config.yaml configuration file:

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF

Create the kube-proxy.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • Start the Worker Services
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy
  • Verification

List the registered Kubernetes nodes:

gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"

output

NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   10m   v1.12.0
worker-1   Ready    <none>   11m   v1.12.0
worker-2   Ready    <none>   10m   v1.12.0

Configuring kubectl for Remote Access

In this lab you will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.

Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.

Generate a kubeconfig file suitable for authenticating as the admin user:

  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
  • Verification

Check the health of the remote Kubernetes cluster:

kubectl get componentstatuses

output:

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

List the nodes in the remote Kubernetes cluster:

kubectl get nodes

output:

NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   14m   v1.12.0
worker-1   Ready    <none>   14m   v1.12.0
worker-2   Ready    <none>   14m   v1.12.0

Provisioning Pod Network Routes

Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods cannot communicate with other pods running on different nodes due to missing network routes.

In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.

  • The Routing Table

Print the internal IP address and Pod CIDR range for each worker instance:

for instance in worker-0 worker-1 worker-2; do
  gcloud compute instances describe ${instance} \
    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done

output:

10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
  • Routes

Create network routes for each worker instance:

for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done

List the routes in the kubernetes-the-hard-way VPC network:

gcloud compute routes list --filter "network: kubernetes-the-hard-way"

output

NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-4efe3fc4aab42a71  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
default-route-b8c3b87a29570c17  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   1000
kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000

Deploying the DNS Cluster Add-on

In this lab you will deploy the DNS add-on which provides DNS based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.

  • The DNS Cluster Add-on

Deploy the coredns cluster add-on:

kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml

output

serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created

List the pods created by the kube-dns deployment:

kubectl get pods -l k8s-app=kube-dns -n kube-system

output

NAME                       READY   STATUS    RESTARTS   AGE
coredns-699f8ddd77-cpr8j   1/1     Running   0          71s
coredns-699f8ddd77-zcldn   1/1     Running   0          71s
  • Verification

Create a busybox deployment:

kubectl run busybox --image=busybox:1.28 --command -- sleep 3600

List the pod created by the busybox deployment:

kubectl get pods -l run=busybox

output:

NAME                      READY   STATUS    RESTARTS   AGE
busybox-bd8fb7cbd-9bnbk   1/1     Running   0          76s

Retrieve the full name of the busybox pod:

POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")

Execute a DNS lookup for the kubernetes service inside the busybox pod:

kubectl exec -ti $POD_NAME -- nslookup kubernetes

output

Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local

Smoke Test

In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.

Create a generic secret:

kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"

Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:

gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"

output

00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
00000040  3a 76 31 3a 6b 65 79 31  3a d7 29 23 06 e7 16 c4  |:v1:key1:.)#....|
00000050  22 bc 75 c2 a3 21 f3 33  fc 4a c4 7e a5 70 83 30  |".u..!.3.J.~.p.0|
00000060  48 13 fe 22 9a 73 0e fc  8c f3 06 01 eb 46 24 15  |H..".s.......F$.|
00000070  59 c5 02 37 8e eb 26 d9  2f 54 1c cd 21 a4 1f 49  |Y..7..&./T..!..I|
00000080  1a cc 9a a6 27 e2 6c 0c  ce 96 da 85 36 21 2a 83  |....'.l.....6!*.|
00000090  cb b3 62 1c d8 c5 18 b0  15 95 48 cf 2c 2f 41 d5  |..b.......H.,/A.|
000000a0  d9 33 10 65 93 4f e3 55  99 3a a2 64 47 83 24 00  |.3.e.O.U.:.dG.$.|
000000b0  96 8b 07 6b 94 f5 62 05  f5 10 12 3f ae 11 97 ca  |...k..b....?....|
000000c0  9e f1 e5 54 c3 43 28 fd  36 15 9b 41 c9 19 08 65  |...T.C(.6..A...e|
000000d0  18 27 16 11 44 b6 24 fc  3f 39 2f 9b 36 3d d1 9e  |.'..D.$.?9/.6=..|
000000e0  c8 da a5 e4 2d 8a 28 bf  2b 0a                    |....-.(.+.|

The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.

  • Deployments

In this section you will verify the ability to create and manage Deployments.

Create a deployment for the nginx web server:

kubectl run nginx --image=nginx

List the pod created by the nginx deployment:

kubectl get pods -l run=nginx

output

NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-9d2ch   1/1     Running   0          22s
  • Port Forwarding

In this section you will verify the ability to access applications remotely using port forwarding.

Retrieve the full name of the nginx pod:

POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")

Forward port 8080 on your local machine to port 80 of the nginx pod:

kubectl port-forward $POD_NAME 8080:80

output

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

In a new terminal make an HTTP request using the forwarding address:

curl --head http://127.0.0.1:8080

output

HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Sun, 20 Jan 2019 21:48:12 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Dec 2018 09:56:47 GMT
Connection: keep-alive
ETag: "5c21fedf-264"
Accept-Ranges: bytes
  • Logs

In this section you will verify the ability to retrieve container logs.

Print the nginx pod logs:

kubectl logs $POD_NAME

output

127.0.0.1 - - [20/Jan/2019:21:48:12 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"

Print the nginx version by executing the nginx -v command in the nginx container:

kubectl exec -ti $POD_NAME -- nginx -v

output

nginx version: nginx/1.15.8
  • Services

In this section you will verify the ability to expose applications using a Service.

Expose the nginx deployment using a NodePort service:

kubectl expose deployment nginx --port 80 --type NodePort

Retrieve the node port assigned to the nginx service:

NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')

Create a firewall rule that allows remote access to the nginx node port:

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
  --allow=tcp:${NODE_PORT} \
  --network kubernetes-the-hard-way

Retrieve the external IP address of a worker instance:

EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

Make an HTTP request using the external IP address and the nginx node port:

curl -I http://${EXTERNAL_IP}:${NODE_PORT}

output

HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Sun, 20 Jan 2019 21:56:46 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Dec 2018 09:56:47 GMT
Connection: keep-alive
ETag: "5c21fedf-264"
Accept-Ranges: bytes
  • Untrusted Workloads

This section will verify the ability to run untrusted workloads using gVisor.

Create the untrusted pod:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
  • Verification

In this section you will verify the untrusted pod is running under gVisor (runsc) by inspecting the assigned worker node.

Verify the untrusted pod is running:

kubectl get pods -o wide

output

NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE
busybox-bd8fb7cbd-9bnbk   1/1     Running   0          27m   10.200.0.2   worker-0   <none>
nginx-dbddb74b8-9d2ch     1/1     Running   0          15m   10.200.0.3   worker-0   <none>
untrusted                 1/1     Running   0          67s   10.200.1.3   worker-1   <none>

Get the node name where the untrusted pod is running:

INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')

SSH into the worker node:

gcloud compute ssh ${INSTANCE_NAME}

List the containers running under gVisor:

sudo runsc --root  /run/containerd/runsc/k8s.io list

output

I0120 22:01:35.289695   18966 x:0] ***************************
I0120 22:01:35.289875   18966 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0120 22:01:35.289950   18966 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0120 22:01:35.290018   18966 x:0] PID: 18966
I0120 22:01:35.290088   18966 x:0] UID: 0, GID: 0
I0120 22:01:35.290158   18966 x:0] Configuration:
I0120 22:01:35.290212   18966 x:0]              RootDir: /run/containerd/runsc/k8s.io
I0120 22:01:35.290475   18966 x:0]              Platform: ptrace
I0120 22:01:35.290627   18966 x:0]              FileAccess: exclusive, overlay: false
I0120 22:01:35.290754   18966 x:0]              Network: sandbox, logging: false
I0120 22:01:35.290877   18966 x:0]              Strace: false, max size: 1024, syscalls: []
I0120 22:01:35.291000   18966 x:0] ***************************
ID                                                                 PID         STATUS      BUNDLE                                          CREATED                OWNER
353e797b41e3e5bdd183605258b66a153a61a3f7ff0eb8b0e0e7d8b6e4b3bc5c   18521       running     /run/containerd/io.containerd.runtime.v1.linux/k8s.io/353e797b41e3e5bdd183605258b66a153a61a3f7ff0eb8b0e0e7d8b6e4b3bc5c   0001-01-01T00:00:00Z
5b1e7bcf2a5cb033888650e49c3978cf429b99d97c4be5c7f5ad14e45b3015a9   18441       running     /run/containerd/io.containerd.runtime.v1.linux/k8s.io/5b1e7bcf2a5cb033888650e49c3978cf429b99d97c4be5c7f5ad14e45b3015a9   0001-01-01T00:00:00Z
I0120 22:01:35.294484   18966 x:0] Exiting with status: 0

Get the ID of the untrusted pod:

POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  pods --name untrusted -q)

Get the ID of the webserver container running in the untrusted pod:

CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
  ps -p ${POD_ID} -q)

Use the gVisor runsc command to display the processes running inside the webserver container:

sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}

output

I0120 22:04:59.988268   19220 x:0] ***************************
I0120 22:04:59.988443   19220 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps 353e797b41e3e5bdd183605258b66a153a61a3f7ff0eb8b0e0e7d8b6e4b3bc5c]
I0120 22:04:59.988521   19220 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0120 22:04:59.988604   19220 x:0] PID: 19220
I0120 22:04:59.988673   19220 x:0] UID: 0, GID: 0
I0120 22:04:59.988736   19220 x:0] Configuration:
I0120 22:04:59.988789   19220 x:0]              RootDir: /run/containerd/runsc/k8s.io
I0120 22:04:59.988910   19220 x:0]              Platform: ptrace
I0120 22:04:59.989037   19220 x:0]              FileAccess: exclusive, overlay: false
I0120 22:04:59.989160   19220 x:0]              Network: sandbox, logging: false
I0120 22:04:59.989299   19220 x:0]              Strace: false, max size: 1024, syscalls: []
I0120 22:04:59.989431   19220 x:0] ***************************
UID       PID       PPID      C         STIME     TIME      CMD
0         1         0         0         21:58     10ms      app
I0120 22:04:59.990890   19220 x:0] Exiting with status: 0

Cleaning Up

In this lab you will delete the compute resources created during this tutorial.

  • Compute Instances

Delete the controller and worker compute instances:

gcloud -q compute instances delete \
  controller-0 controller-1 controller-2 \
  worker-0 worker-1 worker-2
  • Networking

Delete the external load balancer network resources:

  gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
    --region $(gcloud config get-value compute/region)

  gcloud -q compute target-pools delete kubernetes-target-pool

  gcloud -q compute http-health-checks delete kubernetes

  gcloud -q compute addresses delete kubernetes-the-hard-way

Delete the kubernetes-the-hard-way firewall rules:

gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external \
  kubernetes-the-hard-way-allow-health-check

Delete the kubernetes-the-hard-way network VPC:

  gcloud -q compute routes delete \
    kubernetes-route-10-200-0-0-24 \
    kubernetes-route-10-200-1-0-24 \
    kubernetes-route-10-200-2-0-24

  gcloud -q compute networks subnets delete kubernetes

  gcloud -q compute networks delete kubernetes-the-hard-way

Homework 20 (logging-1)

Build Status

  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ docker-machine:
docker-machine create --driver google \
  --google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
  --google-machine-type n1-standard-1 \
  --google-open-port 5601/tcp \
  --google-open-port 9292/tcp \
  --google-open-port 9411/tcp \
logging 
  • ΠŸΠ΅Ρ€Π΅ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΠ΅ Π½Π° ΡΠΎΠ·Π΄Π°Π½Π½ΡƒΡŽ docker-machine: eval $(docker-machine env logging)

  • Π£Π·Π½Π°Ρ‚ΡŒ ip-адрСс: docker-machine ip logging

  • Новая вСрсия прилоТСния reddit

  • Π‘Π±ΠΎΡ€ΠΊΠ° ΠΎΠ±Ρ€Π°Π·ΠΎΠ²:

for i in ui post-py comment; do cd src/$i; bash docker_build.sh; cd -; done

or:

/src/ui $ bash docker_build.sh && docker push $USER_NAME/ui
/src/post-py $ bash docker_build.sh && docker push $USER_NAME/post
/src/comment $ bash docker_build.sh && docker push $USER_NAME/comment
  • ΠžΡ‚Π΄Π΅Π»ΡŒΠ½Ρ‹ΠΉ compose-Ρ„Π°ΠΉΠ» для систСмы логирования:
docker/docker-compose-logging.yml

version: '3.5'
services:
  fluentd:
    image: ${USERNAME}/fluentd
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: elasticsearch
    expose:
      - 9200
    ports:
      - "9200:9200"
  kibana:
    image: kibana
    ports:
      - "5601:5601"
  • Fluentd - инструмСнт для ΠΎΡ‚ΠΏΡ€Π°Π²ΠΊΠΈ, Π°Π³Ρ€Π΅Π³Π°Ρ†ΠΈΠΈ ΠΈ прСобразования Π»ΠΎΠ³-сообщСний:
logging/fluentd/Dockerfile

FROM fluent/fluentd:v0.12
RUN gem install fluent-plugin-elasticsearch --no-rdoc --no-ri --version 1.9.5
RUN gem install fluent-plugin-grok-parser --no-rdoc --no-ri --version 1.0.0
ADD fluent.conf /fluentd/etc
  • Π€Π°ΠΉΠ» ΠΊΠΎΠ½Ρ„ΠΈΠ³ΡƒΡ€Π°Ρ†ΠΈΠΈ fluentd:
logging/fluentd/fluent.conf

<source>
  @type forward # the in_forward plugin for receiving logs
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy # the copy plugin
  <store>
    @type elasticsearch # forwards all incoming logs to elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout # and also to stdout
  </store>
</match>
  • Π‘Π±ΠΎΡ€ΠΊΠ° ΠΎΠ±Ρ€Π°Π·Π° fluentd: docker build -t $USER_NAME/fluentd .

  • ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€ Π»ΠΎΠ³ΠΎΠ² post сСрвиса: docker-compose logs -f post

  • Π”Ρ€Π°ΠΉΠ²Π΅Ρ€ для логирования для сСрвиса post Π²Π½ΡƒΡ‚Ρ€ΠΈ compose-Ρ„Π°ΠΉΠ»Π°:

docker/docker-compose.yml

version: '3.5'
services:
  post:
    image: ${USER_NAME}/post
    environment:
      - POST_DATABASE_HOST=post_db
      - POST_DATABASE=posts
    depends_on:
      - post_db
    ports:
      - "5000:5000"
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: service.post
  • Запуск инфраструктуры Ρ†Π΅Π½Ρ‚Ρ€Π°Π»ΠΈΠ·ΠΎΠ²Π°Π½Π½ΠΎΠΉ систСмы логирования ΠΈ пСрСзапуск сСрвисов прилоТСния:
docker-compose -f docker-compose-logging.yml up -d
docker-compose down
docker-compose up -d 
  • Kibana Π±ΡƒΠ΄Π΅Ρ‚ доступна ΠΏΠΎ адрСсу http://logging-ip:5061. НСобходимо ΡΠΎΠ·Π΄Π°Ρ‚ΡŒ индСкс ΠΏΠ°Ρ‚Ρ‚Π΅Ρ€Π½ fluentd-*
  • ПолС log Π΄ΠΎΠΊΡƒΠΌΠ΅Π½Ρ‚Π° elasticsearch содСрТит Π² сСбС JSON-ΠΎΠ±ΡŠΠ΅ΠΊΡ‚. НСобходимо Π²Ρ‹Π΄Π΅Π»ΠΈΡ‚ΡŒ эту ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΡŽ Π² поля, Ρ‡Ρ‚ΠΎΠ±Ρ‹ ΠΈΠΌΠ΅Ρ‚ΡŒ Π²ΠΎΠ·ΠΌΠΎΠΆΠ½ΠΎΡΡ‚ΡŒ ΠΏΡ€ΠΎΠΈΠ·Π²ΠΎΠ΄ΠΈΡ‚ΡŒ ΠΏΠΎ Π½ΠΈΠΌ поиск. Π­Ρ‚ΠΎ достигаСтся Π·Π° счСт использования Ρ„ΠΈΠ»ΡŒΡ‚Ρ€ΠΎΠ² для выдСлСния Π½ΡƒΠΆΠ½ΠΎΠΉ ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΠΈ
  • Π”ΠΎΠ±Π°Π²Π»Π΅Π½ΠΈΠ΅ Ρ„ΠΈΠ»ΡŒΡ‚Ρ€Π° для парсинга json-Π»ΠΎΠ³ΠΎΠ², приходящих ΠΎΡ‚ post-сСрвиса, Π² ΠΊΠΎΠ½Ρ„ΠΈΠ³ fluentd:
logging/fluentd/fluent.conf

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<filter service.post>
  @type parser
  format json
  key_name log
</filter>
<match *.**>
  @type copy
...
  • ΠŸΠ΅Ρ€Π΅ΡΠ±ΠΎΡ€ΠΊΠ° ΠΎΠ±Ρ€Π°Π·Π° ΠΈ пСрСзапуск сСрвиса fluentd:
logging/fluentd $ docker build -t $USER_NAME/fluentd .
docker/ $ docker-compose -f docker-compose-logging.yml up -d fluentd 
  • По Π°Π½Π°Π»ΠΎΠ³ΠΈΠΈ с post-сСрвисом Π½Π΅ΠΎΠ±Ρ…ΠΎΠ΄ΠΈΠΌΠΎ для ui-сСрвиса ΠΎΠΏΡ€Π΅Π΄Π΅Π»ΠΈΡ‚ΡŒ Π΄Ρ€Π°ΠΉΠ²Π΅Ρ€ для логирования fluentd Π² compose-Ρ„Π°ΠΉΠ»Π΅:
docker/docker-compose.yml

...
logging:
  driver: "fluentd"
  options:
    fluentd-address: localhost:24224
    tag: service.ui
...
  • ΠŸΠ΅Ρ€Π΅Π·Π°ΠΏΡƒΡΠΊ ui сСрвиса:
docker-compose stop ui
docker-compose rm ui
docker-compose up -d 
  • Когда ΠΏΡ€ΠΈΠ»ΠΎΠΆΠ΅Π½ΠΈΠ΅ ΠΈΠ»ΠΈ сСрвис Π½Π΅ ΠΏΠΈΡˆΠ΅Ρ‚ структурированныС Π»ΠΎΠ³ΠΈ, ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΡŽΡ‚ΡΡ рСгулярныС выраТСния для ΠΈΡ… парсинга. Π’Ρ‹Π΄Π΅Π»Π΅Π½ΠΈΠ΅ ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΠΈ ΠΈΠ· Π»ΠΎΠ³Π° UI-сСрвиса Π² поля:
logging/fluentd/fluent.conf

<filter service.ui>
  @type parser
  format /\[(?<time>[^\]]*)\]  (?<level>\S+) (?<user>\S+)[\W]*service=(?<service>\S+)[\W]*event=(?<event>\S+)[\W]*(?:path=(?<path>\S+)[\W]*)?request_id=(?<request_id>\S+)[\W]*(?:remote_addr=(?<remote_addr>\S+)[\W]*)?(?:method= (?<method>\S+)[\W]*)?(?:response_status=(?<response_status>\S+)[\W]*)?(?:message='(?<message>[^\']*)[\W]*)?/
  key_name log
</filter>
  • Для облСгчСния Π·Π°Π΄Π°Ρ‡ΠΈ парсинга вмСсто стандартных рСгулярок ΠΌΠΎΠΆΠ½ΠΎ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ grok-ΡˆΠ°Π±Π»ΠΎΠ½Ρ‹. Grok - это ΠΈΠΌΠ΅Π½ΠΎΠ²Π°Π½Π½Ρ‹Π΅ ΡˆΠ°Π±Π»ΠΎΠ½Ρ‹ рСгулярных Π²Ρ‹Ρ€Π°ΠΆΠ΅Π½ΠΈΠΉ. МоТно ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ Π³ΠΎΡ‚ΠΎΠ²Ρ‹ΠΉ regexp, сославшись Π½Π° Π½Π΅Π³ΠΎ ΠΊΠ°ΠΊ Π½Π° Ρ„ΡƒΠ½ΠΊΡ†ΠΈΡŽ:
docker/fluentd/fluent.conf

...
<filter service.ui>
  @type parser
  format grok
  grok_pattern %{RUBY_LOGGER}
  key_name log
</filter> 
...
  • Π§Π°ΡΡ‚ΡŒ Π»ΠΎΠ³ΠΎΠ² Π½ΡƒΠΆΠ½ΠΎ Π΅Ρ‰Π΅ Ρ€Π°ΡΠΏΠ°Ρ€ΡΠΈΡ‚ΡŒ. Для этого ΠΌΠΎΠΆΠ½ΠΎ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ нСсколько Grok-ΠΎΠ² ΠΏΠΎ ΠΎΡ‡Π΅Ρ€Π΅Π΄ΠΈ:
docker/fluentd/fluent.conf

<filter service.ui>
  @type parser
  format grok
  grok_pattern service=%{WORD:service} \| event=%{WORD:event} \| request_id=%{GREEDYDATA:request_id} \| message='%{GREEDYDATA:message}'
  key_name message
  reserve_data true
</filter>

<filter service.ui>
  @type parser
  format grok
  grok_pattern service=%{WORD:service} \| event=%{WORD:event} \| path=%{GREEDYDATA:path} \| request_id=%{GREEDYDATA:request_id} \| remote_addr=%{IP:remote_addr} \| method= %{WORD:method} \| response_status=%{WORD:response_status}
  key_name message
  reserve_data true
</filter>

Homework 19 (monitoring-2)

Build Status

  • compose-monitoring.yml - для ΠΌΠΎΠ½ΠΈΡ‚ΠΎΡ€ΠΈΠ½Π³Π° ΠΏΡ€ΠΈΠ»ΠΎΠΆΠ΅Π½ΠΈΠΉ Для запуска ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ: docker-compose -f docker-compose-monitoring.yml up -d

  • cAdvisor - для наблюдСния Π·Π° состояниСм Docker-ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ² (использованиС CPU, памяти, объСм сСтСвого Ρ‚Ρ€Π°Ρ„ΠΈΠΊΠ°) БСрвис ΠΏΠΎΠΌΠ΅Ρ‰Π΅Π½ Π² ΠΎΠ΄Π½Ρƒ ΡΠ΅Ρ‚ΡŒ с Prometheus, для сбора ΠΌΠ΅Ρ‚Ρ€ΠΈΠΊ с cAdvisor'Π°

  • Π’ Prometheus Π΄ΠΎΠ±Π°Π²Π»Π΅Π½Π° информация ΠΎ Π½ΠΎΠ²ΠΎΠΌ сСврисС:

- job_name: 'cadvisor'
  static_configs:
    - targets:
      - 'cadvisor:8080' 

After making these changes, the image must be rebuilt:

cd monitoring/prometheus
docker build -t $USER_NAME/prometheus .
  • Запуск сСрвисов:
docker-compose up -d
docker-compose -f docker-compose-monitoring.yml up -d 
  • Π˜Π½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΡ ΠΈΠ· cAdvisor Π±ΡƒΠ΄Π΅Ρ‚ доступна ΠΏΠΎ адрСсу http://docker-machine-host-ip:8080

  • Π”Π°Π½Π½Ρ‹Π΅ Ρ‚Π°ΠΊΠΆΠ΅ ΡΠΎΠ±ΠΈΡ€Π°ΡŽΡ‚ΡΡ Π² Prometheus

  • Для Π²ΠΈΠ·ΡƒΠ°Π»ΠΈΠ·Π°Ρ†ΠΈΠΈ Π΄Π°Π½Π½Ρ‹Ρ… слСдуСт ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ Graphana:

services:
...
  grafana:
    image: grafana/grafana:5.0.0
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=secret
    depends_on:
      - prometheus
    ports:
      - 3000:3000
volumes:
  grafana_data:
  • Запуск: docker-compose -f docker-compose-monitoring.yml up -d grafana

  • Grapahana доступна ΠΏΠΎ адрСсу: http://docker-mahine-host-ip:3000

  • Настройка источника Π΄Π°Π½Π½Ρ‹Ρ… Π² Graphana:

Type: Prometheus
URL: http://prometheus:9090
Access: proxy
  • Для сбора ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΠΈ ΠΎ post сСрвисС Π½Π΅ΠΎΠ±Ρ…ΠΎΠ΄ΠΈΠΌΠΎ Π΄ΠΎΠ±Π°Π²ΠΈΡ‚ΡŒ ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΡŽ Π² Ρ„Π°ΠΉΠ» prometheus.yml, Ρ‡Ρ‚ΠΎΠ±Ρ‹ Prometheus Π½Π°Ρ‡Π°Π» ΡΠΎΠ±ΠΈΡ€Π°Ρ‚ΡŒ ΠΌΠ΅Ρ‚Ρ€ΠΈΠΊΠΈ ΠΈ с Π½Π΅Π³ΠΎ:
scrape_configs:
...
  - job_name: 'post'
    static_configs:
      - targets:
        - 'post:5000'
  • ΠŸΠ΅Ρ€Π΅ΡΠ±ΠΎΡ€ΠΊΠ° ΠΎΠ±Ρ€Π°Π·Π°:
export USER_NAME=username
docker build -t $USER_NAME/prometheus .
  • ΠŸΠ΅Ρ€Π΅ΡΠΎΠ·Π΄Π°Π½ΠΈΠ΅ Docker инфраструктуры ΠΌΠΎΠ½ΠΈΡ‚ΠΎΡ€ΠΈΠ½Π³Π°:
docker-compose -f docker-compose-monitoring.yml down
docker-compose -f docker-compose-monitoring.yml up -d 
  • Π—Π°Π³Ρ€ΡƒΠΆΠ΅Π½Π½Ρ‹Π΅ Ρ„Π°ΠΉΠ» Π΄Π°ΡˆΠ±ΠΎΠ°Ρ€Π΄ΠΎΠ² располоТСны Π² Π΄ΠΈΡ€Π΅ΠΊΡ‚ΠΎΡ€ΠΈΠΈ monitoring/grafana/dashboards/

  • Alertmanager - Π΄ΠΎΠΏΠΎΠ»Π½ΠΈΡ‚Π΅Π»ΡŒΠ½Ρ‹ΠΉ ΠΊΠΎΠΌΠΏΠΎΠ½Π΅Π½Ρ‚ для систСмы ΠΌΠΎΠ½ΠΈΡ‚ΠΎΡ€ΠΈΠ½Π³Π° Prometheus

  • Π‘Π±ΠΎΡ€ΠΊΠ° ΠΎΠ±Ρ€Π°Π·Π° для alertmanager'Π° - Ρ„Π°ΠΉΠ» monitoring/alertmanager/Dockerfile:

FROM prom/alertmanager:v0.14.0
ADD config.yml /etc/alertmanager/ 
  • Π‘ΠΎΠ΄Π΅Ρ€ΠΆΠΈΠΌΠΎΠ΅ Ρ„Π°ΠΉΠ»Π° config.yml:
global:
  slack_api_url: 'https://hooks.slack.com/services/$token/$token/$token'
route:
  receiver: 'slack-notifications'
receivers:
- name: 'slack-notifications'
  slack_configs:
  - channel: '#userchannel'
  • БСрвис Π°Π»Π΅Ρ€Ρ‚ΠΈΠ½Π³Π° Π² docker-compose-monitoring.yml:
services:
...
  alertmanager:
    image: ${USER_NAME}/alertmanager
    command:
      - '--config.file=/etc/alertmanager/config.yml'
    ports:
      - 9093:9093
  • Π€Π°ΠΉΠ» с условиями, ΠΏΡ€ΠΈ ΠΊΠΎΡ‚ΠΎΡ€Ρ‹Ρ… Π΄ΠΎΠ»ΠΆΠ΅Π½ ΡΡ€Π°Π±Π°Ρ‚Ρ‹Π²Π°Ρ‚ΡŒ Π°Π»Π΅Ρ€Ρ‚ ΠΈ ΠΏΠΎΡΡ‹Π»Π°Ρ‚ΡŒΡΡ Alertmanager'Ρƒ: monitoring/prometheus/alerts.yml
  • ΠŸΡ€ΠΎΡΡ‚ΠΎΠΉ Π°Π»Π΅Ρ€Ρ‚, ΠΊΠΎΡ‚ΠΎΡ€Ρ‹ΠΉ Π±ΡƒΠ΄Π΅Ρ‚ ΡΡ€Π°Π±Π°Ρ‚Ρ‹Π²Π°Ρ‚ΡŒ Π² ситуации, ΠΊΠΎΠ³Π΄Π° ΠΎΠ΄Π½Π° ΠΈΠ· Π½Π°Π±Π»ΡŽΠ΄Π°Π΅ΠΌΡ‹Ρ… систСм (endpoint) нСдоступна для сбора ΠΌΠ΅Ρ‚Ρ€ΠΈΠΊ (Π² этом случаС ΠΌΠ΅Ρ‚Ρ€ΠΈΠΊΠ° up с Π»Π΅ΠΉΠ±Π»ΠΎΠΌ instance Ρ€Π°Π²Π½Ρ‹ΠΌ ΠΈΠΌΠ΅Π½ΠΈ Π΄Π°Π½Π½ΠΎΠ³ΠΎ endpoint'Π° Π±ΡƒΠ΄Π΅Ρ‚ Ρ€Π°Π²Π½Π° Π½ΡƒΠ»ΡŽ):
groups:
  - name: alert.rules
    rules:
    - alert: InstanceDown
      expr: up == 0
      for: 1m
      labels:
        severity: page
      annotations:
        description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute'
        summary: 'Instance {{ $labels.instance }} down'
  • Π€Π°ΠΉΠ» alerts.yml Ρ‚Π°ΠΊΠΆΠ΅ Π΄ΠΎΠ»ΠΆΠ΅Π½ Π±Ρ‹Ρ‚ΡŒ скопирован Π² сСрвис prometheus:
ADD alerts.yml /etc/prometheus/
  • ΠŸΠ΅Ρ€Π΅ΡΠ±ΠΎΡ€ΠΊΠ° ΠΎΠ±Ρ€Π°Π·Π° prometheus:
docker build -t $USER_NAME/prometheus .
  • ΠŸΠ΅Ρ€Π΅ΡΠΎΠ·Π΄Π°Π½ΠΈΠ΅ Docker инфраструктуры ΠΌΠΎΠ½ΠΈΡ‚ΠΎΡ€ΠΈΠ½Π³Π°:
docker-compose down -d
docker-compose -f docker-compose-monitoring.yml down
docker-compose up -d
docker-compose -f docker-compose-monitoring.yml up -d 
  • ΠŸΡƒΡˆ всСх ΠΎΠ±Ρ€Π°Π·ΠΎΠ² Π² dockerhub:
docker login
docker push $USER_NAME/ui
docker push $USER_NAME/comment
docker push $USER_NAME/post
docker push $USER_NAME/prometheus
docker push $USER_NAME/alertmanager 

Link to the built images on DockerHub

Homework 18 (monitoring-1)

Build Status

  • ΠŸΡ€Π°Π²ΠΈΠ»ΠΎ Ρ„Π°Π΅Ρ€Π²ΠΎΠ»Π° для Prometheus ΠΈ Puma:
gcloud compute firewall-rules create prometheus-default --allow tcp:9090
gcloud compute firewall-rules create puma-default --allow tcp:9292 
  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ docker-host:
export GOOGLE_PROJECT=_project_id_

docker-machine create --driver google \
    --google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
    --google-machine-type n1-standard-1 \
    --google-zone europe-west1-b \
    docker-host
  • ΠŸΠ΅Ρ€Π΅ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΠ΅ Π½Π° docker-host:
eval $(docker-machine env docker-host)
  • Запуск Prometheus ΠΈΠ· Π³ΠΎΡ‚ΠΎΠ²ΠΎΠ³ΠΎ ΠΎΠ±Ρ€Π°Π·Π° с DockerHub:
docker run --rm -p 9090:9090 -d --name prometheus  prom/prometheus

Prometheus will be running at http://docker-host-ip:9090/. The docker-host-ip can be found with: docker-machine ip docker-host

  • ΠžΡΡ‚Π°Π½ΠΎΠ²ΠΊΠ° ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π°: docker stop prometheus

  • Π€Π°ΠΉΠ» monitoring/prometheus/Dockerfile:

FROM prom/prometheus:v2.1.0
ADD prometheus.yml /etc/prometheus/

File monitoring/prometheus/prometheus.yml:

---
global:
  scrape_interval: '5s' # how often metrics are scraped
scrape_configs:
  - job_name: 'prometheus'  # scrape jobs
    static_configs:
      - targets:
        - 'localhost:9090' # addresses to scrape metrics from
  - job_name: 'ui'
    static_configs:
      - targets:
        - 'ui:9292'
  - job_name: 'comment'
    static_configs:
      - targets:
        - 'comment:9292'
  • Π‘Π±ΠΎΡ€ΠΊΠ° docker-ΠΎΠ±Ρ€Π°Π·Π°:
export USER_NAME=username
docker build -t $USER_NAME/prometheus .
  • Π‘Π±ΠΎΡ€ΠΊΠ° ΠΎΠ±Ρ€Π°Π·ΠΎΠ² ΠΏΡ€ΠΈ ΠΏΠΎΠΌΠΎΡ‰ΠΈ скриптов docker_build.sh Π² Π΄ΠΈΡ€Π΅ΠΊΡ‚ΠΎΡ€ΠΈΠΈ ΠΊΠ°ΠΆΠ΄ΠΎΠ³ΠΎ сСрвиса:
cd src/ui & ./docker_build.sh
cd src/post-py & ./docker_build.sh
cd src/comment & ./docker_build.sh
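
The contents of docker_build.sh are not shown in this document; a minimal sketch of what such a script might contain (the tag scheme and the USER_NAME variable are assumptions):

#!/bin/bash
# Hypothetical docker_build.sh: build the image for the service
# located in the current directory and tag it with the directory name.
SERVICE=$(basename "$PWD")        # ui, post-py or comment
IMAGE=$USER_NAME/${SERVICE%-py}   # assumption: post-py is published simply as "post"
docker build -t "$IMAGE" .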
  • Π”ΠΎΠ±Π°Π²Π»Π΅Π½ΠΈΠ΅ сСрвиса Prometheus Π² docker/Dockerfile:
  prometheus:
    image: ${USERNAME}/prometheus
    ports:
      - '9090:9090'
    volumes:
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention=1d'
    networks:
      - back_net
      - front_net
  • Запуск микросСрвисов:
docker-compose up -d 
services:
...
  node-exporter:
    image: prom/node-exporter:v0.15.2
    user: root
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points="^/(sys|proc|dev|host|etc)($$|/)"'

A job for Prometheus (prometheus.yml):

  - job_name: 'node'
    static_configs:
      - targets:
        - 'node-exporter:9100'
  • ΠŸΠ΅Ρ€Π΅ΡΠΎΠ·Π΄Π°Π½ΠΈΠ΅ ΠΎΠ±Ρ€Π°Π·ΠΎΠ²:
cd /monitoring/prometheus && docker build -t $USER_NAME/prometheus .

ΠŸΠ΅Ρ€Π΅ΡΠΎΠ·Π΄Π°Π½ΠΈΠ΅ сСрвисов:

docker-compose down
docker-compose up -d 
  • Push ΠΎΠ±Ρ€Π°Π·ΠΎΠ² Π½Π° DockerHub:
docker login
docker push $USER_NAME/ui:1.0
docker push $USER_NAME/comment:1.0
docker push $USER_NAME/post:1.0
docker push $USER_NAME/prometheus

Link to the built images on DockerHub

Homework 17 (gitlab-ci-2)

Build Status

  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ Π½ΠΎΠ²ΠΎΠ³ΠΎ ΠΏΡ€ΠΎΠ΅ΠΊΡ‚Π° example2

  • Π”ΠΎΠ±Π°Π²Π»Π΅Π½ΠΈΠ΅ ΠΏΡ€ΠΎΠ΅ΠΊΡ‚Π° Π² username_microservices

git checkout -b gitlab-ci-2
git remote add gitlab2 http://vm-ip/homework/example2.git
git push gitlab2 gitlab-ci-2
  • Dev-ΠΎΠΊΡ€ΡƒΠΆΠ΅Π½ΠΈΠ΅: ИзмСнСниС ΠΏΠ°ΠΉΠΏΠ»Π°ΠΉΠ½Π° Ρ‚Π°ΠΊΠΈΠΌ ΠΎΠ±Ρ€Π°Π·ΠΎΠΌ, Ρ‡Ρ‚ΠΎΠ±Ρ‹ job deploy стал ΠΎΠΏΡ€Π΅Π΄Π΅Π»Π΅Π½ΠΈΠ΅ΠΌ окруТСния dev, Π½Π° ΠΊΠΎΡ‚ΠΎΡ€ΠΎΠ΅ условно Π±ΡƒΠ΄Π΅Ρ‚ Π²Ρ‹ΠΊΠ°Ρ‚Ρ‹Π²Π°Ρ‚ΡŒΡΡ ΠΊΠ°ΠΆΠ΄ΠΎΠ΅ ΠΈΠ·ΠΌΠ΅Π½Π΅Π½ΠΈΠ΅ Π² ΠΊΠΎΠ΄Π΅ ΠΏΡ€ΠΎΠ΅ΠΊΡ‚Π°:
  1. ΠŸΠ΅Ρ€Π΅ΠΈΠΌΠ΅Π½ΡƒΠ΅ΠΌ deploy stage Π² review
  2. deploy_job Π·Π°ΠΌΠ΅Π½ΠΈΠΌ Π½Π° deploy_dev_job
  3. Π”ΠΎΠ±Π°Π²ΠΈΠΌ environment
name: dev
url: http://dev.example.com

The dev environment will appear in the Operations - Environments section

  • Π”Π²Π° Π½ΠΎΠ²Ρ‹Ρ… этапа: stage ΠΈ production. Stage Π±ΡƒΠ΄Π΅Ρ‚ ΡΠΎΠ΄Π΅Ρ€ΠΆΠ°Ρ‚ΡŒ job, ΠΈΠΌΠΈΡ‚ΠΈΡ€ΡƒΡŽΡ‰ΠΈΠΉ Π²Ρ‹ΠΊΠ°Ρ‚ΠΊΡƒ Π½Π° staging ΠΎΠΊΡ€ΡƒΠΆΠ΅Π½ΠΈΠ΅, production - Π½Π° production ΠΎΠΊΡ€ΡƒΠΆΠ΅Π½ΠΈΠ΅. Job Π±ΡƒΠ΄ΡƒΡ‚ Π·Π°ΠΏΡƒΡΠΊΠ°Ρ‚ΡŒΡΡ с ΠΊΠ½ΠΎΠΏΠΊΠΈ

  • Π”ΠΈΡ€Π΅ΠΊΡ‚ΠΈΠ²Π° only описываСт список условий, ΠΊΠΎΡ‚ΠΎΡ€Ρ‹Π΅ Π΄ΠΎΠ»ΠΆΠ½Ρ‹ Π±Ρ‹Ρ‚ΡŒ истинны, Ρ‡Ρ‚ΠΎΠ±Ρ‹ job ΠΌΠΎΠ³ Π·Π°ΠΏΡƒΡΡ‚ΠΈΡ‚ΡŒΡΡ. РСгулярноС Π²Ρ‹Ρ€Π°ΠΆΠ΅Π½ΠΈΠ΅ /^\d+\.\d+\.\d+/ ΠΎΠ·Π½Π°Ρ‡Π°Π΅Ρ‚, Ρ‡Ρ‚ΠΎ Π΄ΠΎΠ»ΠΆΠ΅Π½ ΡΡ‚ΠΎΡΡ‚ΡŒ semver тэг Π² git, Π½Π°ΠΏΡ€ΠΈΠΌΠ΅Ρ€, 2.4.10

  • ΠŸΠΎΠΌΠ΅Ρ‚ΠΊΠ° Ρ‚Π΅ΠΊΡƒΡ‰Π΅Π³ΠΎ ΠΊΠΎΠΌΠΌΠΈΡ‚Π° тэгом:

git tag 2.4.10
  • ΠŸΡƒΡˆ с тэгами:
git push gitlab2 gitlab-ci-2 --tags
  • ДинамичСскиС окруТСния позволяСт Π²Π°ΠΌ ΠΈΠΌΠ΅Ρ‚ΡŒ Π²Ρ‹Π΄Π΅Π»Π΅Π½Π½Ρ‹ΠΉ стСнд для ΠΊΠ°ΠΆΠ΄ΠΎΠΉ feature-Π²Π΅Ρ‚ΠΊΠΈ Π² git. ΠžΠΏΡ€Π΅Π΄Π΅Π»ΡΡŽΡ‚ΡΡ динамичСскиС окруТСния с ΠΏΠΎΠΌΠΎΡ‰ΡŒΡŽ ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Ρ…, доступных Π² .gitlab-ci.yml. Job опрСдСляСт динамичСскоС ΠΎΠΊΡ€ΡƒΠΆΠ΅Π½ΠΈΠ΅ для ΠΊΠ°ΠΆΠ΄ΠΎΠΉ Π²Π΅Ρ‚ΠΊΠΈ Π² Ρ€Π΅ΠΏΠΎΠ·ΠΈΡ‚ΠΎΡ€ΠΈΠΈ, ΠΊΡ€ΠΎΠΌΠ΅ Π²Π΅Ρ‚ΠΊΠΈ master
branch review:
  stage: review
  script: echo "Deploy to $CI_ENVIRONMENT_SLUG"
  environment:
    name: branch/$CI_COMMIT_REF_NAME
    url: http://$CI_ENVIRONMENT_SLUG.example.com
  only:
    - branches
  except:
    - master

Homework 16 (gitlab-ci-1)

Build Status

  • Установка Docker:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-compose
  • ΠŸΠΎΠ΄Π³ΠΎΡ‚ΠΎΠ²ΠΊΠ° окруТСния:
mkdir -p /srv/gitlab/config /srv/gitlab/data /srv/gitlab/logs
cd /srv/gitlab/
touch docker-compose.yml

docker-compose.yml:

web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://<VM-IP>'
  ports:
    - '80:80'
    - '443:443'
    - '2222:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
  • Запуск Gitlab CI: docker-compose up -d

  • GUI GitLab: ΠžΡ‚ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΠ΅ рСгистрации, созданиС Π³Ρ€ΡƒΠΏΠΏΡ‹ ΠΏΡ€ΠΎΠ΅ΠΊΡ‚ΠΎΠ² homework, созданиС ΠΏΡ€ΠΎΠ΅ΠΊΡ‚Π° example

  • Π”ΠΎΠ±Π°Π²Π»Π΅Π½ΠΈΠ΅ remote Π² ΠΏΡ€ΠΎΠ΅ΠΊΡ‚ microservices: git remote add gitlab http://<ip>/homework/example.git

  • Push Π² Ρ€Π΅ΠΏΠΎΠ·ΠΈΡ‚ΠΎΡ€ΠΈΠΉ: http://35.204.52.154/homework/example

  • ΠžΠΏΡ€Π΅Π΄Π΅Π»Π΅Π½ΠΈΠ΅ CI/CD Pipeline ΠΏΡ€ΠΎΠ΅ΠΊΡ‚Π° производится Π² Ρ„Π°ΠΉΠ»Π΅ .gitlab-ci.yml:

stages:
  - build
  - test
  - deploy

build_job:
  stage: build
  script:
    - echo 'Building'

test_unit_job:
  stage: test
  script:
    - echo 'Testing 1'

test_integration_job:
  stage: test
  script:
    - echo 'Testing 2'

deploy_job:
  stage: deploy
  script:
    - echo 'Deploy'
  • Установка GitLab Runner:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
  • Запуск runner'Π°: docker exec -it gitlab-runner gitlab-runner register

  • Π”ΠΎΠ±Π°Π²Π»Π΅Π½ΠΈΠ΅ исходного ΠΊΠΎΠ΄Π° Π² Ρ€Π΅ΠΏΠΎΠ·ΠΈΡ‚ΠΎΡ€ΠΈΠΉ:

git clone https://github.com/express42/reddit.git && rm -rf ./reddit/.git
git add reddit/
git commit -m 'Add reddit app'
git push gitlab gitlab-ci-1
  • ИзмСнСниС описания ΠΏΠ°ΠΉΠΏΠ»Π°ΠΉΠ½Π° Π² .gitlab-ci.yml:
image: ruby:2.4.2
stages:
...
variables:
  DATABASE_URL: 'mongodb://mongo/user_posts'
before_script:
  - cd reddit
  - bundle install
...
test_unit_job:
  stage: test
  services:
    - mongo:latest
  script:
    - ruby simpletest.rb
...
  • Π’ ΠΏΠ°ΠΉΠΏΠ»Π°ΠΉΠ½Π΅ Π²Ρ‹ΡˆΠ΅ Π΄ΠΎΠ±Π°Π²Π»Π΅Π½ Π²Ρ‹Π·ΠΎΠ² reddit/simpletest.rb:
require_relative './app'
require 'test/unit'
require 'rack/test'

set :environment, :test

class MyAppTest < Test::Unit::TestCase
  include Rack::Test::Methods

  def app
    Sinatra::Application
  end

  def test_get_request
    get '/'
    assert last_response.ok?
  end
end
  • Π”ΠΎΠ±Π°Π²Π»Π΅Π½ΠΈΠ΅ Π±ΠΈΠ±Π»ΠΈΠΎΡ‚Π΅ΠΊΠΈ для тСстирования Π² reddit/Gemfile: gem 'rack-test'

Homework 15 (docker-4)

Build Status

  • ΠŸΠΎΠ΄ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΠ΅ ΠΊ docker-host: eval $(docker-machine env docker-host)

  • Запуск ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π° joffotron/docker-net-tools с Π½Π°Π±ΠΎΡ€ΠΎΠΌ сСтСвых ΡƒΡ‚ΠΈΠ»ΠΈΡ‚: docker run -ti --rm --network none joffotron/docker-net-tools -c ifconfig

The none driver is used; the container output:

lo Link encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   UP LOOPBACK RUNNING  MTU:65536  Metric:1
   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
  • Запуск ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π° Π² сСтСвом пространствС docker-хоста: docker run -ti --rm --network host joffotron/docker-net-tools -c ifconfig

Запуск ipconfig Π½Π° docker-host'Π΅ ΠΏΡ€ΠΈΠ²Π΅Π΄Π΅Ρ‚ ΠΊ Π°Π½Π°Π»ΠΎΠ³ΠΈΡ‡Π½ΠΎΠΌΡƒ Π²Ρ‹Π²ΠΎΠ΄Ρƒ: docker-machine ssh docker-host ifconfig

  • Запуст nginx Π² Ρ„ΠΎΠ½Π΅ Π² сСтСвом пространствС docker-host: docker run --network host -d nginx

Running the command a second time produces an error:

2018/11/30 19:50:53 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)

This happens because port 80 is already in use.

  • Для просмотра ΡΡƒΡ‰Π΅ΡΡ‚Π²ΡƒΡŽΡ‰ΠΈΡ… net-namespaces Π½Π΅ΠΎΠ±Ρ…ΠΎΠ΄ΠΈΠΌΠΎ Π²Ρ‹ΠΏΠΎΠ»Π½ΠΈΡ‚ΡŒ Π½Π° docker-host: sudo ln -s /var/run/docker/netns /var/run/netns ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€: sudo ip netns
  • ΠŸΡ€ΠΈ запускС ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π° с ΡΠ΅Ρ‚ΡŒΡŽ host net-namespace ΠΎΠ΄ΠΈΠ½ - default.
  • ΠŸΡ€ΠΈ запускС ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π° с ΡΠ΅Ρ‚ΡŒΡŽ none Π² спсикС добавится id net-namespace. Π’Ρ‹Π²ΠΎΠ΄ списка net-namespac'ΠΎΠ²:
user@docker-host:~$ sudo ip net
88f8a9be77ca
default

A command can be executed inside the selected net namespace:

user@docker-host:~$ sudo ip netns exec 88f8a9be77ca ifconfig
lo  Link encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ bridge-сСти: docker network create reddit --driver bridge

  • Запуск ΠΏΡ€ΠΎΠ΅ΠΊΡ‚Π° reddit с использованиСм bridge-сСти:

docker run -d --network=reddit mongo:latest
docker run -d --network=reddit ozyab/post:1.0
docker run -d --network=reddit ozyab/comment:1.0
docker run -d --network=reddit -p 9292:9292 ozyab/ui:1.0

In this configuration the puma web service cannot connect to the mongodb database.

БСрвисы ΡΡΡ‹Π»Π°ΡŽΡ‚ΡΡ Π΄Ρ€ΡƒΠ³ Π½Π° Π΄Ρ€ΡƒΠ³Π° ΠΏΠΎ dns-ΠΈΠΌΠ΅Π½Π°ΠΌ, прописанным Π² ENV-ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Ρ… Dockerfil'Π°. ВстроСнный DNS docker'Π° Π½ΠΈΡ‡Π΅Π³ΠΎ Π½Π΅ Π·Π½Π°Π΅Ρ‚ ΠΎΠ± этих ΠΈΠΌΠ΅Π½Π°Ρ….

ΠŸΡ€ΠΈΡΠ²ΠΎΠ΅Π½ΠΈΠ΅ ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π°ΠΌ ΠΈΠΌΠ΅Π½ ΠΈΠ»ΠΈ сСтСвых алиасов ΠΏΡ€ΠΈ стартС:

--name <name> (max 1 имя)
--network-alias <alias-name> (1 ΠΈΠ»ΠΈ Π±ΠΎΠ»Π΅Π΅)
  • Запуск ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ² с сСтСвыми алиасами:
docker run -d --network=reddit --network-alias=post_db --network-alias=comment_db mongo:latest
docker run -d --network=reddit --network-alias=post ozyab/post:1.0
docker run -d --network=reddit --network-alias=comment ozyab/comment:1.0
docker run -d --network=reddit -p 9292:9292 ozyab/ui:1.0
  • Запуск ΠΏΡ€ΠΎΠ΅ΠΊΡ‚Π° Π² 2-Ρ… bridge сСтях. БСрвис ui Π½Π΅ ΠΈΠΌΠ΅Π΅Ρ‚ доступа ΠΊ Π±Π°Π·Π΅ Π΄Π°Π½Π½Ρ‹Ρ….

Create the docker networks:

docker network create back_net --subnet=10.0.2.0/24
docker network create front_net --subnet=10.0.1.0/24

Start the containers:

docker run -d --network=front_net -p 9292:9292 --name ui ozyab/ui:1.0
docker run -d --network=back_net --name comment ozyab/comment:1.0
docker run -d --network=back_net --name post ozyab/post:1.0
docker run -d --network=back_net --name mongo_db  --network-alias=post_db --network-alias=comment_db mongo:latest 

When initializing a container, Docker can attach only one network to it, so the comment and post containers do not see the ui container in the neighboring network.

The post and comment containers need to be placed in both networks. Additional networks are attached with docker network connect <network> <container>:

docker network connect front_net post
docker network connect front_net comment 

Install the bridge-utils package:

docker-machine ssh docker-host
sudo apt-get update && sudo apt-get install bridge-utils

Running docker network ls lists docker's virtual networks. ifconfig | grep br shows the list of bridge interfaces:

br-45935d0f2bbf Link encap:Ethernet  HWaddr 02:42:6d:5a:8b:7e
br-45bbc0c70de1 Link encap:Ethernet  HWaddr 02:42:94:69:ab:35
br-b6342f9c65f2 Link encap:Ethernet  HWaddr 02:42:9a:b1:73:d9

МоТно ΠΏΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΡŽ ΠΎ ΠΊΠ°ΠΆΠ΄ΠΎΠΌ bridge-интСрфСйсС ΠΊΠΎΠΌΠ°Π½Π΄ΠΎΠΉ brctl show <interface>:

docker-user@docker-host:~$brctl show br-45935d0f2bbf
bridge name      bridge id          STP enabled   interfaces
br-45935d0f2bbf  8000.02426d5a8b7e  no            veth05b2946
                                                  veth2f50985
                                                  vetha882d28

A veth interface is one half of a virtual interface pair; it sits in the host's network namespace and also shows up in ifconfig. The other half of the virtual interface lives inside the containers.

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€ iptables: sudo iptables -nL -t nat

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.0.1.0/24          0.0.0.0/0
MASQUERADE  all  --  10.0.2.0/24          0.0.0.0/0
MASQUERADE  tcp  --  10.0.1.2             10.0.1.2             tcp dpt:9292

The first rules are responsible for letting traffic out of the containers.

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:9292 to:10.0.1.2:9292

The last line forwards port 9292 into the container.

Docker-compose

  • Π€Π°ΠΉΠ»Ρƒ ./src/docker-compose.yml трСбуСтся пСрСмСнная окруТСния USERNAME: export USERNAME=ozyab

МоТно Π²Ρ‹ΠΏΠΎΠ»Π½ΠΈΡ‚ΡŒ: docker-compose up -d

  • Π˜Π·ΠΌΠ΅Π½ΠΈΡ‚ΡŒ docker-compose ΠΏΠΎΠ΄ кСйс с мноТСством сСтСй, сСтСвых алиасов
  • Π€Π°ΠΉΠ» .env - ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½Ρ‹Π΅ для docker-compose.yml
  • Π‘Π°Π·ΠΎΠ²ΠΎΠ΅ имя создаСтся ΠΏΠΎ ΠΈΠΌΠ΅Π½ΠΈ ΠΏΠ°ΠΏΠΊΠΈ, Π² ΠΊΠΎΡ‚ΠΎΡ€ΠΎΠΉ происходит запуск docker-compose.

To set the base project name explicitly, add the variable COMPOSE_PROJECT_NAME=dockermicroservices
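
A minimal sketch of how the multi-network case and the .env file might look; the exact service layout is an assumption, only the front_net/back_net split and the variable names come from this document:

# docker-compose.yml (fragment)
version: '3.5'
services:
  ui:
    image: ${USERNAME}/ui:1.0
    ports:
      - 9292:9292
    networks:
      - front_net
  post:
    image: ${USERNAME}/post:1.0
    networks:
      - front_net
      - back_net
networks:
  front_net:
  back_net:

# .env
USERNAME=ozyab
COMPOSE_PROJECT_NAME=dockermicroservices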

Homework 14 (docker-3)

Build Status

Work is done in the src folder:

  β€’ post-py - the service responsible for writing posts
  β€’ comment - the service responsible for writing comments
  β€’ ui - the web interface that talks to the other services
  β€’ Build the images:
docker build -t ozyab/post:1.0 ./post-py
docker build -t ozyab/comment:1.0 ./comment
docker build -t ozyab/ui:1.0 ./ui
  • ΠžΡ‚Π΄Π΅Π»ΡŒΠ½Π°Ρ bridge-ΡΠ΅Ρ‚ΡŒ для ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ², Ρ‚Π°ΠΊ ΠΊΠ°ΠΊ сСтСвыС алиасы Π½Π΅ Ρ€Π°Π±ΠΎΡ‚Π°ΡŽΡ‚ Π² сСти ΠΏΠΎ ΡƒΠΌΠΎΠ»Ρ‡Π°Π½ΠΈΡŽ: docker network create reddit

  • Запуск ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ² Π² этой сСти с сСтСвыми алиасами ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ²:

docker run -d --network=reddit --network-alias=post_db --network-alias=comment_db mongo:latest
docker run -d --network=reddit --network-alias=post ozyab/post:1.0
docker run -d --network=reddit --network-alias=comment ozyab/comment:1.0
docker run -d --network=reddit -p 9292:9292 ozyab/ui:1.0
  • ΠžΡΡ‚Π°Π½ΠΎΠ²ΠΊΠ° всСх ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ²: docker kill $(docker ps -q)
  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ Docker volume: docker volume create reddit_db
  • Запуск ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ² с docker volume:
docker run -d --network=reddit --network-alias=post_db --network-alias=comment_db -v reddit_db:/data/db mongo:latest
docker run -d --network=reddit --network-alias=post ozyab/post:1.0
docker run -d --network=reddit --network-alias=comment ozyab/comment:1.0
docker run -d --network=reddit -p 9292:9292 ozyab/ui:2.0
  • ПослС пСрСзапуска информация остаСтся Π² Π±Π°Π·Π΅

Homework 13 (docker-2)

Build Status

  • Π Π°Π±ΠΎΡ‚Π° с docker-machine:

docker-machine create <имя> - созданиС docker-хоста

eval $(docker-machine env <имя>) - ΠΏΠ΅Ρ€Π΅ΠΌΠ΅ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΠ΅ Π½Π° docker-хост

eval $(docker-machine env --unset) - ΠΏΠ΅Ρ€Π΅ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΠ΅ Π½Π° Π»ΠΎΠΊΠ°Π»ΡŒΠ½Ρ‹ΠΉ docker

docker-machine rm <имя> - ΡƒΠ΄Π°Π»Π΅Π½ΠΈΠ΅ docker-хоста

  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ docker-host:
docker-machine create --driver google \
  --google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
  --google-machine-type n1-standard-1 \
  --google-zone europe-west1-b \
  docker-host
  • ПослС этого ΡƒΠ²ΠΈΠ΄Π΅Ρ‚ΡŒ созданный docker-host ΠΌΠΎΠΆΠ½ΠΎ, Π²Ρ‹ΠΏΠΎΠ»Π½ΠΈΠ²: docker-machine ls
  • Запустим htop Π² docker'Π΅: docker run --rm -ti tehbilly/htop

Only the htop process will be visible.

If you run docker run --rm --pid host -ti tehbilly/htop, all processes on the host machine become visible.
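The same difference can be seen without htop (a minimal sketch; alpine is just a convenient small image):

docker run --rm alpine ps aux              # only the container's own PID namespace (ps itself is PID 1)
docker run --rm --pid host alpine ps aux   # the full process table of the host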

  • Π”ΠΎΠ±Π°Π²Π»Π΅Π½Ρ‹:

Dockerfile - тСкстовоС описаниС нашСго ΠΎΠ±Ρ€Π°Π·Π°

mongod.conf - ΠΏΠΎΠ΄Π³ΠΎΡ‚ΠΎΠ²Π»Π΅Π½Π½Ρ‹ΠΉ ΠΊΠΎΠ½Ρ„ΠΈΠ³ для mongodb

db_config - пСрСмСнная окруТСния со ссылкой Π½Π° mongodb

start.sh - скрипт запуска прилоТСния

  • Π‘Π±ΠΎΡ€ΠΊΠ° ΠΎΠ±Ρ€Π°Π·Π°: docker build -t reddit:latest .
  • Запуск ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π°: docker run --name reddit -d --network=host reddit:latest
  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ ΠΏΡ€Π°Π²ΠΈΠ»ΠΎ Π½Π° входящий ΠΏΠΎΡ€Ρ‚ 9292:
gcloud compute firewall-rules create reddit-app \
  --allow tcp:9292 \
  --target-tags=docker-machine \
  --description="Allow PUMA connections" \
  --direction=INGRESS 
  • ΠšΠΎΠΌΠ°Π½Π΄Ρ‹ ΠΏΠΎ Ρ€Π°Π±ΠΎΡ‚Π΅ с ΠΎΠ±Ρ€Π°Π·ΠΎΠΌ:

docker tag reddit:latest <login>/otus-reddit:1.0 - Π΄ΠΎΠ±Π°Π²ΠΈΡ‚ΡŒ тэг ΠΎΠ±Ρ€Π°Π·Ρƒ reddit

docker push <login>/otus-reddit:1.0 - ΠΎΡ‚ΠΏΡ€Π°Π²ΠΊΠ° ΠΎΠ±Ρ€Π°Π·Π° Π² registry

docker logs reddit -f - просмотр Π»ΠΎΠ³ΠΎΠ²

docker inspect <login>/otus-reddit:1.0 - просмотр ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΠΈ ΠΎΠ± ΠΎΠ±Ρ€Π°Π·Π΅

docker inspect <login>/otus-reddit:1.0 -f '{{.ContainerConfig.Cmd}}' - просмотр Ρ‚ΠΎΠ»ΡŒΠΊΠΎ ΠΎΠΏΡ€Π΅Π΄Π΅Π»Π΅Π½Π½ΠΎΠΉ ΠΈΠ½Ρ„ΠΎΡ€ΠΌΠ°Ρ†ΠΈΠΈ ΠΎ ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π΅

docker diff reddit - просмотр ΠΈΠ·ΠΌΠ΅Π½Π΅Π½ΠΈΠΉ, ΠΏΡ€ΠΎΠΈΠ·ΠΎΡˆΠ΅Π΄Π½ΠΈΡ… Π² Ρ„Π°ΠΉΠ»ΠΎΠ²ΠΎΠΉ систСмС Π·Π°ΠΏΡƒΡ‰Π΅Π½Π½ΠΎΠ³ΠΎ ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π°
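In the diff output every path is prefixed with A (added), C (changed) or D (deleted); a hypothetical example of what it might print:

C /var
C /var/log
A /var/log/app.log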

Homework 12 (docker-1)

Build Status

  • Π”ΠΎΠ±Π°Π²Π»Π΅Π½ Ρ„Π°ΠΉΠ» шаблона PR .github/PULL_REQUEST_TEMPLATE
  • Π˜Π½Ρ‚Π΅Π³Ρ€Π°Ρ†ΠΈΡ со slack выполняСтся ΠΊΠΎΠΌΠ°Π½Π΄ΠΎΠΉ /github subscribe Otus-DevOps-2018-09/ozyab09_microservices
  • Запуск ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π°: docker run hello-world
  • Бписок Π·Π°ΠΏΡƒΡ‰Π΅Π½Π½Ρ‹Ρ… ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ²: docker ps
  • Бписок всСх ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ΠΎΠ²: docker ps -a
  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ ΠΈ запуск ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π°: docker run -it ubuntu:16.04 /bin/bash
  • Команда run создаСт ΠΈ запускаСт ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€ ΠΈΠ· image
  • start запускаСт остановлСнный созданный Ρ€Π°Π½Π΅Π΅ ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€
  • attach подсоСдиняСт Ρ‚Π΅Ρ€ΠΌΠΈΠ½Π°Π» ΠΊ созданному ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Ρƒ
  • docker system df ΠΎΡ‚ΠΎΠ±Ρ€Π°ΠΆΠ°Π΅Ρ‚ сколько дискового пространства занято ΠΎΠ±Ρ€Π°Π·Π°ΠΌΠΈ, ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π°ΠΌΠΈ ΠΈ volume'Π°ΠΌΠΈ
  • Π‘ΠΎΠ·Π΄Π°Π½ΠΈΠ΅ ΠΎΠ±Ρ€Π°Π·Π° ΠΈΠ· ΠΊΠΎΠ½Ρ‚Π΅ΠΉΠ½Π΅Ρ€Π°: docker commit <container_id> username/ubuntu-tmp-file
