ozyab09 microservices repository
We need a running k8s cluster:
- at least 2 g1-small nodes (1.5 GB)
- at least 1 n1-standard-2 node (7.5 GB)
With the following settings:
- Stackdriver Logging - Disabled
- Stackdriver Monitoring - Disabled
- Legacy Authorization - Enabled
Since the cluster was created from scratch, reinstall Tiller: $ kubectl apply -f kubernetes/reddit/tiller.yml
Start the tiller server: $ helm init --service-account tiller
Check: $ kubectl get pods -n kube-system --selector app=helm
Install the nginx ingress controller from its Helm chart: $ helm install stable/nginx-ingress --name nginx
Find the assigned IP address:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.47.240.1 <none> 443/TCP 6m
nginx-nginx-ingress-controller LoadBalancer 10.47.255.40 35.197.107.27 80:30949/TCP,443:31982/TCP 1m
nginx-nginx-ingress-default-backend ClusterIP 10.47.251.3 <none> 80/TCP 1m
Add the names to /etc/hosts (append, so the existing entries are kept): # echo 35.197.107.27 reddit reddit-prometheus reddit-grafana reddit-non-prod production reddit-kibana staging prod >> /etc/hosts
- Deploy Prometheus in k8s
- Configure Prometheus and Grafana to collect metrics
- Set up EFK to collect logs
We will use the following tools:
- prometheus - the metrics collection and alerting server
- grafana - the metrics visualization server
- alertmanager - the prometheus component responsible for alerting
- various exporters for prometheus metrics
Prometheus is a great fit for containers and dynamically scheduled services.
We will install Prometheus from a Helm chart. Fetch prometheus locally into the Charts directory:
$ cd kubernetes/charts
$ helm fetch --untar stable/prometheus
Create a custom_values.yml file inside the chart directory.
The main differences from values.yml:
- some of the bundled services are disabled (pushgateway, alertmanager, kube-state-metrics)
- Ingress creation is enabled, so Prometheus is reachable through nginx
- the endpoint for scraping cadvisor metrics is fixed
- the scrape interval is reduced (from 1 minute to 30 seconds)
Start Prometheus in k8s:
$ cd kubernetes/charts/prometheus
$ helm upgrade prom . -f custom_values.yml --install
Open http://reddit-prometheus/ and go to the Targets section.
A number of endpoints are already being scraped:
- API server metrics
- node metrics from the cadvisors
- prometheus itself
Note that cadvisor metrics (cadvisor is already part of the kubelet) can be collected through a proxying request to kube-apiserver.
If you ssh into any of the cluster machines and run $ curl http://localhost:4194/metrics
you will get the same metrics from the kubelet directly.
The kube-api variant is preferable, though, because that traffic is TLS-encrypted and requires authentication.
The scrape targets were found via service discovery (SD), configured in the prometheus config (it lives in custom_values.yml):
prometheus.yml:
...
- job_name: 'kubernetes-apiservers' # kubernetes-apiservers (1/1 up)
  ...
- job_name: 'kubernetes-nodes' # kubernetes-nodes (3/3 up)
  kubernetes_sd_configs: # Service Discovery settings (how targets are found)
  - role: node
  scheme: https # Connection settings for the targets (how metrics are scraped)
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs: # Rules that relabel, filter and modify the discovered targets
Using SD in kubernetes lets us change the cluster dynamically (the hosts themselves as well as the services and applications). Monitoring targets are found by querying the k8s API:
custom_values.yml
...
prometheus.yml:
...
scrape_configs:
- job_name: 'kubernetes-nodes'
kubernetes_sd_configs:
- role: node
role is the type of object to discover:
- node
- endpoints
- pod
- service
- ingress
Since prometheus scrapes metrics over plain HTTP, additional settings may be needed for secure access to them.
Below are the settings for scraping metrics from the k8s API:
custom_values.yml
...
scheme: https # Connection scheme - http (default) or https
tls_config: # TLS config - the server's root certificate, used to verify the server's identity
  ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token # Token for authenticating to the server
custom_values.yml
...
# Kubernetes nodes
relabel_configs: # turn all k8s labels of the target into prometheus labels
- action: labelmap
  regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__ # Rewrite the label that holds the scrape address
  replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
  regex: (.+)
  target_label: __metrics_path__ # Rewrite the label that holds the metrics path
  replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
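A quick way to see what these two relabel rules produce: the Python sketch below applies the same rewrites to a discovered node target (the node name gke-node-1 is an invented example).

```python
import re

# A target as Kubernetes SD might discover it (node name is an invented example)
target = {
    "__address__": "10.132.0.2:10250",
    "__meta_kubernetes_node_name": "gke-node-1",
    "__metrics_path__": "/metrics",
}

# First rule: point the scrape address at the API server
target["__address__"] = "kubernetes.default.svc:443"

# Second rule: capture the node name with regex (.+) and substitute it for ${1}
m = re.fullmatch(r"(.+)", target["__meta_kubernetes_node_name"])
target["__metrics_path__"] = f"/api/v1/nodes/{m.group(1)}/proxy/metrics/cadvisor"

url = "https://" + target["__address__"] + target["__metrics_path__"]
print(url)
# https://kubernetes.default.svc:443/api/v1/nodes/gke-node-1/proxy/metrics/cadvisor
```

So every node's cadvisor ends up being scraped through the API server proxy rather than directly.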
All metrics found on the endpoints immediately show up in the list (the Graph tab). Cadvisor metrics start with container_.
Cadvisor only collects information about the resource consumption and performance of individual docker containers. It knows nothing about k8s entities (deployments, replica sets, ...).
To collect that information we will use the kube-state-metrics service. It is part of the Prometheus chart; let's enable it.
prometheus/custom_values.yml
...
kubeStateMetrics:
## If false, kube-state-metrics will not be installed
##
enabled: true
Update the release: $ helm upgrade prom . -f custom_values.yml --install
By analogy with kube-state-metrics, enable (enabled: true) the node-exporter pods in custom_values.yml:
prometheus/custom_values.yml
...
nodeExporter:
enabled: true
Update the release: $ helm upgrade prom . -f custom_values.yml --install
Check that metrics are now being collected from them.
Start the application from the reddit helm chart:
$ cd kubernetes/charts
$ helm upgrade reddit-test ./reddit --install
$ helm upgrade production --namespace production ./reddit --install
$ helm upgrade staging --namespace staging ./reddit --install
Previously we "hardcoded" the addresses/dns names of our applications in order to scrape their metrics:
prometheus.yml
- job_name: 'ui'
static_configs:
- targets:
- 'ui:9292'
- job_name: 'comment'
static_configs:
- targets:
- 'comment:9292'
Now we can use the ServiceDiscovery mechanism to find the applications running in k8s.
We will discover the applications the same way as the k8s system services. Update the prometheus config:
custom_values.yml
- job_name: 'reddit-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    action: keep # Use the keep action to keep only endpoints of services labeled "app=reddit"
    regex: reddit
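How the keep action filters targets can be sketched in Python (the endpoint names below are invented; Prometheus fully anchors the regex, so only an exact app=reddit label survives):

```python
import re

# Discovered endpoints with the "app" label of their service (invented examples)
endpoints = {
    "reddit-test-ui:9292": "reddit",
    "reddit-test-post:5000": "reddit",
    "prom-prometheus-server:80": "prometheus",
}

# action: keep with regex "reddit" - the regex is fully anchored,
# so only targets whose label matches exactly are kept
kept = sorted(ep for ep, app in endpoints.items() if re.fullmatch("reddit", app))
print(kept)
# ['reddit-test-post:5000', 'reddit-test-ui:9292']
```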
Update the prometheus release: $ helm upgrade prom . -f custom_values.yml --install
We got the endpoints, but we do not know which pods they belong to. Let's add the k8s labels. All k8s labels and annotations are initially exposed in prometheus in the form:
__meta_kubernetes_service_label_labelname
__meta_kubernetes_service_annotation_annotationname
custom_values.yml
relabel_configs:
- action: labelmap # Map every regex group match into a Prometheus label
  regex: __meta_kubernetes_service_label_(.+)
Update the prometheus release: $ helm upgrade prom . -f custom_values.yml --install
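The effect of labelmap can be illustrated with a small Python sketch (the meta label values are invented): each __meta_kubernetes_service_label_* is copied into a plain Prometheus label, while other meta labels are left alone.

```python
import re

# Meta labels as Kubernetes SD exposes them (values are invented examples)
meta = {
    "__meta_kubernetes_service_label_app": "reddit",
    "__meta_kubernetes_service_label_component": "ui",
    "__meta_kubernetes_service_annotation_note": "ignored by this rule",
}

# action: labelmap with regex __meta_kubernetes_service_label_(.+)
labels = {}
for name, value in meta.items():
    m = re.fullmatch(r"__meta_kubernetes_service_label_(.+)", name)
    if m:
        labels[m.group(1)] = value  # the captured group becomes the label name

print(labels)
# {'app': 'reddit', 'component': 'ui'}
```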
Now we can see the k8s labels assigned to the PODs.
Let's add more labels for prometheus and update the helm release. Since the __meta* labels are not published, we have to create our own labels and copy the information into them:
custom_values.yml
...
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
...
Update the prometheus release: $ helm upgrade prom . -f custom_values.yml --install
Right now we collect metrics from all the reddit services in one group of targets. We can separate the components' targets from each other (by environment, by component) and turn monitoring on and off for them, using those same labels. For example, add one more job to the config:
custom_values.yml
...
- job_name: 'reddit-production'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_namespace] # several source labels at once
  action: keep
  regex: reddit;(production|staging)+ # their joined values are matched by one regex
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
...
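When several source_labels are listed, Prometheus joins their values with ";" (the default separator) before matching the regex. A Python sketch of the rule above (the app/namespace pairs are invented examples):

```python
import re

# (app label, namespace) pairs of discovered endpoints (invented examples)
candidates = [
    ("reddit", "production"),
    ("reddit", "staging"),
    ("reddit", "default"),
    ("prometheus", "production"),
]

# source_labels values are joined with ";" and matched against the job's regex
regex = re.compile(r"reddit;(production|staging)+")
kept = [f"{app};{ns}" for app, ns in candidates if regex.fullmatch(f"{app};{ns}")]
print(kept)
# ['reddit;production', 'reddit;staging']
```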
Update the prometheus release and take a look:
Metrics will now be shown for all instances of the applications.
Let's split the reddit-endpoints job configuration so that there is a separate job for each application component (post-endpoints, comment-endpoints, ui-endpoints), and remove reddit-endpoints:
custom_values.yml
...
- job_name: 'post-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_label_component,__meta_kubernetes_namespace]
action: keep
regex: post;(production|staging)+
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'ui-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_label_component,__meta_kubernetes_namespace]
action: keep
regex: ui;(production|staging)+
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'comment-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_label_component,__meta_kubernetes_namespace]
action: keep
regex: comment;(production|staging)+
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
...
Also install grafana using helm:
helm upgrade --install grafana stable/grafana --set "adminPassword=admin" \
--set "service.type=NodePort" \
--set "ingress.enabled=true" \
--set "ingress.hosts={reddit-grafana}"
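If the list of --set flags grows, the same settings can be kept in a values file instead; a sketch of the equivalent (the file name grafana-values.yml is our own choice):

```yaml
# grafana-values.yml - equivalent of the --set flags above (file name is arbitrary)
adminPassword: admin
service:
  type: NodePort
ingress:
  enabled: true
  hosts:
    - reddit-grafana
```

and install with: $ helm upgrade --install grafana stable/grafana -f grafana-values.yml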
Open http://reddit-grafana/
Add the prometheus data source.
Derive its address from the name of the prometheus server service:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana-grafana NodePort 10.11.252.216 <none> 80:31886/TCP 22m
kubernetes ClusterIP 10.11.240.1 <none> 443/TCP 22d
nginx-nginx-ingress-controller LoadBalancer 10.11.243.76 104.154.94.52 80:32293/TCP,443:30193/TCP 7h
nginx-nginx-ingress-default-backend ClusterIP 10.11.248.132 <none> 80/TCP 7h
prom-prometheus-server LoadBalancer 10.11.247.75 35.224.121.85 80:30282/TCP 4d
Add the most popular dashboard for tracking the state of k8s resources. Choose the datasource:
Add our own dashboards created earlier (in the monitoring homework). They should display data just as well:
At the moment, the graphs related to the application show metric values from all sources at once. With many dynamically changing environments, it makes sense to make the configuration of our Grafana dashboards dynamic as well.
In our case this can be done with the templating mechanism.
We now have a list with the values of the namespace variable. So far it is useless: for it to have any effect, the Prometheus queries must be templated.
Now we can build shared graph templates and use variables to switch the fields we need in them (in our case, the namespace).
Parametrize all the reddit dashboards that reflect application metrics (created in the previous homeworks) to work with several environments (namespaces).
Import the dashboard: https://grafana.com/dashboards/741.
This dashboard uses metrics and templates from both cAdvisor and kube-state-metrics at the same time to display summary information about the deployments.
Helm is a package manager for Kubernetes. With it we will:
- Standardize application delivery to Kubernetes
- Declare the infrastructure
- Deploy new versions of the application
Helm is a client-server application. Install its client part, the Helm console client:
$ brew install kubernetes-helm
Helm reads the kubectl configuration (~/.kube/config) and determines the current context (cluster, user, namespace). To switch clusters: $ kubectl config set-context, or pass helm its own config file with the --kube-context flag.
Install the server part of Helm - Tiller. Tiller is a Kubernetes addon, i.e. a Pod that talks to the Kubernetes API. For this it needs a ServiceAccount and the RBAC roles required for its work.
Create tiller.yml and put the manifest into it:
tiller.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
Apply it: $ kubectl apply -f tiller.yml
Start the tiller server: $ helm init --service-account tiller
Check: $ kubectl get pods -n kube-system --selector app=helm
output
NAME READY STATUS RESTARTS AGE
tiller-deploy-689d79895f-tmhlp 1/1 Running 0 51s
A Chart is a package in Helm.
Create a Charts directory in the kubernetes folder with the following structure:
ββ Charts
ββ comment
ββ post
ββ reddit
ββ ui
Let's start developing a Chart for the ui component of the application.
Create the chart description file:
ui/Chart.yaml
---
name: ui
version: 1.0.0
description: OTUS reddit application UI
maintainers:
- name: Vyacheslav Egorov
email: 692677@mail.ru
appVersion: 1.0
The significant fields are name and version; Helm's handling of the Chart depends on them. The rest is descriptive.
The main content of a Chart is the templates of Kubernetes manifests.
- Create the ui/templates directory
- Move into it all the manifests developed earlier for the ui service (ui-service, ui-deployment, ui-ingress)
- Rename the manifests (drop the "ui-" prefix) and change the extension to .yaml - stylistic changes
βββ ui
βββ Chart.yaml
βββ templates
βββ deployment.yaml
βββ ingress.yaml
βββ service.yaml
Essentially, this is already a ready-made package for installing into Kubernetes.
- Make sure no application components are deployed in kubernetes. If they are, delete them:
kubectl delete service ui -n dev
kubectl delete deploy ui -n dev
kubectl delete ingress ui -n dev
- Install the Chart:
helm install --name test-ui-1 ui/
where test-ui-1 is the release name
and ui/ is the path to the Chart.
- Let's see what we got:
helm ls
output
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
test-ui-1 1 Wed Jan 30 21:38:50 2019 DEPLOYED ui-1.0.0 1 default
Now let's make it possible to use a single Chart to run several instances (releases). Templatize it:
ui/templates/service.yaml
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
labels:
app: reddit
component: ui
release: {{ .Release.Name }}
spec:
type: NodePort
ports:
- port: 9292
protocol: TCP
targetPort: 9292
selector:
app: reddit
component: ui
release: {{ .Release.Name }}
ui/templates/service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }} # unique name of the deployed resource
  labels:
    app: reddit
    component: ui
    release: {{ .Release.Name }} # mark which release the service belongs to
spec:
  type: NodePort
  ports:
  - port: {{ .Values.service.externalPort }}
    protocol: TCP
    targetPort: 9292
  selector:
    app: reddit
    component: ui
    release: {{ .Release.Name }} # Select only the PODs from this release
Here we use built-in variables:
- .Release - a group of variables with information about the release (a particular deployment of the Chart into k8s)
- .Chart - a group of variables with information about the Chart (the contents of Chart.yaml)
There are also these groups of variables:
- .Template - information about the current template (.Name and .BasePath)
- .Capabilities - information about Kubernetes (version, API versions)
- .Files.Get - get the contents of a file
Templatize the remaining entities in the same way
ui/templates/deployment.yaml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
labels:
app: reddit
component: ui
release: {{ .Release.Name }}
spec:
...
selector:
matchLabels:
app: reddit # It is important that the deployment's
component: ui # selector finds only the right PODs
release: {{ .Release.Name }}
template:
metadata:
name: ui-pod
labels:
app: reddit
component: ui
release: {{ .Release.Name }}
spec:
containers:
- image: ozyab/ui
name: ui
ports:
- containerPort: 9292
name: ui
protocol: TCP
env:
- name: ENV
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ui/templates/ingress.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
annotations:
kubernetes.io/ingress.class: "gce"
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: {{ .Release.Name }}-{{ .Chart.Name }}
servicePort: 9292
Install several ui releases:
$ helm install ui --name ui-1
$ helm install ui --name ui-2
$ helm install ui --name ui-3
Three ingresses should appear: $ kubectl get ingress
output
NAME HOSTS ADDRESS PORTS AGE
ui-1-ui * 35.201.126.86 80 5m
ui-2-ui * 35.201.67.17 80 1m
ui-3-ui * 35.227.242.231 80 1m
We have already made it possible to run several versions of the application from a single package of manifests, using only the built-in variables. Now let's customize the installation with our own variables (image and port).
ui/templates/deployment.yaml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
...
spec:
containers:
- image: "{{ .Values.image.repository }}/ui:{{ .Values.image.tag }}"
name: ui
ports:
- containerPort: {{ .Values.service.internalPort }}
ui/templates/service.yaml
---
apiVersion: v1
kind: Service
metadata:
...
spec:
type: NodePort
ports:
- port: {{ .Values.service.externalPort }}
protocol: TCP
targetPort: {{ .Values.service.internalPort }}
selector:
app: reddit
component: ui
release: {{ .Release.Name }}
ui/templates/ingress.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
annotations:
kubernetes.io/ingress.class: "gce"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: {{ .Release.Name }}-{{ .Chart.Name }}
servicePort: {{ .Values.service.externalPort }}
Define the values of our own variables in the values.yaml file:
ui/values.yaml
---
service:
internalPort: 9292
externalPort: 9292
image:
repository: ozyab/ui
tag: latest
Now the services can be upgraded:
helm upgrade ui-1 ui/
helm upgrade ui-2 ui/
helm upgrade ui-3 ui/
We have assembled a Chart for deploying the ui component of the application. It has the following structure:
βββ ui
βββ Chart.yaml
βββ templates
β βββ deployment.yaml
β βββ ingress.yaml
β βββ service.yaml
βββ values.yaml
Let's build packages for the remaining components:
post/templates/service.yaml
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
labels:
app: reddit
component: post
release: {{ .Release.Name }}
spec:
ports:
- port: {{ .Values.service.externalPort }}
protocol: TCP
targetPort: {{ .Values.service.internalPort }}
selector:
app: reddit
component: post
release: {{ .Release.Name }}
post/templates/deployment.yaml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
labels:
app: reddit
component: post
release: {{ .Release.Name }}
spec:
replicas: 1
selector:
matchLabels:
app: reddit
component: post
release: {{ .Release.Name }}
template:
metadata:
name: post
labels:
app: reddit
component: post
release: {{ .Release.Name }}
spec:
containers:
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
name: post
ports:
- containerPort: {{ .Values.service.internalPort }}
name: post
protocol: TCP
env:
- name: POST_DATABASE_HOST
value: postdb
Note the DB address:
env:
- name: POST_DATABASE_HOST
value: postdb
Since the DB address can change depending on the launch conditions:
- the db runs outside the cluster
- the db runs in a separate release
- ...
Let's create a convenient template for setting the DB address.
env:
- name: POST_DATABASE_HOST
value: {{ .Values.databaseHost }}
We will set the db via the databaseHost variable. Sometimes this flat variable format is preferable to a database.host structure: with the structure you would have to define the whole database object, otherwise helm would raise an error.
We use the default function. If databaseHost is not defined, or its value is empty, the output of the printf function is used (it simply builds the string <releasename>-mongodb).
value: {{ .Values.databaseHost | default (printf "%s-mongodb" .Release.Name) }}
The result:
env:
- name: POST_DATABASE_HOST
value: {{ .Values.databaseHost | default (printf "%s-mongodb" .Release.Name) }}
If databaseHost is not set, the address of the database started inside the release will be used.
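The default/printf pipeline is easy to mimic outside Helm; a Python sketch of the same fallback logic (the release name reddit-test is just an example):

```python
def database_host(values: dict, release_name: str) -> str:
    """Mimics {{ .Values.databaseHost | default (printf "%s-mongodb" .Release.Name) }}."""
    # Helm's default function fires when the value is missing or empty
    return values.get("databaseHost") or f"{release_name}-mongodb"

print(database_host({}, "reddit-test"))                                 # reddit-test-mongodb
print(database_host({"databaseHost": ""}, "reddit-test"))               # reddit-test-mongodb
print(database_host({"databaseHost": "mongo.example"}, "reddit-test"))  # mongo.example
```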
See the documentation on templating and functions.
post/values.yaml
---
service:
internalPort: 5000
externalPort: 5000
image:
repository: ozyab/post
tag: latest
databaseHost:
Templatize the comment service:
comment/templates/deployment.yaml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
labels:
app: reddit
component: comment
release: {{ .Release.Name }}
spec:
replicas: 1
selector:
matchLabels:
app: reddit
component: comment
release: {{ .Release.Name }}
template:
metadata:
name: comment
labels:
app: reddit
component: comment
release: {{ .Release.Name }}
spec:
containers:
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
name: comment
ports:
- containerPort: {{ .Values.service.internalPort }}
name: comment
protocol: TCP
env:
- name: COMMENT_DATABASE_HOST
value: {{ .Values.databaseHost | default (printf "%s-mongodb" .Release.Name) }}
comment/templates/service.yaml
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-{{ .Chart.Name }}
labels:
app: reddit
component: comment
release: {{ .Release.Name }}
spec:
type: ClusterIP
ports:
- port: {{ .Values.service.externalPort }}
protocol: TCP
targetPort: {{ .Values.service.internalPort }}
selector:
app: reddit
component: comment
release: {{ .Release.Name }}
comment/values.yaml
---
service:
  internalPort: 9292
  externalPort: 9292
image:
  repository: ozyab/comment
  tag: latest
databaseHost:
The final project structure looks like this:
$ tree
.
βββ comment
β βββ Chart.yaml
β βββ templates
β β βββ deployment.yml
β β βββ service.yml
β βββ values.yaml
βββ post
β βββ Chart.yaml
β βββ templates
β β βββ deployment.yaml
β β βββ service.yml
β βββ values.yaml
βββ ui
βββ Chart.yaml
βββ templates
β βββ deployment.yaml
β βββ ingress.yaml
β βββ service.yaml
βββ values.yaml
Also worth noting is helm's support for helpers and the template function. A helper is a function we write ourselves, usually holding relatively complex logic. The templates of these functions live in the _helpers.tpl file.
An example of a comment.fullname function:
charts/comment/templates/_helpers.tpl
{{- define "comment.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name }}
{{- end -}}
which produces the same result as:
{{ .Release.Name }}-{{ .Chart.Name }}
Replace the corresponding lines in the file so that the helper is used:
charts/comment/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ template "comment.fullname" . }} # was: {{ .Release.Name }}-{{ .Chart.Name }}
The template function calls the comment.fullname function defined earlier in the _helpers.tpl file.
The structure of the importing template call is: {{ template "comment.fullname" . }}
where template is the template function
- comment.fullname is the name of the function to import
- "." is the scope passed to the function
"." is the full scope of all variables (you could pass .Chart instead, but then .Values would not be available inside the function)
- Create a _helpers.tpl file in the templates folders of the ui, post and comment services
- Put a ".fullname" function into each _helpers.tpl file, replacing the prefix with the chart name of the corresponding service
- In each manifest template, insert the following function wherever it is needed (mostly in name: fields):
{{ template "comment.fullname" . }}
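For example, the ui chart's helper would look like this (a sketch mirroring the comment.fullname definition above):

```
ui/templates/_helpers.tpl
{{- define "ui.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name }}
{{- end -}}
```

and in ui/templates/service.yaml the name field becomes: name: {{ template "ui.fullname" . }}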
We created Charts for each component of our application. Each of them can be started separately with $ helm install <chart-path> --name <release-name>,
but they would run as different releases and would not see each other.
Using the dependency management mechanism, let's create a single reddit Chart that combines our components.
The structure of the reddit application:
Create the file:
reddit/Chart.yaml
name: reddit
version: 0.1.0
description: OTUS sample reddit application
maintainers:
- name: Vyacheslav Egorov
email: 692677@mail.ru
Create an empty reddit/values.yaml file.
The dependencies file:
reddit/requirements.yaml
dependencies:
  - name: ui # The name and version must match
    version: "1.0.0" # the contents of ui/Chart.yaml
    repository: "file://../ui" # Path relative to the location of requirements.yaml itself
  - name: post
    version: 1.0.0
    repository: file://../post
  - name: comment
    version: 1.0.0
    repository: file://../comment
The dependencies need to be loaded (when the Chart is not packaged into a tgz archive):
$ helm dep update
A requirements.lock file pinning the dependencies will be created, along with a charts directory containing the dependencies as archives.
The folder structure:
βββ Chart.yaml
βββ charts
β βββ comment-1.0.0.tgz
β βββ post-1.0.0.tgz
β βββ ui-1.0.0.tgz
βββ requirements.lock
βββ requirements.yaml
βββ values.yaml
We will not write the Chart for the database by hand - let's take a ready-made one.
Find a Chart in the public repository: $ helm search mongo
output
NAME CHART VERSION APP VERSION DESCRIPTION
stable/mongodb 5.3.1 4.0.5 NoSQL document-oriented database that stores JSON-like do...
stable/mongodb-replicaset 3.9.0 3.6 NoSQL document-oriented database that stores JSON-like do...
Add to reddit/requirements.yaml:
reddit/requirements.yaml
dependencies:
...
- name: comment
version: 1.0.0
repository: file://../comment
- name: mongodb
version: 0.4.18
repository: https://kubernetes-charts.storage.googleapis.com
Load the dependencies: $ helm dep update
Install the application: kubernetes/Charts $ helm install reddit --name reddit-test
Find the application's IP address: $ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
reddit-test-ui * 35.201.126.86 80 1m
There is a problem: the UI service does not know how to reach the post and comment services, because their names are now dynamic and depend on the release names.
The UI service's Dockerfile already defines environment variables. They need to point at the right backends:
ENV POST_SERVICE_HOST post
ENV POST_SERVICE_PORT 5000
ENV COMMENT_SERVICE_HOST comment
ENV COMMENT_SERVICE_PORT 9292
Add to ui/deployments.yaml:
ui/deployments.yaml
...
spec:
...
env:
- name: POST_SERVICE_HOST
value: {{ .Values.postHost | default (printf "%s-post" .Release.Name) }}
- name: POST_SERVICE_PORT
value: {{ .Values.postPort | default "5000" | quote }}
- name: COMMENT_SERVICE_HOST
value: {{ .Values.commentHost | default (printf "%s-comment" .Release.Name) }}
- name: COMMENT_SERVICE_PORT
value: {{ .Values.commentPort | default "9292" | quote }}
# quote is a helper function that wraps the value in quotes; this matters for numeric and boolean values
...
Add to ui/values.yaml (link to the gist)
ui/values.yaml
...
postHost:
postPort:
commentHost:
commentPort:
Now the variables for the dependencies can be set directly in the values.yaml of the reddit chart itself. They override the values of those variables in the dependent charts:
comment: # referencing variables of the charts pulled in as dependencies
image:
repository: ozyab/comment
tag: latest
service:
externalPort: 9292
post:
image:
repository: ozyab/post
tag: latest
service:
externalPort: 5000
ui:
image:
repository: ozyab/ui
tag: latest
service:
externalPort: 9292
After updating the UI, refresh the reddit chart's dependencies: $ helm dep update ./reddit
Update the release installed in k8s: $ helm upgrade <release-name> ./reddit
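Because parent values override dependency values, a release can also pin specific image tags without touching the subcharts. A minimal sketch of an override file (the file name and tag values here are hypothetical):

```yaml
# values.override.yaml - merged on top of the dependencies' own values.yaml
ui:
  image:
    tag: 1.0.1   # hypothetical tag produced by CI
post:
  image:
    tag: 1.0.1
comment:
  image:
    tag: 1.0.1
```

It would be applied with something like `helm upgrade <release-name> ./reddit -f values.override.yaml`.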
- Install Gitlab
Gitlab will also be installed with a Helm chart, from the Omnibus package.
- Add the Gitlab repository
$ helm repo add gitlab https://charts.gitlab.io
- We are going to change the Gitlab configuration, so download the chart
$ helm fetch gitlab/gitlab-omnibus --version 0.1.37 --untar
$ cd gitlab-omnibus
- Edit gitlab-omnibus/values.yaml
baseDomain: example.com
legoEmail: you@example.com
- Add to gitlab-omnibus/templates/gitlab/gitlab-svc.yaml:
...
- name: web
port: 80
targetPort: workhorse
- Edit gitlab-omnibus/templates/gitlab-config.yaml:
...
heritage: "{{ .Release.Service }}"
data:
external_scheme: http
external_hostname: {{ template "fullname" . }}
...
- Edit gitlab-omnibus/templates/ingress/gitlab-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
...
spec:
tls:
...
rules:
- host: {{ template "fullname" . }}
http:
paths:
...
Install GitLab: $ helm install --name gitlab . -f values.yaml
Find the IP address issued to the ingress controller: $ kubectl get service -n nginx-ingress nginx
Add an entry to the local /etc/hosts: # echo "35.184.43.93 gitlab-gitlab staging production" >> /etc/hosts
Open http://gitlab-gitlab and set our own password.
Create the ozyab group. In the group settings, open CI/CD and define the CI_REGISTRY_USER and CI_REGISTRY_PASSWORD variables - the dockerhub login and password. These credentials will be used when building and releasing docker images from Gitlab CI.
In the group, create a new project reddit-deploy, as well as comment, post and ui.
Locally, create a Gitlab_ci directory with the following structure:
Gitlab_ci
βββ comment
βββ post
βββ reddit-deploy
βββ ui
Move the services' source code from src/ into kubernetes/Gitlab_ci/ui.
In the Gitlab_ci/ui directory:
- Initialize a local git repository:
$ git init
- Add the remote repository
$ git remote add origin http://gitlab-gitlab/ozyab/ui.git
- Commit and push to gitlab:
$ git add .
$ git commit -m "init"
$ git push origin master
Move the contents of the Charts directory (the ui, post, comment and reddit folders) into Gitlab_ci/reddit-deploy and push to reddit-deploy.
Add a .gitlab-ci.yml file, push it and check that the build succeeds.
In the current configuration, CI performs:
- Build: building a docker image tagged master
- Test: a dummy test stage
- Release: retagging from master to the tag from the VERSION file, and pushing the docker image with the new tag
The job for each task runs in a separate Kubernetes pod.
The required operations are invoked in script blocks:
script:
- setup_docker
- build
The operations themselves are described as bash functions in the .auto_devops block:
.auto_devops: &auto_devops |
function setup_docker() {
β¦
}
function release() {
β¦
}
function build() {
β¦
}
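Put together, the skeleton of such a ui/.gitlab-ci.yml looks roughly like this (a sketch assembled from the stages described above; the base image is an assumption):

```yaml
image: alpine:latest   # assumed base image for the jobs

stages:
  - build
  - test
  - release

before_script:
  - *auto_devops       # pull the bash functions into every job

build:
  stage: build
  script:
    - setup_docker
    - build

test:
  stage: test
  script:
    - exit 0           # dummy testing

release:
  stage: release
  only:
    - master
  script:
    - setup_docker
    - release
```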
For Post and Comment, also add a .gitlab-ci.yml to each repository.
Let's give developers the ability to spin up a separate environment in Kubernetes on a commit to a feature branch.
Slightly update the ingress config for the UI service:
reddit-deploy/ui/templates/ingress.yml
...
name: {{ template "ui.fullname" . }}
annotations:
kubernetes.io/ingress.class: {{ .Values.ingress.class }}
spec:
rules:
- host: {{ .Values.ingress.host | default .Release.Name }}
http:
paths:
- path: / # The controller is nginx now, so the rule differs
backend:
serviceName: {{ template "ui.fullname" . }}
servicePort: {{ .Values.service.externalPort }}
reddit-deploy/ui/values.yml
...
ingress:
class: nginx # use the nginx-ingress that was installed together with gitlab
... # (it is faster, and its rules are more flexible than GCP's)
- Create a new branch in the ui repository
$ git checkout -b feature/3
- Update ui/.gitlab-ci.yml
- Commit and push the changes:
$ git commit -am "Add review feature"
$ git push origin feature/3
We added a review stage that deploys the application to k8s on a commit to a feature branch (not master):
review:
stage: review
script:
- install_dependencies
- ensure_namespace
- install_tiller
- deploy
variables:
KUBE_NAMESPACE: review
host: $CI_PROJECT_PATH_SLUG-$CI_COMMIT_REF_SLUG
environment:
name: review/$CI_PROJECT_PATH/$CI_COMMIT_REF_NAME
url: http://$CI_PROJECT_PATH_SLUG-$CI_COMMIT_REF_SLUG
only:
refs:
- branches
kubernetes: active
except:
- master
We added a deploy function that clones the chart from the reddit-deploy repository and releases it into the review namespace with the application image built at the build stage:
function deploy() {
...
echo "Clone deploy repository..."
git clone http://gitlab-gitlab/$CI_PROJECT_NAMESPACE/reddit-deploy.git
echo "Download helm dependencies..."
helm dep update reddit-deploy/reddit
echo "Deploy helm release $name to $KUBE_NAMESPACE"
helm upgrade --install \
--wait \
--set ui.ingress.host="$host" \
--set $CI_PROJECT_NAME.image.tag=$CI_APPLICATION_TAG \
--namespace="$KUBE_NAMESPACE" \
--version="$CI_PIPELINE_ID-$CI_JOB_ID" \
"$name" \
reddit-deploy/reddit/
}
We can see which releases are running with helm ls:
NAME REVISION UPDATED STATUS CHART NAMESPACE
gitlab 1 Sun Feb 3 18:07:41 2019 DEPLOYED gitlab-omnibus-0.1.37 default
review-ozyab-ui-f-92dwpg 1 Sun Feb 3 19:42:54 2019 DEPLOYED reddit-0.1.0 review
Environments created for such purposes are temporary; they should be torn down when no longer needed. Add to .gitlab-ci.yml:
stop_review:
stage: cleanup
variables:
GIT_STRATEGY: none
script:
- install_dependencies
- delete
environment:
name: review/$CI_PROJECT_PATH/$CI_COMMIT_REF_NAME
action: stop
when: manual
allow_failure: true
only:
refs:
- branches
kubernetes: active
except:
- master
Add a function that deletes the environment:
function delete() {
track="${1-stable}"
name="$CI_ENVIRONMENT_SLUG"
helm delete "$name" --purge || true
}
Delete the environment by pressing the stop button in the Gitlab UI.
helm ls:
NAME REVISION UPDATED STATUS CHART NAMESPACE
gitlab 1 Sun Feb 3 18:07:41 2019 DEPLOYED gitlab-omnibus-0.1.37 default
Copy the resulting ui .gitlab-ci.yml into the post and comment repositories.
Now define staging and production environments for the application in reddit-deploy/.gitlab-ci.yml.
Push the master branch to the reddit-deploy repository.
This file differs from the previous ones in that it:
- Does not build docker images
- Deploys to static environments (staging and production)
- Does not delete environments
Service is an abstraction that defines the endpoints of access and the way to communicate with them (NodePort, LoadBalancer, ClusterIP).
A Service defines the endpoints:
- selector services (k8s finds the pods itself, by labels)
- selectorless services (we describe the specific endpoints manually)
and the way to communicate with them (the service type):
- ClusterIP - the service is reachable only from inside the cluster
- NodePort - a client outside the cluster connects to a published port
- LoadBalancer - a client connects to a cloud load-balancing resource (AWS ELB, Google GCLB)
- ExternalName - a resource external to the cluster
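An ExternalName service, for instance, is nothing more than a DNS alias; a sketch (the service name and external host are hypothetical):

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: external-mongo
spec:
  type: ExternalName
  externalName: db.example.com # returned as a CNAME; no proxying, no ClusterIP
```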
A service was described earlier:
post-service.yml
---
apiVersion: v1
kind: Service
metadata:
name: post
labels:
app: reddit
component: post
spec:
ports:
- port: 5000
protocol: TCP
targetPort: 5000
selector:
app: reddit
component: post
This is a selector service of type ClusterIP (the type is not specified because it is the default).
A ClusterIP is a virtual IP address (in reality there is no interface, pod or machine with this address) from the internal address range; it hides the IP addresses of the real pods behind it. A service of any type (except ExternalName) is assigned such an IP address.
$ kubectl get services -n dev
output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
comment ClusterIP 10.11.248.46 <none> 9292/TCP 2d
comment-db ClusterIP 10.11.252.202 <none> 27017/TCP 2d
mongo ClusterIP 10.11.243.229 <none> 27017/TCP 2d
post ClusterIP 10.11.254.105 <none> 5000/TCP 2d
post-db ClusterIP 10.11.240.87 <none> 27017/TCP 2d
ui NodePort 10.11.245.7 <none> 9292:32092/TCP 2d
Interaction diagram:
A Service is just an abstraction, a description of how to reach a service. But it relies on real mechanisms and objects: a DNS server, load balancers, iptables.
To reach a service, we need to resolve its address by name. Kubernetes does not have a DNS server of its own for name resolution, so the kube-dns add-on is used (it is a pod as well).
Its tasks:
- query the Kubernetes API and watch Service objects
- record DNS entries about Services in its own database
- provide a DNS service that resolves names into IP addresses (both internal and external)
With the kube-dns service disabled, connectivity between the reddit-app components is lost and the application stops working.
Scale to 0 the service that makes sure the kube-dns pods are always available; then do the same with kube-dns itself:
$ kubectl scale deployment --replicas 0 -n kube-system kube-dns-autoscaler
$ kubectl scale deployment --replicas 0 -n kube-system kube-dns
Run the command:
kubectl exec -ti -n dev post-8ff9c4cb9-h4zpq ping comment
output:
ping: bad address 'comment'
command terminated with exit code 1
Bring the autoscaler back:
kubectl scale deployment --replicas 1 -n kube-system kube-dns-autoscaler
A ClusterIP is virtual and does not belong to any real physical entity. Reading it, and handling the packets addressed to it, is done in our case by iptables, which is configured by the kube-proxy utility (which pulls its information from the API server).
kube-proxy itself can be configured to accept the traffic, but that is legacy behavior and is not recommended.
Regardless of whether the pods are on one node or on different ones, the traffic goes through the chain pictured above. Kubernetes does not ship with a mechanism for building overlay networks (as Docker Swarm does); it only provides an interface for it. Separate add-ons are used to create overlay networks: Weave, Calico, Flannel, etc.
Google Kubernetes Engine (GKE) uses its own kubenet plugin (it is part of kubelet). It works only with the GCP platform and essentially configures google networks to carry Kubernetes traffic. That is why you will not see any overlay networks in the current Docker configuration.
- NodePort is similar to a ClusterIP service, with the addition of port forwarding on the nodes (all the nodes) for external access to the services. A ClusterIP is still assigned to such a service, for access from inside the cluster. kube-proxy forwards either the specified port (nodePort: 32092) or a port from the 30000-32767 range. iptables then decides which pod the traffic lands on.
The UI service was already published externally with NodePort earlier:
ui-service.yml
---
apiVersion: v1
kind: Service
metadata:
name: ui
labels:
app: reddit
component: ui
spec:
type: NodePort
ports:
- port: 9292
nodePort: 32092
protocol: TCP
targetPort: 9292
selector:
app: reddit
component: ui
- LoadBalancer
NodePort does provide external access to the service, but opening all the ports externally, or hunting for the IP addresses of our nodes (which are dynamic anyway), is not very convenient.
The LoadBalancer type lets us use an external cloud load balancer as a single entry point into our services, instead of relying on iptables and exposing the whole cluster.
Configure the UI Service accordingly:
ui-service.yml
---
apiVersion: v1
kind: Service
metadata:
name: ui
labels:
app: reddit
component: ui
spec:
type: LoadBalancer
ports:
- port: 80 # The port that will be open on the load balancer
nodePort: 32092 # A port is opened on the node as well, but we don't need it and could even remove it
protocol: TCP
targetPort: 9292 # The pod's port
selector:
app: reddit
component: ui
Apply the changes: $ kubectl apply -f ui-service.yml -n dev
Check: $ kubectl get service -n dev --selector component=ui
output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ui LoadBalancer 10.11.245.7 35.222.133.XXX 80:32092/TCP 2d
Balancing with a Service of type LoadBalancer has a number of drawbacks:
- no control by http URI (L7 balancing)
- only cloud load balancers are available (AWS, GCP)
- no flexible rules for handling traffic
- Ingress
For more convenient management of incoming external traffic, and to address the shortcomings of LoadBalancer, another Kubernetes object can be used - Ingress.
Ingress is a set of rules inside a Kubernetes cluster that let incoming connections reach the services (Services).
By themselves, Ingresses are just rules. An Ingress Controller is needed to apply them.
- Ingress Controller
Unlike the other k8s controllers, it does not ship as part of the cluster.
An Ingress Controller is more of a plugin (and therefore a separate pod) that consists of 2 functional parts:
- An application that watches for new Ingress objects through the k8s API and updates the load balancer's configuration
- A load balancer (nginx, haproxy, traefik, ...) that actually manages the network traffic
The main tasks solved with Ingresses:
- Providing a single external entry point into the applications
- Load balancing the traffic
- SSL termination
- Name-based virtual hosting, etc.
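Name-based virtual hosting, for example, is expressed as a rules list keyed by host; a sketch with hypothetical hostnames:

```yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: virtual-hosts
spec:
  rules:
  - host: ui.example.com   # requests with this Host header
    http:
      paths:
      - backend:
          serviceName: ui  # are routed to the ui service
          servicePort: 9292
  - host: api.example.com
    http:
      paths:
      - backend:
          serviceName: post
          servicePort: 5000
```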
Since ours is a web application, it would make sense to use an L7 load balancer instead of a Service LoadBalancer.
In GKE, Google already lets you use their own load-balancing solutions as Ingress controllers.
Make sure the built-in Ingress is enabled:
Create an Ingress for the UI service:
ui-ingress.yml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ui
spec:
backend:
serviceName: ui
servicePort: 80
This is a Single Service Ingress, meaning the whole ingress controller will simply balance the load across nodes for a single service (very similar to a Service LoadBalancer).
$ kubectl apply -f ui-ingress.yml -n dev
Look into the cluster: kubectl get ingress -n dev
output
NAME HOSTS ADDRESS PORTS AGE
ui * 35.201.126.XXX 80 2m
Now the diagram looks like this:
The current scheme has several drawbacks:
- we have 2 load balancers for 1 service
- we cannot manage traffic at the HTTP level
One of the balancers can be removed. Update the UI service:
ui-service.yml
---
apiVersion: v1
kind: Service
metadata:
...
spec:
type: NodePort # change the type to NodePort
ports:
- port: 9292
protocol: TCP
targetPort: 9292
selector:
app: reddit
component: ui
$ kubectl apply -f β¦ -n dev
to apply the settings.
Now make the Ingress Controller work like a classic web server:
ui-ingress.yml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ui
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: ui
servicePort: 9292
$ kubectl apply -f ui-ingress.yml -n dev
to apply.
- Secret
Let's protect our service with TLS. Find our IP: $ kubectl get ingress -n dev
output
NAME HOSTS ADDRESS PORTS AGE
ui * 35.201.126.86 80 1d
Prepare a certificate, using the IP as the CN:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=35.201.126.86"
Load the certificate into the cluster:
$ kubectl create secret tls ui-ingress --key tls.key --cert tls.crt -n dev
Check that the certificate is present:
$ kubectl describe secret ui-ingress -n dev
output
Name: ui-ingress
Namespace: dev
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 989 bytes
tls.key: 1704 bytes
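The certificate can also be inspected locally before uploading it, to confirm the CN and expiry; a quick sketch reusing the same openssl invocation:

```shell
# Generate the pair (same command as above) and inspect the result
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=35.201.126.86"
# -subject should echo back the CN we passed; -enddate shows the expiry
openssl x509 -in tls.crt -noout -subject -enddate
```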
Configure the Ingress to accept only HTTPS traffic:
ui-ingress.yml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ui
annotations:
kubernetes.io/ingress.allow-http: "false" # disable plain http forwarding
spec:
tls:
- secretName: ui-ingress # attach the certificate
backend:
serviceName: ui
servicePort: 9292
$ kubectl apply -f ui-ingress.yml -n dev
to apply.
Go to the load balancer's page:
We can see it is still an http load balancer. Delete it manually and recreate the load balancer:
$ kubectl delete ingress ui -n dev
$ kubectl apply -f ui-ingress.yml -n dev
- Network Policy
Earlier we adopted the following layout of service networks:
In Kubernetes this cannot be done with separate networks, since all the pods can reach each other by default.
NetworkPolicy is a tool for declaratively describing traffic flows. Not all network plugins support network policies. In particular, in GKE this feature is still in beta, and enabling it separately switches on the Calico network plugin (instead of kubenet).
Let's try it out. Our task is to restrict the traffic arriving at mongodb from everywhere except the post and comment services.
Find the cluster name:
$ gcloud beta container clusters list
output
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
standard-cluster-1 us-central1-a 1.10.11-gke.1 35.202.73.52 g1-small 1.10.9-gke.5 * 2 RUNNING
Enable network-policy for GKE:
gcloud beta container clusters update standard-cluster-1 --zone=us-central1-a --update-addons=NetworkPolicy=ENABLED
gcloud beta container clusters update standard-cluster-1 --zone=us-central1-a --enable-network-policy
Create a network policy for mongo:
mongo-network-policy.yml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-db-traffic
labels:
app: reddit
spec:
podSelector: # select the objects of the policy (the mongodb pods)
matchLabels:
app: reddit
component: mongo
policyTypes: # the list of denied directions: deny all incoming connections
- Ingress # outgoing connections are allowed
ingress: # the list of allowed directions
- from: # (the whitelist)
- podSelector:
matchLabels:
app: reddit # allow all incoming connections from
component: comment # pods with the comment labels
Apply the policy: $ kubectl apply -f mongo-network-policy.yml -n dev
To give the post service access to the database, add:
mongo-network-policy.yml
...
- podSelector:
matchLabels:
app: reddit
component: post
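Assembled from the pieces above, the full whitelist in mongo-network-policy.yml then reads:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-db-traffic
  labels:
    app: reddit
spec:
  podSelector:             # the policy applies to the mongodb pods
    matchLabels:
      app: reddit
      component: mongo
  policyTypes:
  - Ingress                # deny all incoming traffic by default
  ingress:
  - from:                  # except from these pods:
    - podSelector:
        matchLabels:
          app: reddit
          component: comment
    - podSelector:
        matchLabels:
          app: reddit
          component: post
```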
- Storage for the databases
The main stateful service in our application is the MongoDB database. It currently runs as a Deployment and stores its data in regular Docker volumes. This has several problems:
- when the pod is deleted, the volume is deleted too
- losing the node running mongo risks losing the data
- starting the database on another node starts a new copy of the data
mongo-deployment.yml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mongo
...
spec:
containers:
- image: mongo:3.2
name: mongo
volumeMounts: # mount the Volume
- name: mongo-persistent-storage
mountPath: /data/db
volumes:
- name: mongo-persistent-storage # declare the Volume
emptyDir: {}
The emptyDir volume type is used at the moment. When a pod with this type is created, an empty docker volume is simply created. When the pod is removed, the contents of the emptyDir are deleted forever, although in the general case a pod crash does not delete the volume. Instead of storing data locally on the node, it makes sense to attach remote storage. In our case we can use a gcePersistentDisk volume, which puts the data into GCE storage.
Create a disk in Google Cloud:
$ gcloud compute disks create --size=25GB --zone=us-central1-a reddit-mongo-disk
Add the new volume to the database pod:
mongo-deployment.yml
---
apiVersion: apps/v1beta1
kind: Deployment
...
spec:
containers:
- image: mongo:3.2
name: mongo
volumeMounts:
- name: mongo-gce-pd-storage
mountPath: /data/db
volumes:
- name: mongo-gce-pd-storage
gcePersistentDisk:
pdName: reddit-mongo-disk # switch the Volume to the new type
fsType: ext4
Mount the dedicated disk to the mongo pod:
kubectl apply -f mongo-deployment.yml -n dev
Wait for the pod to be recreated (this can take up to 10 minutes).
Create a post in the application:
Recreate the mongo deployment:
$ kubectl delete deploy mongo -n dev
$ kubectl apply -f mongo-deployment.yml -n dev
The posts stay in place.
- PersistentVolume
The volume mechanism we are using can be made more convenient. Instead of a dedicated disk for every pod, we can use a storage resource shared by the whole cluster. Then, when a stateful workload starts in the cluster, it can request storage as a resource of the same kind as CPU or RAM. For this we will use the PersistentVolume mechanism.
PersistentVolume description:
mongo-volume.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: reddit-mongo-disk # The PersistentVolume's name
spec:
capacity:
storage: 25Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
gcePersistentDisk:
fsType: "ext4"
pdName: "reddit-mongo-disk" # The disk name in GCE
Add the PersistentVolume to the cluster: $ kubectl apply -f mongo-volume.yml -n dev
We created a PersistentVolume backed by a disk in GCP:
- PersistentVolumeClaim
We created a disk storage resource spanning the whole cluster, in the form of a PersistentVolume. To allocate part of such a resource to an application, we need to create an allocation request - a PersistentVolumeClaim. A claim is exactly that - a request, not the storage itself.
A claim can allocate space from a specific PersistentVolume (then the accessModes and StorageClass parameters must match, and there must be enough space), or simply create a separate PersistentVolume for this particular request.
Create a PersistentVolumeClaim (PVC) description:
mongo-claim.yml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mongo-pvc # The PersistentVolumeClaim's name
spec:
accessModes:
- ReadWriteOnce # the accessMode must match between the PVC and the PV
resources:
requests:
storage: 15Gi
Apply it: kubectl apply -f mongo-claim.yml -n dev
We allocated space in the PV by request for our database. A single PV can be used by only one claim at a time.
If the claim does not find a PV matching its parameters inside the cluster, or that PV is taken by another claim, it will itself create the PV it needs, using the default StorageClass.
$ kubectl describe storageclass standard -n dev
output
Name: standard
IsDefaultClass: Yes
Annotations: storageclass.beta.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/gce-pd
Parameters: type=pd-standard
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
In our case that is an ordinary, slow Google Cloud persistent disk:
Attach the PVC to our pods:
mongo-deployment.yml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mongo
...
spec:
containers:
- image: mongo:3.2
name: mongo
volumeMounts:
- name: mongo-gce-pd-storage
mountPath: /data/db
volumes:
- name: mongo-gce-pd-storage
persistentVolumeClaim:
claimName: mongo-pvc # The PersistentVolumeClaim's name
Apply: $ kubectl apply -f mongo-deployment.yml -n dev
Mount the storage allocated via the PVC to the mongo pod:
- Dynamic provisioning of volumes
By creating a PersistentVolume we separated the "storage" object from our Services and Pods. Now we can reuse it when necessary.
But it is much more interesting to create storage on demand and automatically. StorageClasses help with this. They describe where (with which provisioner) and what kind of storage is created.
Create a fast StorageClass, so that SSD disks are provisioned for our storage:
storage-fast.yml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: fast # The StorageClass's name
provisioner: kubernetes.io/gce-pd # The storage provisioner
parameters:
type: pd-ssd # The type of storage provided
Add the StorageClass to the cluster: $ kubectl apply -f storage-fast.yml -n dev
- PVC + StorageClass
Create a PersistentVolumeClaim description:
mongo-claim-dynamic.yml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mongo-pvc-dynamic
spec:
accessModes:
- ReadWriteOnce
storageClassName: fast # instead of referencing a pre-created disk,
resources: # we now reference the StorageClass
requests:
storage: 10Gi
Add the claim to the cluster:
$ kubectl apply -f mongo-claim-dynamic.yml -n dev
Attach the dynamic PVC:
mongo-deployment.yml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mongo
...
spec:
containers:
- image: mongo:3.2
name: mongo
volumeMounts:
- name: mongo-gce-pd-storage
mountPath: /data/db
volumes:
- name: mongo-gce-pd-storage
persistentVolumeClaim:
claimName: mongo-pvc-dynamic # update the PersistentVolumeClaim
Update our Deployment's description: $ kubectl apply -f mongo-deployment.yml -n dev
The list of resulting PersistentVolumes:
$ kubectl get persistentvolume -n dev
output
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
pvc-4aa55cd3-2256-1..a 15Gi RWO Delete Bound dev/mongo-pvc standard 23m
pvc-dcb0edd0-2258-1..a 10Gi RWO Delete Bound dev/mongo-pvc-dynamic fast 5m
reddit-mongo-disk 25Gi RWO Retain Available 28m
- Status - the PV's status with respect to Pods and Claims (Bound - bound, Available - available)
- Claim - which Claim this PV is bound to
- StorageClass - the StorageClass of this PV
- For local development you need:
- kubectl
- the ~/.kube directory
- minikube:
brew cask install minikube
or
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.27.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
On macOS you will also need a hypervisor: the xhyve driver, VirtualBox, or VMware Fusion.
- Start the minikube cluster:
minikube start
If a specific kubernetes version is needed, pass the flag
--kubernetes-version <version> (e.g. v1.8.0)
VirtualBox is used by default. If another hypervisor is used, the flag
--vm-driver=<hypervisor>
is required.
- The minikube cluster is up. The kubectl config has been set up automatically.
Check it: kubectl get nodes
output:
NAME STATUS ROLES AGE VERSION
minikube Ready master 25s v1.13.2
- The kubernetes config in yml format:
~/.kube/config
apiVersion: v1
clusters: ## list of clusters
- cluster:
    certificate-authority: ~/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts: ## list of contexts
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users: ## list of users
- name: minikube
  user:
    client-certificate: ~/.minikube/client.crt
    client-key: ~/.minikube/client.key
- The usual order of configuring kubectl:
- Create a cluster entry:
$ kubectl config set-cluster ... cluster_name
- Create user credentials:
$ kubectl config set-credentials ... user_name
- Create a context:
$ kubectl config set-context context_name \
--cluster=cluster_name \
--user=user_name
- Use the context:
$ kubectl config use-context context_name
This way kubectl can be configured to connect to different clusters as different users.
Current context: $ kubectl config current-context
output
minikube
List all contexts: $ kubectl config get-contexts
- The main object is the Deployment resource.
The main tasks of a Deployment:
- Creating a ReplicaSet (which makes sure the number of running Pods matches the declared number)
- Keeping a history of versions of the deployed Pods (for different deployment strategies and for rollbacks)
- Describing the deployment process (strategy, strategy parameters)
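The strategy from the last bullet lives in the Deployment manifest itself. A hedged sketch of a RollingUpdate configuration (the field values here are illustrative, not taken from the repo's manifests):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate      # replace Pods gradually instead of all at once
    rollingUpdate:
      maxUnavailable: 1      # at most 1 Pod below the desired count during a rollout
      maxSurge: 1            # at most 1 Pod above the desired count during a rollout
```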
- File kubernetes/reddit/ui-deployment.yml:
kubernetes/reddit/ui-deployment.yml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: ui
  labels:
    app: reddit
    component: ui
spec:
  replicas: 3
  selector: ## the selector describes how the controller tracks its Pods.
    matchLabels: ## Here the controller will treat Pods with the labels
      app: reddit ## app=reddit and component=ui as its own
      component: ui
  template:
    metadata:
      name: ui-pod
      labels: ## That is why it is important to set the right
        app: reddit ## labels in the Pod description
        component: ui
    spec:
      containers:
      - image: ozyab/ui
        name: ui
- Launch the ui component in minikube:
$ kubectl apply -f ui-deployment.yml
output
deployment "ui" created
Check the running deployments:
$ kubectl get deployment
output
NAME READY UP-TO-DATE AVAILABLE AGE
ui 3/3 3 3 2m27s
kubectl apply -f <filename>
accepts not only a single file but also a folder of them. For example:
$ kubectl apply -f ./kubernetes/reddit
- Using a selector, find the application's Pods:
$ kubectl get pods --selector component=ui
output
NAME READY STATUS RESTARTS AGE
ui-84994b4554-5m4cb 1/1 Running 0 3m20s
ui-84994b4554-7gnqf 1/1 Running 0 3m25s
ui-84994b4554-zhfr6 1/1 Running 0 4m48s
Forward a local port to a pod:
$ kubectl port-forward <pod-name> 8080:9292
output
Forwarding from 127.0.0.1:8080 -> 9292
Forwarding from [::1]:8080 -> 9292
After that, open http://127.0.0.1:8080
- File kubernetes/reddit/comment-deployment.yml:
kubernetes/reddit/comment-deployment.yml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: comment
  labels:
    app: reddit
    component: comment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reddit
      component: comment
  template:
    metadata:
      name: comment
      labels:
        app: reddit
        component: comment
    spec:
      containers:
      - image: ozyab/comment # only the image name changes
        name: comment
- Query the created pods:
$ kubectl get pods --selector component=comment
After forwarding a port to a pod and opening http://127.0.0.1:8080/healthcheck we will see:
- File kubernetes/reddit/post-deployment.yml:
kubernetes/reddit/post-deployment.yml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: post
  labels:
    app: reddit
    component: post
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reddit
      component: post
  template:
    metadata:
      name: post-pod
      labels:
        app: reddit
        component: post
    spec:
      containers:
      - image: ozyab/post
        name: post
Apply the deployment: kubectl apply -f post-deployment.yml
- File kubernetes/reddit/mongo-deployment.yml:
kubernetes/reddit/mongo-deployment.yml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    app: reddit
    component: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reddit
      component: mongo
  template:
    metadata:
      name: mongo
      labels:
        app: reddit
        component: mongo
    spec:
      containers:
      - image: mongo:3.2
        name: mongo
        volumeMounts: # mount point inside the container (not the Pod)
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes: # Volumes associated with the Pod
      - name: mongo-persistent-storage
        emptyDir: {}
- To connect the components to each other and to the outside world, the Service object is used - an abstraction that defines a set of Pods (Endpoints) and a way to access them.
To connect ui with post and comment, each of them needs a Service object.
File kubernetes/reddit/comment-service.yml:
kubernetes/reddit/comment-service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: comment # a DNS record for comment will appear
  labels:
    app: reddit
    component: comment
spec:
  ports: # A request to comment:9292 from inside any Pod of the current
  - port: 9292 # namespace will be forwarded to port 9292 of one of the
    protocol: TCP # Pods of the comment application,
    targetPort: 9292 # selected by labels
  selector:
    app: reddit
    component: comment
After applying comment-service.yml, find the matching Pods by label:
$ kubectl describe service comment | grep Endpoints
output
Endpoints: 172.17.0.4:9292,172.17.0.6:9292,172.17.0.9:9292
Run nslookup comment from a post container:
$ kubectl get pods --selector component=post
NAME READY STATUS RESTARTS AGE
post-5c45f6d5c8-5dpx7 1/1 Running 0 17m
post-5c45f6d5c8-cb8fv 1/1 Running 0 17m
post-5c45f6d5c8-k9s5h 1/1 Running 0 17m
$ kubectl exec -ti post-5c45f6d5c8-5dpx7 nslookup comment
nslookup: can't resolve '(null)': Name does not resolve
Name: comment
Address 1: 10.105.95.41 comment.default.svc.cluster.local
We can see the answer from DNS.
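The FQDN in the nslookup answer follows the standard cluster-DNS naming scheme; a small sketch of how such a name is composed (the values are taken from the example above):

```shell
# Kubernetes service DNS names follow the pattern
#   <service>.<namespace>.svc.<cluster-domain>
service=comment
namespace=default
fqdn="${service}.${namespace}.svc.cluster.local"
echo "$fqdn"   # comment.default.svc.cluster.local
```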
- Deploy a service for post in the same way:
kubernetes/reddit/post-service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: post
  labels:
    app: reddit
    component: post
spec:
  ports:
  - port: 9292
    protocol: TCP
    targetPort: 9292
  selector:
    app: reddit
    component: post
- Post and Comment also use mongodb, so it needs a Service object too:
kubernetes/reddit/mongodb-service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: reddit
    component: mongo
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    app: reddit
    component: mongo
Deploy:
kubectl apply -f mongodb-service.yml
- The comment application looks for the address comment_db, not mongodb; likewise, the post service looks for post_db.
These addresses are set in their Dockerfiles as environment variables:
post/Dockerfile
...
ENV POST_DATABASE_HOST=post_db
comment/Dockerfile
...
ENV COMMENT_DATABASE_HOST=comment_db
Create the service:
comment-mongodb-service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: comment-db
  labels:
    app: reddit
    component: mongo
    comment-db: "true" # a label to tell the services apart
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    app: reddit
    component: mongo
    comment-db: "true" # a dedicated label for comment-db
File mongo-deployment.yml:
kubernetes/reddit/mongo-deployment.yml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    ...
    comment-db: "true"
...
  template:
    metadata:
      name: mongo
      labels:
        ...
        comment-db: "true"
File comment-deployment.yml:
kubernetes/reddit/comment-deployment.yml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: comment
  labels:
    app: reddit
    component: comment
...
    spec:
      containers:
      - image: ozyab/comment
        name: comment
        env:
        - name: COMMENT_DATABASE_HOST
          value: comment-db
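By the same logic, the post component would get its own database Service and an environment override; a sketch (the post-db Service name here is an assumption mirroring comment-db, not a file quoted from the repo):

```yaml
spec:
  containers:
  - image: ozyab/post
    name: post
    env:
    - name: POST_DATABASE_HOST # the address baked into post/Dockerfile is post_db
      value: post-db           # hypothetical Service mirroring comment-db
```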
Also update mongo-deployment.yml so that the new Service can find the Pod it needs:
kubernetes/reddit/mongo-deployment.yml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: mongo
  labels:
    ...
    comment-db: "true"
  template:
    metadata:
      name: mongo
      labels:
        ...
        comment-db: "true"
- We need to provide access to the ui service from outside the cluster. For that we need a Service for the UI component, ui-service.yml:
kubernetes/reddit/ui-service.yml
...
spec:
  - nodePort: 32092 # a custom port from the 30000-32767 range may be set
  type: NodePort
In the service description:
- nodePort - for access from outside the cluster
- port - for access to the service from inside the cluster
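Assembled into a full manifest, the ui Service could look like the sketch below (the 9292 port is an assumption based on the app port used elsewhere; the repo's actual ui-service.yml may differ):

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    app: reddit
    component: ui
spec:
  type: NodePort       # expose the Service on every node's external IP
  ports:
  - port: 9292         # in-cluster port of the Service
    targetPort: 9292   # container port the traffic is forwarded to
    nodePort: 32092    # external port from the 30000-32767 range
    protocol: TCP
  selector:
    app: reddit
    component: ui
```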
The command minikube service ui
opens the service page in a browser.
List of all services with their URLs: minikube service list
A Namespace is a "virtual" Kubernetes cluster inside Kubernetes itself. Each such cluster has its own objects (Pods, Services, Deployments, etc.), except for objects shared by all namespaces (nodes, ClusterRoles, PersistentVolumes).
Different namespaces may contain objects with the same name, but within a single namespace object names must be unique.
At startup a Kubernetes cluster already has 3 namespaces:
- default - for objects with no other Namespace assigned (this is where we have been working all along)
- kube-system - for objects created and managed by Kubernetes itself
- kube-public - for objects that must be accessible from anywhere in the cluster
To pick a specific namespace, pass the flag
-n <namespace>
or --namespace <namespace>
when running kubectl
- Let's separate the application development environment from the rest of the cluster by creating our own Namespace dev:
dev-namespace.yml:
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
Create the dev namespace: $ kubectl apply -f dev-namespace.yml
- Add environment info inside the UI container:
kubernetes/reddit/ui-deployment.yml
---
apiVersion: apps/v1beta2
kind: Deployment
...
    spec:
      containers:
      ...
        env:
        - name: ENV # extract the value from the launch context
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
After that: $ kubectl apply -f ui-deployment.yml -n dev
Go to the Kubernetes Engine page: https://console.cloud.google.com/kubernetes/list?project=${PROJECT_NAME} and create a cluster.
The cluster control-plane components run in the container engine and are managed by Google:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
- etcd
The workload (our own Pods), addons, monitoring, logging and so on run on the worker nodes. The worker nodes are standard Google compute engine nodes; they can be seen in the list of running instances.
Connect to GKE to run our application. To do that, press Connect on the clusters page. The following command will be shown:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project ${PROJECT_NAME}
A user, cluster and context for connecting to the GKE cluster will be added to ~/.kube/config, and the current context will be set to this cluster.
Check the current context: $ kubectl config current-context
output:
gke_keen-${PROJECT_NAME}_us-central1-a_standard-cluster-1
Create the dev namespace: $ kubectl apply -f ./kubernetes/reddit/dev-namespace.yml
Deploy all applications into the dev namespace: $ kubectl apply -f ./kubernetes/reddit/ -n dev
Open the kubernetes NodePort range in the firewall:
gcloud compute firewall-rules create kubernetes-nodeports \
--direction=INGRESS \
--priority=1000 \
--network=default \
--action=ALLOW \
--rules=tcp:30000-32767 \
--source-ranges=0.0.0.0/0
- Find the external IP address of any node in the cluster:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP
gke-standard-cluster-1-default-pool-2dd96181-qt7q Ready <none> 11h v1.10.9-gke.5 XX.XXX.XX.XXX
gke-standard-cluster-1-default-pool-d3cd4782-7j9s Ready <none> 11h v1.10.9-gke.5 XX.XXX.XX.XXX
The ui service's published port: $ kubectl describe service ui -n dev | grep NodePort
Type: NodePort
NodePort: <unset> 32092/TCP
Open http://XX.XXX.XX.XXX:32092 on any of the external IP addresses.
- Created new Deployment manifests in the kubernetes/reddit folder:
comment-deployment.yml
mongo-deployment.yml
post-deployment.yml
ui-deployment.yml
This lab assumes you have access to the Google Cloud Platform and uses macOS.
- Install the Google Cloud SDK
Follow the Google Cloud SDK documentation to install and configure the gcloud command line utility.
Verify the Google Cloud SDK version is 218.0.0 or higher: gcloud version
- Default Compute Region and Zone
The easiest way to set a default compute region is gcloud init.
Otherwise, set a default compute region: gcloud config set compute/region us-west1
Set a default compute zone: gcloud config set compute/zone us-west1-c
- Install CFSSL
The cfssl and cfssljson command line utilities will be used to provision a PKI infrastructure and generate TLS certificates.
Install cfssl and cfssljson using the brew package manager: brew install cfssl
- Verify the installation:
cfssl version
- Install kubectl
The kubectl command line utility is used to interact with the Kubernetes API Server.
- Download and install kubectl from the official release binaries:
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
- Verify kubectl version 1.12.0 or higher is installed:
kubectl version --client
- Virtual Private Cloud Network Create the kubernetes-the-hard-way custom VPC network:
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
A subnet must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
Create the kubernetes subnet in the kubernetes-the-hard-way VPC network:
gcloud compute networks subnets create kubernetes \
--network kubernetes-the-hard-way \
--range 10.240.0.0/24
The 10.240.0.0/24 IP address range can host up to 254 compute instances.
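The claim that a /24 can host up to 254 instances comes from simple address arithmetic; a quick sanity check in shell:

```shell
# A /24 leaves 32-24 = 8 host bits; subtract the network and
# broadcast addresses to get the usable host count.
prefix=24
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "$hosts"   # 254
```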
- Firewall
Create a firewall rule that allows internal communication across all protocols:
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0
List the firewall rules in the kubernetes-the-hard-way VPC network:
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
output
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp False
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp False
- Kubernetes Public IP Address
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
gcloud compute addresses create kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
Verify the kubernetes-the-hard-way static IP address was created in your default compute region:
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
- Compute Instances
The compute instances in this lab will be provisioned using Ubuntu Server 18.04, which has good support for the containerd container runtime. Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
- Kubernetes Controllers
Create three compute instances which will host the Kubernetes control plane:
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1804-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,controller
done
- Kubernetes Workers Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The pod-cidr instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
The Kubernetes cluster CIDR range is defined by the Controller Manager's --cluster-cidr
flag. In this tutorial the cluster CIDR range will be set to 10.200.0.0/16, which supports 254 subnets.
Create three compute instances which will host the Kubernetes worker nodes:
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1804-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,worker
done
- Verification List the compute instances in your default compute zone:
gcloud compute instances list
output
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 europe-west4-a n1-standard-1 10.240.0.10 X.X.X.X RUNNING
controller-1 europe-west4-a n1-standard-1 10.240.0.11 X.X.X.X RUNNING
controller-2 europe-west4-a n1-standard-1 10.240.0.12 X.X.X.X RUNNING
worker-0 europe-west4-a n1-standard-1 10.240.0.20 X.X.X.X RUNNING
worker-1 europe-west4-a n1-standard-1 10.240.0.21 X.X.X.X RUNNING
worker-2 europe-west4-a n1-standard-1 10.240.0.22 X.X.X.X RUNNING
- Configuring SSH Access
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the connecting to instances documentation.
Test SSH access to the controller-0 compute instance:
gcloud compute ssh controller-0
If this is your first time connecting to a compute instance SSH keys will be generated for you.
- Certificate Authority
Generate the CA configuration file, certificate, and private key:
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
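The cfssl invocation above produces ca.pem and ca-key.pem. For readers without cfssl installed, an equivalent self-signed CA can be sketched with plain openssl (a substitute technique, not part of the original lab; the file names just follow the lab's convention):

```shell
# Generate a self-signed CA certificate and key with openssl instead of cfssl.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca-key.pem -out ca.pem -days 365 \
  -subj "/C=US/ST=Oregon/L=Portland/O=Kubernetes/OU=CA/CN=Kubernetes"
# Inspect the subject of the resulting certificate.
openssl x509 -in ca.pem -noout -subject
```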
- Client and Server Certificates
Generate the admin client certificate and private key:
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
- The Kubelet Client Certificates
Kubernetes uses a special-purpose authorization mode called Node Authorizer, that specifically authorizes API requests made by Kubelets.
Generate a certificate and private key for each Kubernetes worker node:
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
INTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done
- The Controller Manager Client Certificate
Generate the kube-controller-manager client certificate and private key:
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
- The Kube Proxy Client Certificate
Generate the kube-proxy client certificate and private key:
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
- The Scheduler Client Certificate
Generate the kube-scheduler client certificate and private key:
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
- The Kubernetes API Server Certificate
The kubernetes-the-hard-way static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Generate the Kubernetes API Server certificate and private key:
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
- The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.
Generate the service-account certificate and private key:
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
- Distribute the Client and Server Certificates
Copy the appropriate certificates and private keys to each worker instance:
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
Copy the appropriate certificates and private keys to each controller instance:
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/
done
In this lab you will generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
In this section you will generate kubeconfig files for the controller manager, kubelet, kube-proxy, and scheduler clients and the admin user.
- Kubernetes Public IP Address Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Retrieve the kubernetes-the-hard-way static IP address:
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
- The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes Node Authorizer.
Generate a kubeconfig file for each worker node:
for instance in worker-0 worker-1 worker-2; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${instance}.kubeconfig
kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
- The kube-proxy Kubernetes Configuration File
Generate a kubeconfig file for the kube-proxy service:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- The kube-controller-manager Kubernetes Configuration File
Generate a kubeconfig file for the kube-controller-manager service:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
- The kube-scheduler Kubernetes Configuration File
Generate a kubeconfig file for the kube-scheduler service:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
- The admin Kubernetes Configuration File
Generate a kubeconfig file for the admin user:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
- Distribute the Kubernetes Configuration Files Copy the appropriate kubelet and kube-proxy kubeconfig files to each worker instance:
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
Copy the appropriate kube-controller-manager and kube-scheduler kubeconfig files to each controller instance:
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.
In this lab you will generate an encryption key and an encryption config suitable for encrypting Kubernetes Secrets.
- The Encryption Key
Generate an encryption key:
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
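The aescbc provider used below requires the key to decode to exactly 32 bytes; the generated key can be sanity-checked like this:

```shell
# Generate a 32-byte random key, base64-encoded (same command as above).
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Decoding it back must yield exactly 32 bytes for the aescbc provider.
KEY_LEN=$(printf '%s' "$ENCRYPTION_KEY" | base64 -d | wc -c)
echo "$KEY_LEN"   # 32 (BSD wc pads the number with spaces)
```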
- The Encryption Config File Create the encryption-config.yaml encryption config file:
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
Copy the encryption-config.yaml encryption config file to each controller instance:
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done
Kubernetes components are stateless and store cluster state in etcd. In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
- Prerequisites
The commands in this lab must be run on each controller instance: controller-0, controller-1, and controller-2. Login to each controller instance using the gcloud command. Example: gcloud compute ssh controller-0
- Bootstrapping an etcd Cluster Member
Download and Install the etcd Binaries from the coreos/etcd GitHub project:
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
Extract and install the etcd server and the etcdctl command line utility:
tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
- Configure the etcd Server
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
ETCD_NAME=$(hostname -s)
Create the etcd.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
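Note that the heredoc delimiter (EOF) above is unquoted, so the shell substitutes ${ETCD_NAME} and ${INTERNAL_IP} and collapses each \\ to a single \ before the unit file is written. A minimal demonstration of both effects:

```shell
# Unquoted heredoc: variables are expanded and \\ becomes \ in the output.
ETCD_NAME="controller-0"   # example value; on a node this is $(hostname -s)
cat <<EOF
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME}
EOF
```

This is why the written unit file ends up with single backslashes, which systemd does not interpret but the shell used for ExecStart parsing tolerates as written in the upstream tutorial.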
- Start the etcd Server
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
- Verification
List the etcd cluster members:
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
output:
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
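The cluster is healthy when all three members report started. As a sketch, the listing can be checked mechanically (the sample text below is the output shown above; in practice you would pipe the live etcdctl output instead):

```shell
# Count etcd members whose status field reads "started".
MEMBERS='3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379'
STARTED=$(printf '%s\n' "${MEMBERS}" | awk -F', ' '$2 == "started" { n++ } END { print n }')
echo "${STARTED} of 3 members started"
```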
In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
The commands in this lab must be run on each controller instance: controller-0, controller-1, and controller-2. Login to each controller instance using the gcloud command. Example: gcloud compute ssh controller-0
- Provision the Kubernetes Control Plane
Create the Kubernetes configuration directory:
sudo mkdir -p /etc/kubernetes/config
- Download and Install the Kubernetes Controller Binaries
Download the official Kubernetes release binaries:
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
- Install the Kubernetes binaries:
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
- Configure the Kubernetes API Server
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
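Outside GCE the metadata server is unreachable and the substitution silently yields an empty string, which would produce a broken unit file. A hedged sketch of a guard (is_ipv4 is an illustrative helper, not part of the lab):

```shell
# Fail fast if the metadata lookup did not return a plausible IPv4 address.
is_ipv4() {
  printf '%s' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}
INTERNAL_IP="10.240.0.10"   # example; on GCE this comes from the metadata curl above
if is_ipv4 "${INTERNAL_IP}"; then
  echo "ok: ${INTERNAL_IP}"
else
  echo "could not determine internal IP" >&2
  exit 1
fi
```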
Create the kube-apiserver.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
- Configure the Kubernetes Controller Manager
Move the kube-controller-manager kubeconfig into place:
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
Create the kube-controller-manager.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
- Configure the Kubernetes Scheduler
Move the kube-scheduler kubeconfig into place:
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
Create the kube-scheduler.yaml configuration file:
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
leaderElect: true
EOF
Create the kube-scheduler.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
- Start the Controller Services
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
- Enable HTTP Health Checks
A Google Network Load Balancer will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround, the nginx web server can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port 80 and proxy the connections to the API server on https://127.0.0.1:6443/healthz.
Install a basic web server to handle HTTP health checks:
sudo apt-get install -y nginx
cat > kubernetes.default.svc.cluster.local <<EOF
server {
listen 80;
server_name kubernetes.default.svc.cluster.local;
location /healthz {
proxy_pass https://127.0.0.1:6443/healthz;
proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
}
}
EOF
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
sudo systemctl restart nginx
sudo systemctl enable nginx
- Verification
kubectl get componentstatuses --kubeconfig admin.kubeconfig
output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
Test the nginx HTTP health check proxy:
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
output:
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 20 Jan 2019 19:54:16 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
- RBAC for Kubelet Authorization
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
EOF
The Kubernetes API Server authenticates to the Kubelet as the kubernetes user using the client certificate as defined by the --kubelet-client-certificate flag.
Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
EOF
- The Kubernetes Frontend Load Balancer
In this section you will provision an external load balancer to front the Kubernetes API Servers. The kubernetes-the-hard-way static IP address will be attached to the resulting load balancer.
Provision a Network Load Balancer. Create the external load balancer network resources:
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
- Verification
Make an HTTP request for the Kubernetes version info:
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
output:
{
"major": "1",
"minor": "12",
"gitVersion": "v1.12.0",
"gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
"gitTreeState": "clean",
"buildDate": "2018-09-27T16:55:41Z",
"goVersion": "go1.10.4",
"compiler": "gc",
"platform": "linux/amd64"
}
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, gVisor, container networking plugins, containerd, kubelet, and kube-proxy.
- Provisioning a Kubernetes Worker Node
Install the OS dependencies:
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
- Download and Install Worker Binaries
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
Create the installation directories:
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
Install the worker binaries:
sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
sudo mv runc.amd64 runc
chmod +x kubectl kube-proxy kubelet runc runsc
sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
- Configure CNI Networking
Retrieve the Pod CIDR range for the current compute instance:
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
Create the bridge network configuration file:
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.3.1",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "${POD_CIDR}"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
EOF
Create the loopback network configuration file:
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.3.1",
"type": "loopback"
}
EOF
Create the containerd configuration file:
sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
[plugins.cri.containerd.untrusted_workload_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runsc"
runtime_root = "/run/containerd/runsc"
[plugins.cri.containerd.gvisor]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runsc"
runtime_root = "/run/containerd/runsc"
EOF
Create the containerd.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
- Configure the Kubelet
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
Create the kubelet-config.yaml configuration file:
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
Create the kubelet.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
- Configure the Kubernetes Proxy
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
Create the kube-proxy-config.yaml configuration file:
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
Create the kube-proxy.service systemd unit file:
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
- Start the Worker Services
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
- Verification
List the registered Kubernetes nodes:
gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
output
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 10m v1.12.0
worker-1 Ready <none> 11m v1.12.0
worker-2 Ready <none> 10m v1.12.0
In this lab you will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Generate a kubeconfig file suitable for authenticating as the admin user:
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
- Verification
Check the health of the remote Kubernetes cluster:
kubectl get componentstatuses
output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
List the nodes in the remote Kubernetes cluster:
kubectl get nodes
output:
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 14m v1.12.0
worker-1 Ready <none> 14m v1.12.0
worker-2 Ready <none> 14m v1.12.0
Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network routes.
In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
- The Routing Table
Print the internal IP address and Pod CIDR range for each worker instance:
for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
output:
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
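The listing follows the guide's convention: worker N has internal IP 10.240.0.2N and Pod CIDR 10.200.N.0/24. With /24 ranges, membership of a pod IP can be checked by comparing the first three octets; a small sketch (in_cidr24 is an illustrative helper):

```shell
# For /24 Pod CIDRs, an IP belongs to a node's range when the first
# three octets match. Valid only for the /24 masks used in this guide.
in_cidr24() {
  [ "${1%.*}" = "${2%.0/24}" ]
}
in_cidr24 "10.200.1.3" "10.200.1.0/24" && echo "10.200.1.3 belongs to 10.200.1.0/24"
```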
- Routes
Create network routes for each worker instance:
for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.2${i} \
--destination-range 10.200.${i}.0/24
done
List the routes in the kubernetes-the-hard-way VPC network:
gcloud compute routes list --filter "network: kubernetes-the-hard-way"
output
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-4efe3fc4aab42a71 kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
default-route-b8c3b87a29570c17 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
In this lab you will deploy the DNS add-on which provides DNS based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.
- The DNS Cluster Add-on
Deploy the coredns cluster add-on:
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
output
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
List the pods created by the kube-dns deployment:
kubectl get pods -l k8s-app=kube-dns -n kube-system
output
NAME READY STATUS RESTARTS AGE
coredns-699f8ddd77-cpr8j 1/1 Running 0 71s
coredns-699f8ddd77-zcldn 1/1 Running 0 71s
- Verification
Create a busybox deployment:
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
List the pod created by the busybox deployment:
kubectl get pods -l run=busybox
output:
NAME READY STATUS RESTARTS AGE
busybox-bd8fb7cbd-9bnbk 1/1 Running 0 76s
Retrieve the full name of the busybox pod:
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
Execute a DNS lookup for the kubernetes service inside the busybox pod:
kubectl exec -ti $POD_NAME -- nslookup kubernetes
output
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
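For scripting, the resolved ClusterIP can be pulled out of the nslookup output; a sketch using the sample output above (a live check would pipe the kubectl exec output instead):

```shell
# Extract the service IP from busybox nslookup output.
NSLOOKUP='Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local'
SVC_IP=$(printf '%s\n' "${NSLOOKUP}" | awk '/kubernetes\.default/ { print $3 }')
echo "kubernetes resolves to ${SVC_IP}"
```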
In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
- Data Encryption
In this section you will verify the ability to encrypt secret data at rest.
Create a generic secret:
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:
gcloud compute ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
output
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a d7 29 23 06 e7 16 c4 |:v1:key1:.)#....|
00000050 22 bc 75 c2 a3 21 f3 33 fc 4a c4 7e a5 70 83 30 |".u..!.3.J.~.p.0|
00000060 48 13 fe 22 9a 73 0e fc 8c f3 06 01 eb 46 24 15 |H..".s.......F$.|
00000070 59 c5 02 37 8e eb 26 d9 2f 54 1c cd 21 a4 1f 49 |Y..7..&./T..!..I|
00000080 1a cc 9a a6 27 e2 6c 0c ce 96 da 85 36 21 2a 83 |....'.l.....6!*.|
00000090 cb b3 62 1c d8 c5 18 b0 15 95 48 cf 2c 2f 41 d5 |..b.......H.,/A.|
000000a0 d9 33 10 65 93 4f e3 55 99 3a a2 64 47 83 24 00 |.3.e.O.U.:.dG.$.|
000000b0 96 8b 07 6b 94 f5 62 05 f5 10 12 3f ae 11 97 ca |...k..b....?....|
000000c0 9e f1 e5 54 c3 43 28 fd 36 15 9b 41 c9 19 08 65 |...T.C(.6..A...e|
000000d0 18 27 16 11 44 b6 24 fc 3f 39 2f 9b 36 3d d1 9e |.'..D.$.?9/.6=..|
000000e0 c8 da a5 e4 2d 8a 28 bf 2b 0a |....-.(.+.|
The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.
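That prefix inspection can be scripted; a sketch with a literal stand-in for the bytes read from etcd (classify is an illustrative helper):

```shell
# Classify an etcd value by its encryption prefix.
classify() {
  case "$1" in
    k8s:enc:aescbc:v1:key1:*) echo "encrypted with key1" ;;
    k8s:enc:*)                echo "encrypted with another key or provider" ;;
    *)                        echo "WARNING: stored in plain text" ;;
  esac
}
classify 'k8s:enc:aescbc:v1:key1:ciphertext...'
```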
- Deployments
In this section you will verify the ability to create and manage Deployments.
Create a deployment for the nginx web server:
kubectl run nginx --image=nginx
List the pod created by the nginx deployment:
kubectl get pods -l run=nginx
output
NAME READY STATUS RESTARTS AGE
nginx-dbddb74b8-9d2ch 1/1 Running 0 22s
- Port Forwarding
In this section you will verify the ability to access applications remotely using port forwarding.
Retrieve the full name of the nginx pod:
POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
Forward port 8080 on your local machine to port 80 of the nginx pod:
kubectl port-forward $POD_NAME 8080:80
output
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
In a new terminal make an HTTP request using the forwarding address:
curl --head http://127.0.0.1:8080
output
HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Sun, 20 Jan 2019 21:48:12 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Dec 2018 09:56:47 GMT
Connection: keep-alive
ETag: "5c21fedf-264"
Accept-Ranges: bytes
- Logs
In this section you will verify the ability to retrieve container logs.
Print the nginx pod logs:
kubectl logs $POD_NAME
output
127.0.0.1 - - [20/Jan/2019:21:48:12 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
- Exec
In this section you will verify the ability to execute commands in a container.
Print the nginx version by executing the nginx -v command in the nginx container:
kubectl exec -ti $POD_NAME -- nginx -v
output
nginx version: nginx/1.15.8
- Services
In this section you will verify the ability to expose applications using a Service.
Expose the nginx deployment using a NodePort service:
kubectl expose deployment nginx --port 80 --type NodePort
Retrieve the node port assigned to the nginx service:
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{.spec.ports[0].nodePort}')
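NodePorts are allocated from the API server's --service-node-port-range (30000-32767 in this guide); a quick bounds check before opening the firewall can catch a bad query (the example value is illustrative):

```shell
# Verify the allocated port falls inside --service-node-port-range
# before creating a firewall rule for it.
NODE_PORT=31234   # example; on the cluster this comes from the kubectl query above
if [ "${NODE_PORT}" -ge 30000 ] && [ "${NODE_PORT}" -le 32767 ]; then
  echo "NodePort ${NODE_PORT} is in range"
else
  echo "unexpected NodePort: ${NODE_PORT}" >&2
fi
```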
Create a firewall rule that allows remote access to the nginx node port:
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
Retrieve the external IP address of a worker instance:
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
Make an HTTP request using the external IP address and the nginx node port:
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
output
HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Sun, 20 Jan 2019 21:56:46 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Dec 2018 09:56:47 GMT
Connection: keep-alive
ETag: "5c21fedf-264"
Accept-Ranges: bytes
- Untrusted Workloads
In this section you will verify the ability to run untrusted workloads using gVisor.
Create the untrusted pod:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: untrusted
annotations:
io.kubernetes.cri.untrusted-workload: "true"
spec:
containers:
- name: webserver
image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
- Verification
In this section you will verify the untrusted pod is running under gVisor (runsc) by inspecting the assigned worker node.
Verify the untrusted pod is running:
kubectl get pods -o wide
output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
busybox-bd8fb7cbd-9bnbk 1/1 Running 0 27m 10.200.0.2 worker-0 <none>
nginx-dbddb74b8-9d2ch 1/1 Running 0 15m 10.200.0.3 worker-0 <none>
untrusted 1/1 Running 0 67s 10.200.1.3 worker-1 <none>
Get the node name where the untrusted pod is running:
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
SSH into the worker node:
gcloud compute ssh ${INSTANCE_NAME}
List the containers running under gVisor:
sudo runsc --root /run/containerd/runsc/k8s.io list
output
I0120 22:01:35.289695 18966 x:0] ***************************
I0120 22:01:35.289875 18966 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0120 22:01:35.289950 18966 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0120 22:01:35.290018 18966 x:0] PID: 18966
I0120 22:01:35.290088 18966 x:0] UID: 0, GID: 0
I0120 22:01:35.290158 18966 x:0] Configuration:
I0120 22:01:35.290212 18966 x:0] RootDir: /run/containerd/runsc/k8s.io
I0120 22:01:35.290475 18966 x:0] Platform: ptrace
I0120 22:01:35.290627 18966 x:0] FileAccess: exclusive, overlay: false
I0120 22:01:35.290754 18966 x:0] Network: sandbox, logging: false
I0120 22:01:35.290877 18966 x:0] Strace: false, max size: 1024, syscalls: []
I0120 22:01:35.291000 18966 x:0] ***************************
ID PID STATUS BUNDLE CREATED OWNER
353e797b41e3e5bdd183605258b66a153a61a3f7ff0eb8b0e0e7d8b6e4b3bc5c 18521 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/353e797b41e3e5bdd183605258b66a153a61a3f7ff0eb8b0e0e7d8b6e4b3bc5c 0001-01-01T00:00:00Z
5b1e7bcf2a5cb033888650e49c3978cf429b99d97c4be5c7f5ad14e45b3015a9 18441 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/5b1e7bcf2a5cb033888650e49c3978cf429b99d97c4be5c7f5ad14e45b3015a9 0001-01-01T00:00:00Z
I0120 22:01:35.294484 18966 x:0] Exiting with status: 0
Get the ID of the untrusted pod:
POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
pods --name untrusted -q)
Get the ID of the webserver container running in the untrusted pod:
CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
ps -p ${POD_ID} -q)
Use the gVisor runsc command to display the processes running inside the webserver container:
sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
output
I0120 22:04:59.988268 19220 x:0] ***************************
I0120 22:04:59.988443 19220 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps 353e797b41e3e5bdd183605258b66a153a61a3f7ff0eb8b0e0e7d8b6e4b3bc5c]
I0120 22:04:59.988521 19220 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0120 22:04:59.988604 19220 x:0] PID: 19220
I0120 22:04:59.988673 19220 x:0] UID: 0, GID: 0
I0120 22:04:59.988736 19220 x:0] Configuration:
I0120 22:04:59.988789 19220 x:0] RootDir: /run/containerd/runsc/k8s.io
I0120 22:04:59.988910 19220 x:0] Platform: ptrace
I0120 22:04:59.989037 19220 x:0] FileAccess: exclusive, overlay: false
I0120 22:04:59.989160 19220 x:0] Network: sandbox, logging: false
I0120 22:04:59.989299 19220 x:0] Strace: false, max size: 1024, syscalls: []
I0120 22:04:59.989431 19220 x:0] ***************************
UID PID PPID C STIME TIME CMD
0 1 0 0 21:58 10ms app
I0120 22:04:59.990890 19220 x:0] Exiting with status: 0
In this lab you will delete the compute resources created during this tutorial.
- Compute Instances
Delete the controller and worker compute instances:
gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2
- Networking
Delete the external load balancer network resources:
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
gcloud -q compute target-pools delete kubernetes-target-pool
gcloud -q compute http-health-checks delete kubernetes
gcloud -q compute addresses delete kubernetes-the-hard-way
Delete the kubernetes-the-hard-way firewall rules:
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-check
Delete the kubernetes-the-hard-way VPC network:
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks delete kubernetes-the-hard-way
- Create a docker-machine:
docker-machine create --driver google \
--google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
--google-machine-type n1-standard-1 \
--google-open-port 5601/tcp \
--google-open-port 9292/tcp \
--google-open-port 9411/tcp \
logging
- Switch to the created docker-machine:
eval $(docker-machine env logging)
- Find out its IP address:
docker-machine ip logging
- New version of the reddit application
- Build the images:
for i in ui post-py comment; do cd src/$i; bash docker_build.sh; cd -; done
or:
/src/ui $ bash docker_build.sh && docker push $USER_NAME/ui
/src/post-py $ bash docker_build.sh && docker push $USER_NAME/post
/src/comment $ bash docker_build.sh && docker push $USER_NAME/comment
- A separate compose file for the logging stack:
docker/docker-compose-logging.yml
version: '3.5'
services:
fluentd:
image: ${USERNAME}/fluentd
ports:
- "24224:24224"
- "24224:24224/udp"
elasticsearch:
image: elasticsearch
expose:
- 9200
ports:
- "9200:9200"
kibana:
image: kibana
ports:
- "5601:5601"
- Fluentd is a tool for shipping, aggregating, and transforming log messages:
logging/fluentd/Dockerfile
FROM fluent/fluentd:v0.12
RUN gem install fluent-plugin-elasticsearch --no-rdoc --no-ri --version 1.9.5
RUN gem install fluent-plugin-grok-parser --no-rdoc --no-ri --version 1.0.0
ADD fluent.conf /fluentd/etc
- The fluentd configuration file:
logging/fluentd/fluent.conf
<source>
@type forward # the in_forward plugin accepts incoming logs
port 24224
bind 0.0.0.0
</source>
<match *.**>
@type copy # the copy plugin
<store>
@type elasticsearch # forward all incoming logs to elasticsearch
host elasticsearch
port 9200
logstash_format true
logstash_prefix fluentd
logstash_dateformat %Y%m%d
include_tag_key true
type_name access_log
tag_key @log_name
flush_interval 1s
</store>
<store>
@type stdout # and also to stdout
</store>
</match>
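With logstash_format enabled, records go to date-stamped indices named <logstash_prefix>-<date>; the fluentd-* pattern created later in Kibana matches them. A minimal sketch of how today's index name is formed:

```shell
# Sketch: with logstash_prefix "fluentd" and logstash_dateformat %Y%m%d,
# today's elasticsearch index name looks like this
echo "fluentd-$(date +%Y%m%d)"
```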
- Build the fluentd image:
docker build -t $USER_NAME/fluentd .
- View the post service logs:
docker-compose logs -f post
- Logging driver for the post service inside the compose file:
docker/docker-compose.yml
version: '3.5'
services:
post:
image: ${USER_NAME}/post
environment:
- POST_DATABASE_HOST=post_db
- POST_DATABASE=posts
depends_on:
- post_db
ports:
- "5000:5000"
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: service.post
- Start the centralized logging infrastructure and restart the application services:
docker-compose -f docker-compose-logging.yml up -d
docker-compose down
docker-compose up -d
- Kibana will be available at http://logging-ip:5601. Create the index pattern fluentd-*
- The log field of an elasticsearch document contains a JSON object. That information needs to be split out into separate fields so they can be searched. This is done with filters that extract the relevant data
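The extraction that filter performs can be sketched outside fluentd. The sample document and field names below are hypothetical, and python3 is assumed to be available:

```shell
# A hypothetical elasticsearch document whose "log" field holds a JSON string
doc='{"log": "{\"event\": \"request\", \"path\": \"/new\"}"}'
# Pull the inner JSON out of the log field and print its fields
echo "$doc" | python3 -c '
import json, sys
outer = json.load(sys.stdin)          # the elasticsearch document
inner = json.loads(outer["log"])      # the JSON packed into the log field
print(inner["event"], inner["path"])
'
# prints: request /new
```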
- Add a filter to the fluentd config that parses the JSON logs coming from the post service:
logging/fluentd/fluent.conf
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<filter service.post>
@type parser
format json
key_name log
</filter>
<match *.**>
@type copy
...
- Rebuild the image and restart the fluentd service:
logging/fluentd $ docker build -t $USER_NAME/fluentd .
docker/ $ docker-compose -f docker-compose-logging.yml up -d fluentd
- As with the post service, define the fluentd logging driver for the ui service in the compose file:
docker/docker-compose.yml
...
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: service.ui
...
- Restart the ui service:
docker-compose stop ui
docker-compose rm ui
docker-compose up -d
- When an application or service does not write structured logs, regular expressions are used to parse them. Extracting fields from the UI service log:
logging/fluentd/fluent.conf
<filter service.ui>
@type parser
format /\[(?<time>[^\]]*)\] (?<level>\S+) (?<user>\S+)[\W]*service=(?<service>\S+)[\W]*event=(?<event>\S+)[\W]*(?:path=(?<path>\S+)[\W]*)?request_id=(?<request_id>\S+)[\W]*(?:remote_addr=(?<remote_addr>\S+)[\W]*)?(?:method= (?<method>\S+)[\W]*)?(?:response_status=(?<response_status>\S+)[\W]*)?(?:message='(?<message>[^\']*)[\W]*)?/
key_name log
</filter>
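This kind of extraction can be tried locally. The log line below is a made-up sample in the format the filter targets, and sed stands in for fluentd's regexp engine:

```shell
# Hypothetical UI log line
log="[2018-11-30 22:01:10] INFO user service=ui | event=request | request_id=abc123"
# Extract single fields with sed, a simplified stand-in for the fluentd regexp
echo "$log" | sed -E 's/.*event=([a-zA-Z_]+).*/\1/'         # prints: request
echo "$log" | sed -E 's/.*request_id=([a-zA-Z0-9]+).*/\1/'  # prints: abc123
```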
- To make parsing easier, grok templates can be used instead of plain regular expressions. Grok patterns are named regular expression templates. A ready-made regexp can be referenced like a function:
docker/fluentd/fluent.conf
...
<filter service.ui>
@type parser
format grok
grok_pattern %{RUBY_LOGGER}
key_name log
</filter>
...
- Some logs still need further parsing. Several grok filters can be applied in sequence:
docker/fluentd/fluent.conf
<filter service.ui>
@type parser
format grok
grok_pattern service=%{WORD:service} \| event=%{WORD:event} \| request_id=%{GREEDYDATA:request_id} \| message='%{GREEDYDATA:message}'
key_name message
reserve_data true
</filter>
<filter service.ui>
@type parser
format grok
grok_pattern service=%{WORD:service} \| event=%{WORD:event} \| path=%{GREEDYDATA:path} \| request_id=%{GREEDYDATA:request_id} \| remote_addr=%{IP:remote_addr} \| method= %{WORD:method} \| response_status=%{WORD:response_status}
key_name message
reserve_data true
</filter>
- docker-compose-monitoring.yml - a separate compose file for application monitoring. To start it, use:
docker-compose -f docker-compose-monitoring.yml up -d
- cAdvisor monitors the state of Docker containers (CPU usage, memory, network traffic volume). The service is placed in the same network as Prometheus so that Prometheus can scrape metrics from cAdvisor
- Information about the new service is added to Prometheus:
- job_name: 'cadvisor'
static_configs:
- targets:
- 'cadvisor:8080'
After these changes the image must be rebuilt:
cd monitoring/prometheus
docker build -t $USER_NAME/prometheus .
- Start the services:
docker-compose up -d
docker-compose -f docker-compose-monitoring.yml up -d
- cAdvisor information will be available at http://docker-machine-host-ip:8080
- The data is also collected by Prometheus
- Grafana should be used to visualize the data:
services:
...
grafana:
image: grafana/grafana:5.0.0
volumes:
- grafana_data:/var/lib/grafana
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=secret
depends_on:
- prometheus
ports:
- 3000:3000
volumes:
grafana_data:
- Start it:
docker-compose -f docker-compose-monitoring.yml up -d grafana
- Grafana is available at: http://docker-machine-host-ip:3000
- Configure the data source in Grafana:
Type: Prometheus
URL: http://prometheus:9090
Access: proxy
- To collect information about the post service, add an entry to prometheus.yml so that Prometheus starts scraping its metrics as well:
scrape_configs:
...
- job_name: 'post'
static_configs:
- targets:
- 'post:5000'
- Rebuild the image:
export USER_NAME=username
docker build -t $USER_NAME/prometheus .
- Recreate the Docker monitoring infrastructure:
docker-compose -f docker-compose-monitoring.yml down
docker-compose -f docker-compose-monitoring.yml up -d
- The downloaded dashboard files are located in the monitoring/grafana/dashboards/ directory
- Alertmanager is an additional component of the Prometheus monitoring system
- Build the alertmanager image from monitoring/alertmanager/Dockerfile:
FROM prom/alertmanager:v0.14.0
ADD config.yml /etc/alertmanager/
- Contents of config.yml:
global:
slack_api_url: 'https://hooks.slack.com/services/$token/$token/$token'
route:
receiver: 'slack-notifications'
receivers:
- name: 'slack-notifications'
slack_configs:
- channel: '#userchannel'
- The alerting service in docker-compose-monitoring.yml:
services:
...
alertmanager:
image: ${USER_NAME}/alertmanager
command:
- '--config.file=/etc/alertmanager/config.yml'
ports:
- 9093:9093
- The conditions under which an alert fires and is sent to Alertmanager live in monitoring/prometheus/alerts.yml
- A simple alert that fires when one of the monitored systems (endpoints) is unavailable for scraping (in that case the up metric with the instance label equal to that endpoint's name will be zero):
groups:
- name: alert.rules
rules:
- alert: InstanceDown
expr: up == 0
for: 1m
labels:
severity: page
annotations:
description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute'
summary: 'Instance {{ $labels.instance }} down'
- The alerts.yml file must also be copied into the prometheus image:
ADD alerts.yml /etc/prometheus/
- Rebuild the prometheus image:
docker build -t $USER_NAME/prometheus .
- Recreate the Docker monitoring infrastructure:
docker-compose down
docker-compose -f docker-compose-monitoring.yml down
docker-compose up -d
docker-compose -f docker-compose-monitoring.yml up -d
- Push all images to DockerHub:
docker login
docker push $USER_NAME/ui
docker push $USER_NAME/comment
docker push $USER_NAME/post
docker push $USER_NAME/prometheus
docker push $USER_NAME/alertmanager
Link to the built images on DockerHub
- Firewall rules for Prometheus and Puma:
gcloud compute firewall-rules create prometheus-default --allow tcp:9090
gcloud compute firewall-rules create puma-default --allow tcp:9292
- Create the docker-host:
export GOOGLE_PROJECT=_project_id_
docker-machine create --driver google \
--google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
--google-machine-type n1-standard-1 \
--google-zone europe-west1-b \
docker-host
- Switch to docker-host:
eval $(docker-machine env docker-host)
- Run Prometheus from the ready-made DockerHub image:
docker run --rm -p 9090:9090 -d --name prometheus prom/prometheus
Prometheus will be running at http://docker-host-ip:9090/
The docker-host-ip can be found with: docker-machine ip docker-host
- Stop the container:
docker stop prometheus
- File monitoring/prometheus/Dockerfile:
FROM prom/prometheus:v2.1.0
ADD prometheus.yml /etc/prometheus/
File monitoring/prometheus/prometheus.yml:
---
global:
scrape_interval: '5s' # scrape frequency
scrape_configs:
- job_name: 'prometheus' # jobs
static_configs:
- targets:
- 'localhost:9090' # addresses to scrape
- job_name: 'ui'
static_configs:
- targets:
- 'ui:9292'
- job_name: 'comment'
static_configs:
- targets:
- 'comment:9292'
- Build the docker image:
export USER_NAME=username
docker build -t $USER_NAME/prometheus .
- Build the images with the docker_build.sh scripts in each service's directory:
cd src/ui && ./docker_build.sh
cd src/post-py && ./docker_build.sh
cd src/comment && ./docker_build.sh
- Add the Prometheus service to docker/docker-compose.yml:
prometheus:
image: ${USERNAME}/prometheus
ports:
- '9090:9090'
volumes:
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention=1d'
networks:
- back_net
- front_net
- Start the microservices:
docker-compose up -d
- Node exporter for the docker-host:
services:
...
node-exporter:
image: prom/node-exporter:v0.15.2
user: root
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.ignored-mount-points="^/(sys|proc|dev|host|etc)($$|/)"'
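The ignored-mount-points value is a regular expression (the doubled `$$` is how compose escapes a literal `$`). A sketch of which mount points it would skip, with arbitrary sample paths:

```shell
# The same regex node-exporter is given above, unescaped for the shell
re='^/(sys|proc|dev|host|etc)($|/)'
for mp in /proc /sys/fs/cgroup /etc/hosts /data /home; do
  if echo "$mp" | grep -Eq "$re"; then
    echo "$mp: ignored"
  else
    echo "$mp: collected"
  fi
done
```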
A job for Prometheus (prometheus.yml):
- job_name: 'node'
static_configs:
- targets:
- 'node-exporter:9100'
- Rebuild the images:
cd /monitoring/prometheus && docker build -t $USER_NAME/prometheus .
Recreate the services:
docker-compose down
docker-compose up -d
- Push the images to DockerHub:
docker login
docker push $USER_NAME/ui:1.0
docker push $USER_NAME/comment:1.0
docker push $USER_NAME/post:1.0
docker push $USER_NAME/prometheus
Link to the built images on DockerHub
- Create a new project example2
- Add the project to username_microservices:
git checkout -b gitlab-ci-2
git remote add gitlab2 http://vm-ip/homework/example2.git
git push gitlab2 gitlab-ci-2
- Dev environment: change the pipeline so that the deploy job defines a dev environment, to which every change in the project code is notionally rolled out:
- Rename the deploy stage to review
- Replace deploy_job with deploy_dev_job
- Add an environment
name: dev
url: http://dev.example.com
The dev environment will appear under Operations - Environments
- Two new stages: stage and production. Stage contains a job imitating a rollout to the staging environment, production to the production environment. The jobs are triggered manually, with a button
- The only directive describes the list of conditions that must be true for the job to run. The regular expression /^\d+\.\d+\.\d+/ means a semver tag must be set in git, e.g. 2.4.10
- Tag the current commit:
git tag 2.4.10
- Push with tags:
git push gitlab2 gitlab-ci-2 --tags
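The only condition can be checked locally. The loop below runs candidate tag names against the same regular expression (grep -E syntax, a close approximation of the Ruby regexp GitLab evaluates):

```shell
# Tags that should (and should not) trigger the stage/production jobs
for tag in 2.4.10 0.1.0 v1.2 latest; do
  if echo "$tag" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+'; then
    echo "$tag: semver, job runs"
  else
    echo "$tag: skipped"
  fi
done
```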
- Dynamic environments let you have a dedicated stand for every feature branch in git. They are defined with the variables available in .gitlab-ci.yml. The job defines a dynamic environment for every branch in the repository except master
branch review:
stage: review
script: echo "Deploy to $CI_ENVIRONMENT_SLUG"
environment:
name: branch/$CI_COMMIT_REF_NAME
url: http://$CI_ENVIRONMENT_SLUG.example.com
only:
- branches
except:
- master
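$CI_ENVIRONMENT_SLUG is the environment name lowercased and reduced to URL-safe characters. A rough local approximation of that slugification (GitLab's exact rules also truncate the slug):

```shell
# Rough sketch of how GitLab slugifies "branch/$CI_COMMIT_REF_NAME"
name="branch/Feature_New-UI"
slug=$(echo "$name" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//')
echo "$slug"
# prints: branch-feature-new-ui
```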
- Install Docker:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-compose
- Prepare the environment:
mkdir -p /srv/gitlab/config /srv/gitlab/data /srv/gitlab/logs
cd /srv/gitlab/
touch docker-compose.yml
docker-compose.yml:
web:
image: 'gitlab/gitlab-ce:latest'
restart: always
hostname: 'gitlab.example.com'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://<VM-IP>'
ports:
- '80:80'
- '443:443'
- '2222:22'
volumes:
- '/srv/gitlab/config:/etc/gitlab'
- '/srv/gitlab/logs:/var/log/gitlab'
- '/srv/gitlab/data:/var/opt/gitlab'
- Start Gitlab CI:
docker-compose up -d
- In the GitLab GUI: disable registration, create the homework project group, create the example project
- Add a remote to the microservices project:
git remote add gitlab http://<ip>/homework/example.git
- Push to the repository:
http://35.204.52.154/homework/example
- The project's CI/CD pipeline is defined in the .gitlab-ci.yml file:
stages:
- build
- test
- deploy
build_job:
stage: build
script:
- echo 'Building'
test_unit_job:
stage: test
script:
- echo 'Testing 1'
test_integration_job:
stage: test
script:
- echo 'Testing 2'
deploy_job:
stage: deploy
script:
- echo 'Deploy'
- Install GitLab Runner:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
- Register the runner:
docker exec -it gitlab-runner gitlab-runner register
- Add the application source code to the repository:
git clone https://github.com/express42/reddit.git && rm -rf ./reddit/.git
git add reddit/
git commit -m 'Add reddit app'
git push gitlab gitlab-ci-1
- Update the pipeline definition in .gitlab-ci.yml:
image: ruby:2.4.2
stages:
...
variables:
DATABASE_URL: 'mongodb://mongo/user_posts'
before_script:
- cd reddit
- bundle install
...
test_unit_job:
stage: test
services:
- mongo:latest
script:
- ruby simpletest.rb
...
- The pipeline above invokes reddit/simpletest.rb:
require_relative './app'
require 'test/unit'
require 'rack/test'
set :environment, :test
class MyAppTest < Test::Unit::TestCase
include Rack::Test::Methods
def app
Sinatra::Application
end
def test_get_request
get '/'
assert last_response.ok?
end
end
- Add the testing library to reddit/Gemfile:
gem 'rack-test'
- Connect to the docker-host:
eval $(docker-machine env docker-host)
- Run the joffotron/docker-net-tools container with its set of network utilities:
docker run -ti --rm --network none joffotron/docker-net-tools -c ifconfig
The none driver is used; the container's output:
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
- Run a container in the docker host's network namespace:
docker run -ti --rm --network host joffotron/docker-net-tools -c ifconfig
Running ifconfig on the docker-host gives similar output:
docker-machine ssh docker-host ifconfig
- Run nginx in the background in the docker-host network namespace:
docker run --network host -d nginx
Running the command a second time produces an error:
2018/11/30 19:50:53 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
because port 80 is already taken.
- To view the existing net-namespaces, run on the docker-host:
sudo ln -s /var/run/docker/netns /var/run/netns
List them: sudo ip netns
- When a container runs in the host network, there is a single net-namespace: default.
- When a container runs in the none network, a net-namespace id is added to the list. Listing the net-namespaces:
user@docker-host:~$ sudo ip net
88f8a9be77ca
default
A command can be executed inside a chosen net-namespace:
user@docker-host:~$ sudo ip netns exec 88f8a9be77ca ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
- Create a bridge network:
docker network create reddit --driver bridge
- Start the reddit project using the bridge network:
docker run -d --network=reddit mongo:latest
docker run -d --network=reddit ozyab/post:1.0
docker run -d --network=reddit ozyab/comment:1.0
docker run -d --network=reddit -p 9292:9292 ozyab/ui:1.0
In this configuration the puma web service cannot connect to the mongodb database.
The services reference each other by the DNS names set in the Dockerfile ENV variables, and docker's built-in DNS knows nothing about these names.
Containers can be given names or network aliases at start time:
--name <name> (at most 1 name)
--network-alias <alias-name> (1 or more)
- Start the containers with network aliases:
docker run -d --network=reddit --network-alias=post_db --network-alias=comment_db mongo:latest
docker run -d --network=reddit --network-alias=post ozyab/post:1.0
docker run -d --network=reddit --network-alias=comment ozyab/comment:1.0
docker run -d --network=reddit -p 9292:9292 ozyab/ui:1.0
- Run the project in two bridge networks, so that the ui service has no access to the database.
Create the docker networks:
docker network create back_net --subnet=10.0.2.0/24
docker network create front_net --subnet=10.0.1.0/24
Start the containers:
docker run -d --network=front_net -p 9292:9292 --name ui ozyab/ui:1.0
docker run -d --network=back_net --name comment ozyab/comment:1.0
docker run -d --network=back_net --name post ozyab/post:1.0
docker run -d --network=back_net --name mongo_db --network-alias=post_db --network-alias=comment_db mongo:latest
When initializing a container, Docker can attach only one network to it, so the comment and post containers do not see the ui container in the neighboring network.
The post and comment containers need to be placed in both networks. Additional networks are attached with docker network connect <network> <container>:
docker network connect front_net post
docker network connect front_net comment
Install the bridge-utils package:
docker-machine ssh docker-host
sudo apt-get update && sudo apt-get install bridge-utils
docker network ls shows the list of docker's virtual networks.
ifconfig | grep br shows the list of bridge interfaces:
br-45935d0f2bbf Link encap:Ethernet HWaddr 02:42:6d:5a:8b:7e
br-45bbc0c70de1 Link encap:Ethernet HWaddr 02:42:94:69:ab:35
br-b6342f9c65f2 Link encap:Ethernet HWaddr 02:42:9a:b1:73:d9
Information about each bridge interface can be viewed with brctl show <interface>:
docker-user@docker-host:~$ brctl show br-45935d0f2bbf
bridge name bridge id STP enabled interfaces
br-45935d0f2bbf 8000.02426d5a8b7e no veth05b2946
veth2f50985
vetha882d28
A veth interface is one half of a virtual interface pair: this half lives in the host's network namespace and shows up in ifconfig, while the other half is inside the container.
View iptables: sudo iptables -nL -t nat
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 10.0.1.0/24 0.0.0.0/0
MASQUERADE all -- 10.0.2.0/24 0.0.0.0/0
MASQUERADE tcp -- 10.0.1.2 10.0.1.2 tcp dpt:9292
The first rules are responsible for letting traffic out of the containers.
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9292 to:10.0.1.2:9292
The last line forwards port 9292 into the container.
- The ./src/docker-compose.yml file requires the USERNAME environment variable:
export USERNAME=ozyab
Then run: docker-compose up -d
- Adapt docker-compose for the case with multiple networks and network aliases
- The .env file holds the variables for docker-compose.yml
- The base project name is derived from the name of the directory docker-compose is run in.
To set the base project name explicitly, add the variable COMPOSE_PROJECT_NAME=dockermicroservices
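A minimal sketch of how the .env variables reach the shell and compose; the file contents are illustrative:

```shell
# Sketch: a .env file next to docker-compose.yml; docker-compose reads it
# automatically, and it can also be sourced into the current shell
cat > .env <<'EOF'
USERNAME=ozyab
COMPOSE_PROJECT_NAME=dockermicroservices
EOF
set -a          # export every variable assigned while sourcing
. ./.env
set +a
echo "$COMPOSE_PROJECT_NAME"
# prints: dockermicroservices
```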
Working in the src directory:
post-py - service responsible for writing posts
comment - service responsible for writing comments
ui - web interface that talks to the other services
- Build the images:
docker build -t ozyab/post:1.0 ./post-py
docker build -t ozyab/comment:1.0 ./comment
docker build -t ozyab/ui:1.0 ./ui
- A separate bridge network for the containers, since network aliases do not work in the default network:
docker network create reddit
- Start the containers in this network with container network aliases:
docker run -d --network=reddit --network-alias=post_db --network-alias=comment_db mongo:latest
docker run -d --network=reddit --network-alias=post ozyab/post:1.0
docker run -d --network=reddit --network-alias=comment ozyab/comment:1.0
docker run -d --network=reddit -p 9292:9292 ozyab/ui:1.0
- Stop all containers:
docker kill $(docker ps -q)
- Create a Docker volume:
docker volume create reddit_db
- Start the containers with the docker volume:
docker run -d --network=reddit --network-alias=post_db --network-alias=comment_db -v reddit_db:/data/db mongo:latest
docker run -d --network=reddit --network-alias=post ozyab/post:1.0
docker run -d --network=reddit --network-alias=comment ozyab/comment:1.0
docker run -d --network=reddit -p 9292:9292 ozyab/ui:2.0
- After a restart the data remains in the database
- Working with docker-machine:
docker-machine create <name>
- create a docker host
eval $(docker-machine env <name>)
- switch to the docker host
eval $(docker-machine env --unset)
- switch back to the local docker
docker-machine rm <name>
- remove the docker host
- Create the docker-host:
docker-machine create --driver google \
--google-machine-image https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts \
--google-machine-type n1-standard-1 \
--google-zone europe-west1-b \
docker-host
- The created docker-host can then be seen by running: docker-machine ls
- Run htop in docker: docker run --rm -ti tehbilly/htop
Only the htop process will be visible.
With docker run --rm --pid host -ti tehbilly/htop, all processes on the host machine are visible
- Added:
Dockerfile
- text description of our image
mongod.conf
- prepared config for mongodb
db_config
- environment variable with the mongodb link
start.sh
- application start script
- Build the image:
docker build -t reddit:latest .
- Run the container:
docker run --name reddit -d --network=host reddit:latest
- Create a firewall rule for incoming port 9292:
gcloud compute firewall-rules create reddit-app \
--allow tcp:9292 \
--target-tags=docker-machine \
--description="Allow PUMA connections" \
--direction=INGRESS
- Commands for working with the image:
docker tag reddit:latest <login>/otus-reddit:1.0
- tag the reddit image
docker push <login>/otus-reddit:1.0
- push the image to the registry
docker logs reddit -f
- view the logs
docker inspect <login>/otus-reddit:1.0
- view information about the image
docker inspect <login>/otus-reddit:1.0 -f '{{.ContainerConfig.Cmd}}'
- view only specific information about the container
docker diff reddit
- view changes made to the filesystem of the running container
- Added the PR template file .github/PULL_REQUEST_TEMPLATE
- ΠΠ½ΡΠ΅Π³ΡΠ°ΡΠΈΡ ΡΠΎ slack Π²ΡΠΏΠΎΠ»Π½ΡΠ΅ΡΡΡ ΠΊΠΎΠΌΠ°Π½Π΄ΠΎΠΉ
/github subscribe Otus-DevOps-2018-09/ozyab09_microservices
- Run a container:
docker run hello-world
- List running containers:
docker ps
- List all containers:
docker ps -a
- Create and start a container:
docker run -it ubuntu:16.04 /bin/bash
- The run command creates and starts a container from an image
start starts a previously created, stopped container
attach attaches the terminal to a created container
docker system df shows how much disk space is taken up by images, containers, and volumes
- Create an image from a container:
docker commit <container_id> username/ubuntu-tmp-file