From 87be7cefc8df480b2b3c597dc66891b1c6cd88a3 Mon Sep 17 00:00:00 2001
From: rimas
Date: Fri, 23 Mar 2018 09:30:49 +0000
Subject: [PATCH] add k8s upgrade doc, bump k8s version, prettify docs

---
 README.md                  |  3 ++
 docs/DCOS_AGENTS.md        | 14 +++----
 docs/INSTALL_AWS.md        | 54 ++++++++++++------------
 docs/INSTALL_AZURE.md      | 56 ++++++++++++-------------
 docs/INSTALL_GCP.md        | 52 +++++++++++------------
 docs/INSTALL_KUBERNETES.md | 34 ++++++++-------
 docs/INSTALL_ONPREM.md     | 16 +++----
 docs/UPGRADE_DCOS.md       | 16 +++----
 docs/UPGRADE_KUBERNETES.md | 85 ++++++++++++++++++++++++++++++++++++++
 9 files changed, 211 insertions(+), 119 deletions(-)
 create mode 100644 docs/UPGRADE_KUBERNETES.md

diff --git a/README.md b/README.md
index 7a9a062..290dc7e 100644
--- a/README.md
+++ b/README.md
@@ -48,6 +48,9 @@ Upgrade the DC/OS cluster:
 Change number of DC/OS agents:
 * [Add/remove DC/OS agents](docs/DCOS_AGENTS.md)

+Upgrade the Kubernetes cluster:
+* [Upgrade Kubernetes](docs/UPGRADE_KUBERNETES.md)
+
 ## Documentation

 All documentation for this project is located in the [docs](docs/) directory at the root of this repository.
diff --git a/docs/DCOS_AGENTS.md b/docs/DCOS_AGENTS.md
index aa4d9e6..7c67fd3 100644
--- a/docs/DCOS_AGENTS.md
+++ b/docs/DCOS_AGENTS.md
@@ -20,14 +20,14 @@ Edit `./hosts.yaml` and fill in the public IP addresses of your cluster agents s
 To check that all instances are reachable via Ansible, run the following:

-```bash
-ansible all -m ping
+```shell
+$ ansible all -m ping
 ```

 Finally, apply the Ansible playbook:

-```bash
-ansible-playbook plays/install.yml
+```shell
+$ ansible-playbook plays/install.yml
 ```

 ## Cloud Providers
@@ -41,7 +41,7 @@ num_of_public_agents = "1"
 Then you can apply the profile with:

-```bash
-make launch-infra
-ansible-playbook -i inventory.py plays/install.yml
+```shell
+$ make launch-infra
+$ ansible-playbook -i inventory.py plays/install.yml
 ```

diff --git a/docs/INSTALL_AWS.md b/docs/INSTALL_AWS.md
index a3d6e5a..8d3b8ea 100644
--- a/docs/INSTALL_AWS.md
+++ b/docs/INSTALL_AWS.md
@@ -2,17 +2,17 @@
 With the following guide, you are able to install a DC/OS cluster on AWS. You need the tools Terraform and Ansible installed. On MacOS, you can use [brew](https://brew.sh/) for that.

-```
-brew install terraform
-brew install ansible
+```shell
+$ brew install terraform
+$ brew install ansible
 ```

 ## Setup infrastructure

 ### Pull down the DC/OS Terraform scripts below

-```bash
-make aws
+```shell
+$ make aws
 ```

 ### Configure your AWS ssh Keys
@@ -21,8 +21,8 @@ In the file `.deploy/desired_cluster_profile` there is a `key_name` variable. Th
 When you have your key available, you can use ssh-add.
-```bash
-ssh-add ~/.ssh/path_to_you_key.pem
+```shell
+$ ssh-add ~/.ssh/path_to_your_key.pem
 ```

 ### Configure your IAM AWS Keys
@@ -32,7 +32,7 @@ http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
 Here is an example of the output when you're done:

-```bash
+```shell
 $ cat ~/.aws/credentials
 [default]
 aws_access_key_id = ACHEHS71DG712w7EXAMPLE
@@ -45,7 +45,7 @@ The setup variables for Terraform are defined in the file `.deploy/desired_clust
 For example, you can see the default configuration of your cluster:

-```bash
+```shell
 $ cat .deploy/desired_cluster_profile
 os = "centos_7.4"
 state = "none"
@@ -66,14 +66,14 @@ admin_cidr = "0.0.0.0/0"
 You can plan the profile with Terraform while referencing:

-```bash
-make plan
+```shell
+$ make plan
 ```

 If you are happy with the changes, then you can apply the profile with Terraform while referencing:

-```bash
-make launch-infra
+```shell
+$ make launch-infra
 ```

 ## Install DC/OS
@@ -82,8 +82,8 @@ Once the components are created, we can run the Ansible script to install DC/OS
 The setup variables for DC/OS are defined in the file `group_vars/all`. Copy the example file by running:

-```
-cp group_vars/all.example group_vars/all
+```shell
+$ cp group_vars/all.example group_vars/all
 ```

 The now created file `group_vars/all` is for configuring DC/OS. The variables are explained within the file.
@@ -101,40 +101,40 @@ dcos_s3_bucket: 'YOUR_BUCKET_NAME'
 Ansible also needs to know how to find the instances that got created via Terraform. For that, you run a dynamic inventory script called `./inventory.py`. To use it, specify the script with the parameter `-i`.
 For example, check that all instances are reachable via Ansible:

-```
-ansible all -i inventory.py -m ping
+```shell
+$ ansible all -i inventory.py -m ping
 ```

 Finally, you can install DC/OS by running:

-```
-ansible-playbook -i inventory.py plays/install.yml
+```shell
+$ ansible-playbook -i inventory.py plays/install.yml
 ```

 ## Access the cluster

 If the installation was successful, you should be able to reach the Master load balancer. You can find the URL of the Master LB with the following command:

-```
-make ui
+```shell
+$ make ui
 ```

 Set up the `dcos` CLI to access your cluster:

-```
-make setup-cli
+```shell
+$ make setup-cli
 ```

 The Terraform script also created a load balancer for the public agents:

-```
-make public-lb
+```shell
+$ make public-lb
 ```

 ## Destroy the cluster

 To delete the AWS stack run the command:

-```
-make destroy
+```shell
+$ make destroy
 ```

diff --git a/docs/INSTALL_AZURE.md b/docs/INSTALL_AZURE.md
index c31812a..df5401b 100644
--- a/docs/INSTALL_AZURE.md
+++ b/docs/INSTALL_AZURE.md
@@ -2,25 +2,25 @@
 With the following guide, you are able to install a DC/OS cluster on Azure. You need the tools Terraform and Ansible installed. On MacOS, you can use [brew](https://brew.sh/) for that.

-```
-brew install terraform
-brew install ansible
+```shell
+$ brew install terraform
+$ brew install ansible
 ```

 ## Setup infrastructure

 ### Pull down the DC/OS Terraform scripts below

-```bash
-make azure
+```shell
+$ make azure
 ```

 ### Configure your Azure ssh Keys

 Set the private key that you will be using to your ssh-agent and set the public key in Terraform.
-```bash
-ssh-add ~/.ssh/your_private_azure_key.pem
+```shell
+$ ssh-add ~/.ssh/your_private_azure_key.pem
 ```

 Add your Azure ssh key to the `.deploy/desired_cluster_profile` file:
@@ -35,7 +35,7 @@ Follow the Terraform instructions [here](https://www.terraform.io/docs/providers
 When you've successfully retrieved your output of `az account list`, create a source file to easily run your credentials in the future.

-```bash
+```shell
 $ cat ~/.azure/credentials
 export ARM_TENANT_ID=45ef06c1-a57b-40d5-967f-88cf8example
 export ARM_CLIENT_SECRET=Lqw0kyzWXyEjfha9hfhs8dhasjpJUIGQhNFExAmPLE
@@ -47,7 +47,7 @@ export ARM_SUBSCRIPTION_ID=846d9e22-a320-488c-92d5-41112example
 Set your environment variables by sourcing the file before you run any Terraform commands.

-```bash
+```shell
 $ source ~/.azure/credentials
 ```

@@ -57,7 +57,7 @@ The setup variables for Terraform are defined in the file `.deploy/desired_clust
 For example, you can see the default configuration of your cluster:

-```bash
+```shell
 $ cat .deploy/desired_cluster_profile
 os = "centos_7.3"
 state = "none"
@@ -77,14 +77,14 @@ admin_cidr = "0.0.0.0/0"
 You can plan the profile with Terraform while referencing:

-```bash
-make plan
+```shell
+$ make plan
 ```

 If you are happy with the changes, then you can apply the profile with Terraform while referencing:

-```bash
-make launch-infra
+```shell
+$ make launch-infra
 ```

 ## Install DC/OS

@@ -95,8 +95,8 @@ You have to add the private SSH key (defined in Terraform with variable `ssh_key
 The setup variables for DC/OS are defined in the file `group_vars/all`. Copy the example file by running:

-```
-cp group_vars/all.example group_vars/all
+```shell
+$ cp group_vars/all.example group_vars/all
 ```

 The now created file `group_vars/all` is for configuring DC/OS. The variables are explained within the file.
@@ -112,40 +112,40 @@ dcos_exhibitor_azure_account_key: '******'
 Ansible also needs to know how to find the instances that got created via Terraform.
 For that, you run a dynamic inventory script called `./inventory.py`. To use it, specify the script with the parameter `-i`. For example, check that all instances are reachable via Ansible:

-```
-ansible all -i inventory.py -m ping
+```shell
+$ ansible all -i inventory.py -m ping
 ```

 Finally, you can install DC/OS by running:

-```
-ansible-playbook -i inventory.py plays/install.yml
+```shell
+$ ansible-playbook -i inventory.py plays/install.yml
 ```

 ## Access the cluster

 If the installation was successful, you should be able to reach the Master load balancer. You can find the URL of the Master LB with the following command:

-```
-make ui
+```shell
+$ make ui
 ```

 Set up the `dcos` CLI to access your cluster:

-```
-make setup-cli
+```shell
+$ make setup-cli
 ```

 The Terraform script also created a load balancer for the public agents:

-```
-make public-lb
+```shell
+$ make public-lb
 ```

 ## Destroy the cluster

 To delete the Azure stack run the command:

-```
-make destroy
+```shell
+$ make destroy
 ```

diff --git a/docs/INSTALL_GCP.md b/docs/INSTALL_GCP.md
index 15e0d65..111c4a0 100644
--- a/docs/INSTALL_GCP.md
+++ b/docs/INSTALL_GCP.md
@@ -2,9 +2,9 @@
 With the following guide, you are able to install a DC/OS cluster on GCP. You need the tools Terraform and Ansible installed. On MacOS, you can use [brew](https://brew.sh/) for that.

-```
-brew install terraform
-brew install ansible
+```shell
+$ brew install terraform
+$ brew install ansible
 ```

 ## Setup infrastructure
@@ -19,22 +19,22 @@ brew install ansible
 Run this command to authenticate to the Google Provider. This will bring down your keys locally on the machine for Terraform to use.

-```bash
+```shell
 $ gcloud auth login
 $ gcloud auth application-default login
 ```

 ### Pull down the DC/OS Terraform scripts below

-```bash
-make gcp
+```shell
+$ make gcp
 ```

 ### Configure your GCP ssh keys

 Set the public key that you will be using to your ssh-agent and set the public key in Terraform.
 This will allow you to log in to the cluster after DC/OS is deployed and also helps Terraform set up your cluster at deployment time.

-```bash
+```shell
 $ ssh-add ~/.ssh/google_compute_engine.pub
 ```

@@ -58,7 +58,7 @@ The setup variables for Terraform are defined in the file `.deploy/desired_clust
 For example, you can see the default configuration of your cluster:

-```bash
+```shell
 $ cat .deploy/desired_cluster_profile
 os = "centos_7.3"
 state = "none"
@@ -89,14 +89,14 @@ admin_cidr = "0.0.0.0/0"
 You can plan the profile with Terraform while referencing:

-```bash
-make plan
+```shell
+$ make plan
 ```

 If you are happy with the changes, then you can apply the profile with Terraform while referencing:

-```bash
-make launch-infra
+```shell
+$ make launch-infra
 ```

 ## Install DC/OS

@@ -105,48 +105,48 @@ Once the components are created, we can run the Ansible script to install DC/OS
 The setup variables for DC/OS are defined in the file `group_vars/all`. Copy the example file by running:

-```
-cp group_vars/all.example group_vars/all
+```shell
+$ cp group_vars/all.example group_vars/all
 ```

 The now created file `group_vars/all` is for configuring DC/OS. The variables are explained within the file.

 Ansible also needs to know how to find the instances that got created via Terraform. For that, you run a dynamic inventory script called `./inventory.py`. To use it, specify the script with the parameter `-i`. For example, check that all instances are reachable via Ansible:

-```
-ansible all -i inventory.py -m ping
+```shell
+$ ansible all -i inventory.py -m ping
 ```

 Finally, you can install DC/OS by running:

-```
-ansible-playbook -i inventory.py plays/install.yml
+```shell
+$ ansible-playbook -i inventory.py plays/install.yml
 ```

 ## Access the cluster

 If the installation was successful, you should be able to reach the Master load balancer.
 You can find the URL of the Master LB with the following command:

-```
-make ui
+```shell
+$ make ui
 ```

 Set up the `dcos` CLI to access your cluster:

-```
-make setup-cli
+```shell
+$ make setup-cli
 ```

 The Terraform script also created a load balancer for the public agents:

-```
-make public-lb
+```shell
+$ make public-lb
 ```

 ## Destroy the cluster

 To delete the GCP stack run the command:

-```
-make destroy
+```shell
+$ make destroy
 ```

diff --git a/docs/INSTALL_KUBERNETES.md b/docs/INSTALL_KUBERNETES.md
index 5d553f2..9fd5ed8 100644
--- a/docs/INSTALL_KUBERNETES.md
+++ b/docs/INSTALL_KUBERNETES.md
@@ -4,18 +4,18 @@ Kubernetes is now available as a DC/OS package to quickly and reliably run Kube
 ## Known limitations

-Before proceeding, please check the [current Kubernetes package limitations](https://docs.mesosphere.com/service-docs/kubernetes/1.0.1-1.9.4/limitations/).
+Before proceeding, please check the [current Kubernetes package limitations](https://docs.mesosphere.com/service-docs/kubernetes/1.0.2-1.9.6/limitations/).

 ## Pre-Requisites

-Make sure your cluster fulfils the [Kubernetes package default requirements](https://docs.mesosphere.com/service-docs/kubernetes/1.0.1-1.9.4/install/#prerequisites/).
+Make sure your cluster fulfils the [Kubernetes package default requirements](https://docs.mesosphere.com/service-docs/kubernetes/1.0.2-1.9.6/install/#prerequisites/).

 ### Download command-line tools

 If you haven't already, please download the DC/OS client, `dcos`, and the Kubernetes client, `kubectl`:

-```bash
+```shell
 $ make get-cli
 ```

@@ -29,13 +29,13 @@ You are now ready to install the Kubernetes package.
 For DC/OS Open cluster run:

-```bash
+```shell
 $ make install-k8s
 ```

 For DC/OS Enterprise cluster run:

-```bash
+```shell
 $ make install-k8s-ee
 ```

@@ -45,7 +45,7 @@
 Wait until all tasks are running before trying to access the Kubernetes API.
 You can watch the progress of the deployment so far with:

-```bash
+```shell
 $ watch dcos kubernetes plan show deploy
 ```

@@ -97,19 +97,19 @@ deploy (serial strategy) (COMPLETE)
 In order to access the Kubernetes API from outside the DC/OS cluster, one needs to configure `kubectl`, the Kubernetes CLI tool:

-```bash
+```shell
 $ dcos kubernetes kubeconfig
 ```

 Let's test accessing the Kubernetes API and list the Kubernetes cluster nodes:

-```bash
+```shell
 $ kubectl get nodes
 NAME                                          STATUS    ROLES     AGE       VERSION
-kube-node-0-kubelet.kubernetes.mesos          Ready               8m        v1.9.4
-kube-node-1-kubelet.kubernetes.mesos          Ready               8m        v1.9.4
-kube-node-2-kubelet.kubernetes.mesos          Ready               8m        v1.9.4
-kube-node-public-0-kubelet.kubernetes.mesos   Ready               7m        v1.9.4
+kube-node-0-kubelet.kubernetes.mesos          Ready               8m        v1.9.6
+kube-node-1-kubelet.kubernetes.mesos          Ready               8m        v1.9.6
+kube-node-2-kubelet.kubernetes.mesos          Ready               8m        v1.9.6
+kube-node-public-0-kubelet.kubernetes.mesos   Ready               7m        v1.9.6
 ```

 ### Using kubectl proxy
@@ -117,20 +117,24 @@ kube-node-public-0-kubelet.kubernetes.mesos   Ready     7m        v1.9
 For running more advanced commands such as `kubectl proxy`, an SSH tunnel is still required. To create the tunnel, run:

-```bash
+```shell
 $ make kubectl-tunnel
 ```

 If `kubectl` is properly configured and the tunnel was established successfully, you should now be able to run `kubectl proxy`, as well as any other command, in another terminal.

+## Upgrading Kubernetes
+
+See the [Kubernetes upgrade doc](UPGRADE_KUBERNETES.md).
+
 ## Uninstall Kubernetes

 To uninstall the DC/OS Kubernetes package run:

-```bash
+```shell
 $ make uninstall-k8s
 ```

 ## Documentation

-For more details, please check the official [Kubernetes package docs](https://docs.mesosphere.com/service-docs/kubernetes/1.0.1-1.9.4).
+For more details, please check the official [Kubernetes package docs](https://docs.mesosphere.com/service-docs/kubernetes/1.0.2-1.9.6).
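If you want to script the node check above rather than eyeball the `kubectl get nodes` table, counting the `Ready` rows is usually enough. A minimal sketch (the sample rows below are illustrative stand-ins; against a live cluster you would pipe the output of `kubectl get nodes` itself):

```shell
# Count nodes reporting STATUS "Ready". The printf block stands in for
# live `kubectl get nodes` output so the sketch is self-contained.
printf '%s\n' \
  'NAME                                          STATUS    AGE       VERSION' \
  'kube-node-0-kubelet.kubernetes.mesos          Ready     8m        v1.9.6' \
  'kube-node-1-kubelet.kubernetes.mesos          Ready     8m        v1.9.6' \
  'kube-node-public-0-kubelet.kubernetes.mesos   NotReady  7m        v1.9.6' |
  awk 'NR > 1 && $2 == "Ready" { n++ } END { printf "%d Ready node(s)\n", n }'
# prints: 2 Ready node(s)
```

Skipping the header row (`NR > 1`) keeps the literal word `STATUS` from being miscounted as a node state.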
diff --git a/docs/INSTALL_ONPREM.md b/docs/INSTALL_ONPREM.md
index 770cabb..2e371a9 100644
--- a/docs/INSTALL_ONPREM.md
+++ b/docs/INSTALL_ONPREM.md
@@ -3,8 +3,8 @@
 With the following guide, you are able to install a DC/OS cluster on premises. You need the Ansible tool installed. On MacOS, you can use [brew](https://brew.sh/) for that.

-```
-brew install ansible
+```shell
+$ brew install ansible
 ```

 Execute `ssh-add {keypair}.pem` to be able to access your cluster nodes via SSH
@@ -63,20 +63,20 @@ all:
 The setup variables for DC/OS are defined in the file `group_vars/all`. Copy the example file by running:

-```
-cp group_vars/all.example group_vars/all
+```shell
+$ cp group_vars/all.example group_vars/all
 ```

 The now created file `group_vars/all` is for configuring DC/OS. You have to fill in the variables that match your preferred configuration. The variables are explained within the file.

 To check that all instances are reachable via Ansible, run the following:

-```
-ansible all -m ping
+```shell
+$ ansible all -m ping
 ```

 Finally, you can install DC/OS by applying the Ansible playbook:

-```
-ansible-playbook plays/install.yml
+```shell
+$ ansible-playbook plays/install.yml
 ```

diff --git a/docs/UPGRADE_DCOS.md b/docs/UPGRADE_DCOS.md
index d5d4c65..5d1194d 100644
--- a/docs/UPGRADE_DCOS.md
+++ b/docs/UPGRADE_DCOS.md
@@ -2,28 +2,28 @@
 In order to upgrade a cluster, you have to set the download URL for the target version of DC/OS inside the file `group_vars/all`. For example, if you want to upgrade to DC/OS 1.11.1, specify the download URL for this version within the variable `dcos_download`.
-```
-dcos_download: https://downloads.dcos.io/dcos/stable/1.11.1/dcos_generate_config.sh
+```yaml
+dcos_download: https://downloads.dcos.io/dcos/stable/1.11.1/dcos_generate_config.sh
 ```

 You also need to specify the DC/OS version that is currently running on the cluster within the variable `dcos_upgrade_from_version`:

-```
-dcos_upgrade_from_version: '1.11.0'
+```yaml
+dcos_upgrade_from_version: '1.11.0'
 ```

 ## On-Premises upgrade

 To start the upgrade, trigger the play `plays/upgrade.yml`:

-```
-ansible-playbook plays/upgrade.yml
+```shell
+$ ansible-playbook plays/upgrade.yml
 ```

 ## Cloud Providers upgrade

 To start the upgrade, trigger the play `plays/upgrade.yml` and specify the DC/OS version that is currently running on the cluster as the variable `installed_cluster_version`. The command for that is:

-```
-ansible-playbook -i inventory.py plays/upgrade.yml
+```shell
+$ ansible-playbook -i inventory.py plays/upgrade.yml
 ```

diff --git a/docs/UPGRADE_KUBERNETES.md b/docs/UPGRADE_KUBERNETES.md
new file mode 100644
index 0000000..ee8a8cf
--- /dev/null
+++ b/docs/UPGRADE_KUBERNETES.md
@@ -0,0 +1,85 @@
+# Kubernetes upgrade
+
+## Updating
+
+In order to update the package, the `dcos kubernetes update` subcommand
+is available.
+
+```shell
+$ dcos kubernetes update -h
+usage: dcos kubernetes [<flags>] update [<flags>]
+
+Flags:
+  -h, --help             Show context-sensitive help.
+  -v, --verbose          Enable extra logging of requests/responses
+      --name="kubernetes"
+                         Name of the service instance to query
+      --options=OPTIONS  Path to a JSON file containing the target package options
+      --package-version=PACKAGE-VERSION
+                         The target package version
+      --yes              Do not ask for confirmation before starting the update process
+      --timeout=1200s    Maximum time to wait for the update process to complete
+
+```
+
+### Updating the package version
+
+Before starting the update process, it is recommended to install the CLI
+of the new package version:
+
+```shell
+$ dcos package install kubernetes --cli --package-version=<version>
+```
+
+#### Kubernetes on DC/OS Enterprise Edition
+
+Below is how to start the package version update:
+
+```shell
+$ dcos kubernetes update --package-version=<version>
+About to start an update from version <current version> to <version>
+
+Updating these components means the Kubernetes cluster may experience some
+downtime or, in the worst-case scenario, cease to function properly.
+Before updating proceed cautiously and always backup your data.
+
+This operation is long-running and has to run to completion.
+Are you sure you want to continue? [yes/no] yes
+
+2018/03/01 15:40:14 starting update process...
+2018/03/01 15:40:15 waiting for update to finish...
+2018/03/01 15:41:56 update complete!
+```
+
+#### Kubernetes on DC/OS Open Edition
+
+In contrast to the Enterprise edition, the package upgrade requires some additional
+steps to achieve the same result.
+
+First, export the current package configuration into a JSON file called `config.json`:
+
+```shell
+$ dcos kubernetes describe > config.json
+```
+
+In order to upgrade in a non-destructive manner, first remove the DC/OS Kubernetes
+scheduler by running:
+
+```shell
+$ dcos marathon app remove /kubernetes
+```
+
+And then install the new version of the package:
+
+```shell
+$ dcos package install kubernetes --package-version=<version> --options=config.json
+```
+
+You can watch the upgrade progress with:
+
+```shell
+$ watch dcos kubernetes plan show deploy
+```
+
+## Documentation
+
+For more details, please check the official [Kubernetes package upgrade doc](https://docs.mesosphere.com/services/kubernetes/1.0.2-1.9.6/upgrade/#updating-the-package-version/).
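A malformed `config.json` would derail the reinstall step above, so it can be worth validating the exported options before running `dcos package install`. A small sketch using a hypothetical stand-in file (against a real cluster, `config.json` comes from `dcos kubernetes describe`):

```shell
# Stand-in options file; with a real cluster this would come from
# `dcos kubernetes describe > config.json`. The keys here are hypothetical.
echo '{"kubernetes": {"node_count": 3}}' > config.json

# python3 -m json.tool serves as a simple JSON validator: it exits
# non-zero on malformed input.
if python3 -m json.tool < config.json > /dev/null 2>&1; then
  echo "config.json is valid JSON"
else
  echo "config.json is NOT valid JSON" >&2
fi
```

Running this before the destructive `dcos marathon app remove` step gives a cheap safety net: if the export produced garbage, you find out while the old scheduler is still running.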