Merge pull request #100 from snowdrop/wip-3.10
Update project to Openshift 3.10
geoand authored Aug 21, 2018
2 parents b77fd67 + 8489f5a commit 85aa65a
Showing 35 changed files with 321 additions and 195 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -266,7 +266,7 @@ summarize and present the possibilities offered:

```diff
 cd minishift
-./bootstrap_vm.sh true 3.9.0
+./bootstrap_vm.sh true 3.10.0
```

**NOTE** : The caching option can be used to export the Docker images locally, which speeds up the bootstrap process the next time you recreate the OpenShift virtual machine / installation.
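As a sketch of the flag's effect (assuming the first positional argument of `bootstrap_vm.sh` is the caching toggle, as the invocation above suggests), a run without caching would be:

```bash
# Hypothetical: rebuild the VM without exporting/reusing cached docker images
cd minishift
./bootstrap_vm.sh false 3.10.0
```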
17 changes: 15 additions & 2 deletions ansible/README-cloud.md
@@ -12,7 +12,7 @@
```diff
 echo "#### Git clone openshift ansible"
 if [ ! -d "openshift-ansible" ]; then
-git clone -b release-3.9 https://github.com/openshift/openshift-ansible.git
+git clone -b release-3.10 https://github.com/openshift/openshift-ansible.git
 fi
```

@@ -57,4 +57,17 @@
If the `ansible_user` that has been set in the inventory is not `root`, then the `--become` flag needs to be added to both of the above commands.

**REMARK** : Customization of the installation (the generated inventory file) is possible by overriding the variables found in `inventory/cloud_host` from the command line using Ansible's `-e` syntax, as shown below.
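For example, to set the cluster admin password for a single run (a sketch reusing a playbook and variable that appear elsewhere in this repository):

```bash
# Override an inventory variable at invocation time with Ansible's -e flag
ansible-playbook -i inventory/cloud_host playbook/post_installation.yml -e openshift_admin_pwd=foopass
```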

- Set up DNS

Execute

```bash
ansible-playbook -i inventory/cloud_host playbook/dns.yml
```

If the `ansible_user` that has been set in the inventory is not `root`, then the `--become` flag needs to be added to the above command.

Check out the [docs](https://docs.okd.io/latest/install/prerequisites.html#prereq-dns) for more detail on why this is needed.
2 changes: 1 addition & 1 deletion ansible/README-post-installation.md
@@ -207,5 +207,5 @@ ansible-playbook -i inventory/cloud_host playbook/post_installation.yml -e opens

To install the service catalog, execute this command:
```diff
-ansible-playbook -i inventory/cloud_host openshift-ansible/playbooks/openshift-service-catalog/config.yml
+ansible-playbook -i inventory/cloud_host openshift-ansible/playbooks/openshift-service-catalog/config.yml -e ansible_service_broker_install=true
```
6 changes: 6 additions & 0 deletions ansible/playbook/dns.yml
@@ -0,0 +1,6 @@
```yaml
---
- hosts: "{{ openshift_node | default('masters') }}"
  gather_facts: true

  roles:
    - { role: dns }
```
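The play targets the `masters` inventory group by default; `openshift_node` can be overridden at invocation time, for instance (group name illustrative):

```bash
# Run the dns role against the 'nodes' group instead of 'masters'
ansible-playbook -i inventory/cloud_host playbook/dns.yml -e openshift_node=nodes
```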
5 changes: 1 addition & 4 deletions ansible/playbook/roles/add_extra_users/tasks/main.yml
@@ -1,8 +1,5 @@
```diff
-- set_fact:
-    path_users_pwd: "{{ openshift_env.config_dir }}/{{ users_pwd_file }}"
-
 - name: Add user/pwd to file
-  command: "htpasswd -b {{ path_users_pwd }} user{{ item }} pwd{{ item }}"
+  command: "htpasswd -b {{ openshift_env.htpasswd_file }} user{{ item }} pwd{{ item }}"
   with_sequence: start={{ first_extra_user_offset }} count={{ number_of_extra_users }} format=%02d

 - block:
```
1 change: 0 additions & 1 deletion ansible/playbook/roles/add_extra_users/vars/main.yml

This file was deleted.

26 changes: 4 additions & 22 deletions ansible/playbook/roles/cluster/defaults/main.yml
@@ -5,7 +5,7 @@ openshift_github_name: origin
```diff
 openshift_github_url: https://api.github.com/repos

 # Openshift parameters
-openshift_release_tag_name: "v3.9.0"
+openshift_release_tag_name: "v3.10.0"

 openshift_client_dest: /usr/local/bin
 openshift_force_client_install: false
```
@@ -17,28 +17,10 @@ cluster_ip_address: "{{ public_ip_address }}" # Re use the inventory ip address
```diff
 cluster_use_existing_config: true
 cluster_log_level: 1

-# Switch to docker folder config dir as it uses a different path and not the onde defined for oc cluster up
-cluster_host_config_dir: /var/lib/origin/openshift.local.config
-cluster_host_data_dir: /var/lib/openshift/data
-cluster_host_volumes_dir: /var/lib/openshift/volumes
-cluster_host_pv_dir: /var/lib/openshift/pv
-
-# Features
-cluster_service_catalog: false
-cluster_metrics: false
-cluster_logging: false
-
 openshift_up_options: '
---version={{ openshift_release_tag_name }}
---host-config-dir={{ cluster_host_config_dir }}
---host-data-dir={{ cluster_host_data_dir }}
---host-volumes-dir={{ cluster_host_volumes_dir }}
---host-pv-dir={{ cluster_host_pv_dir }}
 --use-existing-config={{ cluster_use_existing_config }}
+--enable=[automation-service-broker, centos-imagestreams, persistent-volumes, registry, rhel-imagestreams, router, service-catalog, template-service-broker, web-console]
---write-config=true
 --public-hostname={{ cluster_ip_address }}
 --routing-suffix={{ cluster_ip_address }}.nip.io
---loglevel={{ cluster_log_level }}
---service-catalog={{ cluster_service_catalog }}
---logging={{ cluster_logging }}
---metrics={{ cluster_metrics }}'
+--server-loglevel={{ cluster_log_level }}'
```
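With the trimmed 3.10 defaults above, the `oc cluster up` invocation assembled by the cluster role comes out roughly as follows (a sketch omitting the `--enable` list; the IP address and log level are illustrative values):

```bash
# Approximate expansion of openshift_up_options for a host with public IP 1.2.3.4
oc cluster up \
  --use-existing-config=true \
  --public-hostname=1.2.3.4 \
  --routing-suffix=1.2.3.4.nip.io \
  --server-loglevel=1
```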

6 changes: 5 additions & 1 deletion ansible/playbook/roles/cluster/tasks/main.yml
@@ -34,11 +34,15 @@
```diff
   debug:
     msg: "Parameters : {{ openshift_up_options }}"

-- name: Run oc cluster up
+- name: Enable options
   command: "oc cluster up {{ openshift_up_options }}"
   register: clusterupout
+  ignore_errors: yes
+
+- name: Launch cluster
+  command: "oc cluster up"
+  register: clusterupout

 - name: Cluster up
   include_tasks: cluster_up.yml
```

5 changes: 1 addition & 4 deletions ansible/playbook/roles/delete_extra_users/tasks/main.yml
@@ -1,8 +1,5 @@
```diff
-- set_fact:
-    path_users_pwd: "{{ openshift_env.config_dir }}/{{ users_pwd_file }}"
-
 - name: Delete user from password file
-  command: "htpasswd -D {{ path_users_pwd }} user{{ item }}"
+  command: "htpasswd -D {{ openshift_env.htpasswd_file }} user{{ item }}"
   with_sequence: start={{ first_extra_user_offset }} count={{ number_of_extra_users }} format=%02d

 - name: Delete user project
```
1 change: 0 additions & 1 deletion ansible/playbook/roles/delete_extra_users/vars/main.yml

This file was deleted.

9 changes: 9 additions & 0 deletions ansible/playbook/roles/dns/tasks/main.yml
@@ -0,0 +1,9 @@
```yaml
- name: Generate dnsmasq template
  template:
    src: local.conf.j2
    dest: /etc/dnsmasq.d/local.conf

- name: restart dnsmasq
  systemd:
    state: restarted
    name: dnsmasq
```
1 change: 1 addition & 0 deletions ansible/playbook/roles/dns/templates/local.conf.j2
@@ -0,0 +1 @@
```
address=/{{ ansible_hostname }}/{{ ansible_default_ipv4.address }}
```
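As a sketch of the rendered result: on a host named `master1` whose default IPv4 address is `10.0.0.4` (illustrative values), the generated `/etc/dnsmasq.d/local.conf` would contain:

```
address=/master1/10.0.0.4
```

With that entry, dnsmasq answers queries for the hostname (and any name under it) with the host's address.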
@@ -1,2 +1,2 @@
```diff
 # Release version of Openshift origin to be installed. IT will be used to configure the inventory file from the j2 template
-openshift_origin_version: 3.9
+openshift_origin_version: "3.10"
```
@@ -12,17 +12,15 @@ ansible_ssh_private_key_file={{ keyfile }}
```diff
 public_ip_address={{ ip_address }}
 host_key_checking=false

-containerized=true
 openshift_enable_excluders=false
-openshift_release=v{{ openshift_origin_version }}
+openshift_release="{{ openshift_origin_version }}"

 openshift_deployment_type=origin

+openshift_additional_repos=[{'id': 'origin-repo', 'name': 'Origin-RPMs', 'baseurl': 'https://storage.googleapis.com/origin-ci-test/logs/test_branch_origin_extended_conformance_gce_310/27/artifacts/rpms', 'enabled': 1, 'gpgcheck': 0}]
+
 openshift_hostname={{ hostname if (hostname is defined) else ip_address }}
 openshift_master_cluster_public_hostname={{ ip_address }}
 openshift_master_default_subdomain={{ ip_address }}.nip.io
-openshift_master_unsupported_embedded_etcd=true

 # To avoid message
 # - Available disk space in "/var" (9.5 GB) is below minimum recommended (40.0 GB)
```
@@ -31,21 +29,24 @@ openshift_master_unsupported_embedded_etcd=true
```diff
 # - Docker version is higher than expected
 openshift_disable_check = docker_storage,memory_availability,disk_availability,docker_image_availability,package_version

-# we need to increase the pods per core because we might temporarily have multiple build pods running at the same time
-openshift_node_kubelet_args={'pods-per-core': ['20']}
-
-# ASB Service Catalog
-openshift_enable_service_catalog=false
-ansible_service_broker_registry_whitelist=[".*-apb$"]
-
 # Python Interpreter
 ansible_python_interpreter=/usr/bin/python

 # Enable htpasswd auth
-openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/users.htpasswd'}]
+openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
 # htpasswd -nb admin admin
 openshift_master_htpasswd_users={'admin': '$apr1$DloeoaY3$nqbN9fQBkyXgbj58buqEM.'}

+openshift_node_groups=[{'name': 'node-config-all-in-one', 'labels': ['node-role.kubernetes.io/master=true', 'node-role.kubernetes.io/infra=true', 'node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'kubeletArguments.pods-per-core','value': ['20']}]}]
+
+# Used to control the image version of origin-ansible-service-broker
+# see: https://github.com/openshift/openshift-ansible/blob/release-3.10/roles/ansible_service_broker/defaults/main.yml
+# Note: the variable seems to not be used currently, but that is most likely a bug
+ansible_service_broker_image_tag=1.2
+
+# don't install ASB be default
+ansible_service_broker_install=false


 {% set host_info = 'localhost ansible_connection=local' if ((use_local is defined) and (use_local | bool)) else ip_address %}
 {% set private_ip_address = private_ip_address if (private_ip_address is defined) else ip_address %}
```
@@ -62,4 +63,4 @@ openshift_master_htpasswd_users={'admin': '$apr1$DloeoaY3$nqbN9fQBkyXgbj58buqEM.
```diff
 # openshift-sdn gets installed. We mark the master node as not
 # schedulable.
 [nodes]
-{{ host_info }} openshift_node_labels="{'region':'infra','zone':'default', 'node-role.kubernetes.io/compute': 'true'}" openshift_public_hostname={{ ip_address }} openshift_hostname={{ private_ip_address }}
+{{ host_info }} openshift_node_group_name="node-config-all-in-one" openshift_public_hostname={{ ip_address }} openshift_hostname={{ private_ip_address }}
```
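After a run with this inventory, one can verify that the all-in-one node picked up the master/infra/compute roles defined by `node-config-all-in-one` (a manual spot-check, not part of this PR):

```bash
# Labels applied through the node group should show up on the node
oc get nodes --show-labels
```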
15 changes: 6 additions & 9 deletions ansible/playbook/roles/identity_provider/tasks/main.yml
@@ -8,29 +8,26 @@
- "openshift_admin_pwd != ''"
msg: "Please specify an password that will be used for the cluster using the 'openshift_admin_pwd' variable. The easiest way to do is to add -e openshift_admin_pwd=foopass to your Ansible CLI invocation"

- set_fact:
path_users_pwd: "{{ openshift_env.config_dir }}/{{ users_pwd_file }}"

- name: Install httpd-tools package if not there
yum:
name: httpd-tools
state: present

- stat:
path: "{{ path_users_pwd }}"
path: "{{ openshift_env.htpasswd_file }}"
register: p

- name: Create users.htpasswd file
- name: Create htpasswd file if needed
file:
path: "{{ path_users_pwd }}"
path: "{{ openshift_env.htpasswd_file }}"
state: touch
owner: root
group: root
mode: 0644
when: not p.stat.exists

- name: Create user/pwd
command: "htpasswd -b {{ path_users_pwd }} {{ openshift_admin_user}} {{ openshift_admin_pwd }}"
command: "htpasswd -b {{ openshift_env.htpasswd_file }} {{ openshift_admin_user}} {{ openshift_admin_pwd }}"

- name: Generate patch using template
template:
@@ -42,11 +39,11 @@
```diff
   register: patch

 - name: Patch master-config file to use HTPasswdPasswordIdentityProvider
-  command: "oc ex config patch {{ openshift_env.config_dir }}/master/master-config.yaml --patch '{{ patch.stdout }}'"
+  command: "oc ex config patch {{ openshift_env.master_config_file }} --patch '{{ patch.stdout }}'"
   register: r

 - name: Copy patch to new master config file
-  copy: content="{{ r.stdout }}" dest={{ openshift_env.config_dir }}/master/master-config.yaml
+  copy: content="{{ r.stdout }}" dest={{ openshift_env.master_config_file }}

 - name: Restart master
   include_role:
```
@@ -9,9 +9,9 @@
"provider": {
"apiVersion": "v1",
"kind": "HTPasswdPasswordIdentityProvider",
"file": "{{ path_users_pwd }}"
"file": "{{ openshift_env.htpasswd_file }}"
}
}
]
}
}
}
1 change: 0 additions & 1 deletion ansible/playbook/roles/identity_provider/vars/main.yml

This file was deleted.

9 changes: 1 addition & 8 deletions ansible/playbook/roles/install_istio/defaults/main.yml
@@ -1,8 +1 @@
```diff
-# Repository where the project will be cloned from
-istio_git_repo: https://github.com/istio/istio.git
-
-# Istio github branch to be used to install the istio playbook
-istio_git_branch: master
-
-# Folder where the project will be cloned on your machine
-istio_repo_dest: ~/.istio/playbooks
+istio_authentication: false
```
43 changes: 43 additions & 0 deletions ansible/playbook/roles/install_istio/tasks/install_operator.yml
@@ -0,0 +1,43 @@
```yaml
- name: Generate Istio Operator Template
  template:
    src: istio_community_operator_template.yaml.j2
    dest: /tmp/istio_community_operator_template.yaml

- name: Generate Istio CRD file
  template:
    src: istio_crd.yaml.j2
    dest: /tmp/istio_crd.yaml

- name: Create namespace
  command: oc {{ openshift_env.oc_admin_kubeconfig_arg }} new-project istio-operator

- name: Create Istio Operator
  command: oc {{ openshift_env.oc_admin_kubeconfig_arg }} new-app -f /tmp/istio_community_operator_template.yaml -n istio-operator

- name: Wait for Istio Operator to run
  command: oc {{ openshift_env.oc_admin_kubeconfig_arg }} get pods --field-selector status.phase=Running -l name=istio-operator -o jsonpath='{.items[0].metadata.name}'
  register: operator
  until: operator.rc == 0
  delay: 10
  retries: 20

- name: Create Istio CRD
  command: oc {{ openshift_env.oc_admin_kubeconfig_arg }} create -f /tmp/istio_crd.yaml -n istio-operator

- name: Wait for the Istio CRD to be created
  command: oc {{ openshift_env.oc_admin_kubeconfig_arg }} get crd installations.istio.openshift.com -n istio-operator
  register: crd
  until: crd.rc == 0
  delay: 5
  retries: 5

- name: Wait for pod that performs the actual installation of Istio to run
  command: oc {{ openshift_env.oc_admin_kubeconfig_arg }} get pods --field-selector status.phase=Running -l job-name=openshift-ansible-istio-installer-job -o jsonpath='{.items[0].metadata.name}' -n istio-system
  register: install
  until: install.rc == 0
  delay: 10
  retries: 20
```
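Once the role reaches the last task above, the installer job's progress can also be followed by hand (a manual spot-check under the same assumptions as the tasks, i.e. the job runs in `istio-system`):

```bash
# Watch the pod created by the openshift-ansible-istio-installer-job
oc get pods -n istio-system -l job-name=openshift-ansible-istio-installer-job
```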
43 changes: 1 addition & 42 deletions ansible/playbook/roles/install_istio/tasks/main.yml
@@ -1,44 +1,3 @@
```diff
 - include_tasks: patch_master.yml

-- name: Clone Istio project
-  git:
-    repo: "{{ istio_git_repo }}"
-    dest: "{{ istio_repo_dest }}"
-    version: "{{ istio_git_branch }}"
-    force: yes
-  connection: local
-
-- debug:
-    msg: "Istio git project {{ istio_git_repo }}/{{ istio_git_branch }} cloned to {{ istio_repo_dest }}"
-
-- name: Config used to install Istio
-  debug:
-    msg:
-      - "Git istio repo: {{ istio_git_repo }}"
-      - "Git istio branch: {{ istio_git_branch }}"
-      - "Git istio download directory: {{ istio_repo_dest }}"
-      - "Git istio playbook directory: {{ istio_repo_dest }}/install/ansible"
-      - "Istio version to be installed: {{ istio.release_tag_name }}"
-      - "Using TLS: {{ istio.auth }}"
-      - "Target cloud platform: {{ cluster_flavour }}"
-      - "Destination to install istio client, examples,...: {{ istio.dest }}"
-      - "Namespace where istio will be installed: {{ istio.namespace }}"
-      - "Addons: {{ istio.addon }}"
-      - "Install samples: {{ istio.samples }}"
-
-- name: Create temporary symlink to istio playbook
-  file:
-    src: "{{ istio_repo_dest }}/install/ansible"
-    dest: "istio_playbook_src"
-    state: link
-  connection: local
-
-- name: Execute the Istio Role
-  include_role:
-    name: istio_playbook_src/istio
-
-- name: Delete temporary symlink to istio playbook
-  file:
-    path: "istio_playbook_src"
-    state: absent
-  connection: local
+- include_tasks: install_operator.yml
```