Merge branch 'pr/12' into devel
Conflicts:
	roles/cinder_api/templates/cinder.conf-2.j2
	roles/cinder_api/templates/cinder.conf.j2
	roles/cinder_volume_ceph/templates/cinder.conf-2.j2
	roles/cinder_volume_ceph/templates/cinder.conf.j2
	roles/compute_controller/templates/nova.conf-2.j2
	roles/compute_controller/templates/nova.conf.j2
	roles/glance/templates/glance-api.conf.j2
	roles/glance/templates/glance-registry.conf.j2
	roles/haproxy/tasks/configure_haproxy.yml
taiojia committed Aug 5, 2015
2 parents cce78ef + ade96e9 commit bca81ed
Showing 12 changed files with 16 additions and 204 deletions.
198 changes: 5 additions & 193 deletions README.md
@@ -23,196 +23,6 @@ The inventory file at `inventory/inventory`, the default setting is the Vagrant
The file `vars/openstack/openstack.yml` holds all of the deployment parameters.
* openstack.yml
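The variable names referenced throughout the role templates (for example `{{ VIP_MGMT }}`, `{{ VIP_DB }}`, `{{ controller01_mgmt_ip }}`, and the various `*_db_password` values) are defined in this file; an illustrative sketch with placeholder values only:

    # vars/openstack/openstack.yml -- placeholder values, adjust for your environment
    VIP_MGMT: 172.16.33.2
    VIP_DB: 172.16.33.3
    controller01_mgmt_ip: 172.16.33.5
    controller02_mgmt_ip: 172.16.33.6
    glance_host: 172.16.33.2
    keystone_db_password: changeme
    glance_db_password: changeme
    nova_db_password: changeme
    cinder_db_password: changeme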

#### Configure storage network
playback openstack_interfaces.yml --extra-vars \"node_name=controller01 storage_ip=192.168.1.12 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255\"
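For reference, the four values above describe a standard Debian static interface; the rendered stanza would look roughly like the following (illustrative only — the actual template is not part of this commit, and the interface name eth2 is an assumption):

    auto eth2
    iface eth2 inet static
        address 192.168.1.12
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255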

#### To deploy OpenStack Basic environment including NTP and OpenStack repository
playback openstack_basic_environment.yml

#### To deploy database and messaging queues
playback openstack_basic_database_messaging_single.yml

#### To deploy Keystone
playback openstack_keystone.yml

#### To deploy Glance
The Glance default store is file.

playback openstack_glance.yml
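With the file store, the relevant glance-api.conf settings look roughly like this (illustrative only; the data directory shown is the Glance default, not necessarily what this repository renders):

    [DEFAULT]
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/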

#### To deploy a compute controller
playback openstack_compute_controller.yml

#### To deploy compute nodes
playback openstack_compute_node.yml --extra-vars \"compute_name=compute1 compute_ip=172.16.33.7\"

#### To deploy a neutron controller (GRE Only)
playback openstack_neutron_controller.yml --extra-vars \"nova_admin_service_tenant_id=6aea60400e6246edaa83d508b222d2eb\"

###### To obtain the service tenant identifier (id) at the controller
$ source admin-openrc.sh
$ keystone tenant-get service
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | 6aea60400e6246edaa83d508b222d2eb |
| name | service |
+-------------+----------------------------------+
### To deploy a neutron node (GRE Only)
playback openstack_neutron_node.yml --extra-vars \"local_tunnel_net_ip=192.168.11.6\"

### To deploy a neutron compute (GRE Only)
playback openstack_neutron_compute.yml --extra-vars \"compute_name=compute1 compute_ip=172.16.33.7 local_tunnel_net_ip=192.168.11.7\"

### Initial networks (GRE Only)
playback openstack_initial_networks.yml

### Add Dashboard
playback openstack_horizon.yml

### To deploy a cinder controller
playback openstack_cinder_controller.yml

### To deploy a cinder-volume on each cinder storage node (LVM Only)
playback openstack_cinder_volume_lvm.yml --extra-vars \"my_ip=172.16.33.5\"
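The LVM driver expects a volume group named cinder-volumes to already exist on each storage node; a minimal preparation sketch (the device /dev/sdb1 is only an example):

    sudo pvcreate /dev/sdb1
    sudo vgcreate cinder-volumes /dev/sdb1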

### To deploy a swift proxy
playback openstack_swift_controller.yml


### To deploy a swift storage

playback openstack_swift_node.yml --extra-vars \"swift_storage_name=swiftstore1 my_management_ip=172.16.33.8 my_storage_network_ip=172.16.44.8\"

### Initial swift rings
playback openstack_swift_builder_file.yml
playback openstack_swift_add_node_to_the_ring.yml --extra-vars \"swift_storage_mgmt_ip=172.16.33.8 device_name=sdb1 device_weight=100\"
playback openstack_swift_add_node_to_the_ring.yml --extra-vars \"swift_storage_mgmt_ip=172.16.33.8 device_name=sdc1 device_weight=100\"
playback openstack_swift_add_node_to_the_ring.yml --extra-vars \"swift_storage_mgmt_ip=172.16.33.9 device_name=sdb1 device_weight=100\"
playback openstack_swift_add_node_to_the_ring.yml --extra-vars \"swift_storage_mgmt_ip=172.16.33.9 device_name=sdc1 device_weight=100\"
playback openstack_swift_add_node_to_the_ring.yml --extra-vars \"swift_storage_mgmt_ip=172.16.33.10 device_name=sdb1 device_weight=100\"
playback openstack_swift_add_node_to_the_ring.yml --extra-vars \"swift_storage_mgmt_ip=172.16.33.10 device_name=sdc1 device_weight=100\"
playback openstack_swift_rebalance_ring.yml
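Before distributing them, the ring contents can be sanity-checked with swift-ring-builder (assuming the builder files live in /etc/swift on the build node):

    cd /etc/swift
    swift-ring-builder account.builder
    swift-ring-builder container.builder
    swift-ring-builder object.builder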

### Distribute ring configuration files
Copy the `account.ring.gz`, `container.ring.gz`, and `object.ring.gz` files to the `/etc/swift` directory on each storage node and any additional nodes running the proxy service.
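For example, from the node holding the ring builder files (the host names and the use of root SSH access are assumptions):

    cd /etc/swift
    for host in swiftstore1 swiftstore2 swiftstore3; do
        scp account.ring.gz container.ring.gz object.ring.gz root@${host}:/etc/swift/
    done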

### Finalize swift installation
playback openstack_swift_finalize_installation.yml --extra-vars \"hosts=swift_proxy\"
playback openstack_swift_finalize_installation.yml --extra-vars \"hosts=swift_storage\"

### Switch glance backend
Switch glance backend to swift

playback openstack_switch_glance_backend_to_swift.yml

Switch glance backend to file

playback openstack_switch_glance_backend_to_file.yml

### To deploy the Orchestration components (heat)
playback openstack_heat_controller.yml
### To deploy the Ceph admin node
Ensure the admin node has passwordless SSH access to the Ceph nodes. When ceph-deploy logs in to a Ceph node as a user, that user must have passwordless sudo privileges.

Create the ceph user on each Ceph node

adduser ceph
echo ceph ALL = \(root\) NOPASSWD:ALL | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph

Copy the SSH public key from the Ceph admin node to each Ceph node

ssh-keygen
ssh-copy-id ceph@node

Deploy the Ceph admin node

playback openstack_ceph_admin.yml -u ceph

### To deploy the Ceph initial monitor
playback openstack_ceph_initial_mon.yml -u ceph

### To deploy the Ceph clients
playback openstack_ceph_client.yml -u username --extra-vars \"client=maas\"
playback openstack_ceph_client.yml -u username --extra-vars \"client=compute01\"
playback openstack_ceph_client.yml -u username --extra-vars \"client=compute02\"
playback openstack_ceph_client.yml -u username --extra-vars \"client=compute03\"
playback openstack_ceph_client.yml -u username --extra-vars \"client=compute04\"
playback openstack_ceph_client.yml -u username --extra-vars \"client=compute05\"
playback openstack_ceph_client.yml -u username --extra-vars \"client=controller01\"
playback openstack_ceph_client.yml -u username --extra-vars \"client=controller02\"

### To add Ceph initial monitor(s) and gather the keys
playback openstack_ceph_gather_keys.yml -u ceph

### To add Ceph OSDs
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute01 disk=sdb partition=sdb1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute01 disk=sdc partition=sdc1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute02 disk=sdb partition=sdb1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute02 disk=sdc partition=sdc1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute03 disk=sdb partition=sdb1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute03 disk=sdc partition=sdc1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute04 disk=sdb partition=sdb1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute04 disk=sdc partition=sdc1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute05 disk=sdb partition=sdb1\"
playback openstack_ceph_osd.yml -u ceph --extra-vars \"node=compute05 disk=sdc partition=sdc1\"

### To add Ceph monitors
playback openstack_ceph_mon.yml -u ceph --extra-vars \"node=compute01\"
playback openstack_ceph_mon.yml -u ceph --extra-vars \"node=compute02\"
playback openstack_ceph_mon.yml -u ceph --extra-vars \"node=compute03\"
playback openstack_ceph_mon.yml -u ceph --extra-vars \"node=compute04\"
playback openstack_ceph_mon.yml -u ceph --extra-vars \"node=compute05\"

### To copy the Ceph keys to nodes
Copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

playback openstack_ceph_copy_keys.yml -u ceph --extra-vars \"node=compute01\"
playback openstack_ceph_copy_keys.yml -u ceph --extra-vars \"node=compute02\"
playback openstack_ceph_copy_keys.yml -u ceph --extra-vars \"node=compute03\"
playback openstack_ceph_copy_keys.yml -u ceph --extra-vars \"node=compute04\"
playback openstack_ceph_copy_keys.yml -u ceph --extra-vars \"node=compute05\"
playback openstack_ceph_copy_keys.yml -u ceph --extra-vars \"node=controller01\"
playback openstack_ceph_copy_keys.yml -u ceph --extra-vars \"node=controller02\"

### Create the cinder ceph user and pool name
playback openstack_ceph_cinder_pool_user.yml -u ceph

Copy ceph.client.cinder.keyring from the ceph-admin node to /etc/ceph/ceph.client.cinder.keyring on the cinder volume node so that it can use the Ceph client.

ssh ubuntu@controller01 sudo mkdir /etc/ceph
ceph auth get-or-create client.cinder | ssh ubuntu@controller01 sudo tee /etc/ceph/ceph.client.cinder.keyring

### Install cinder-volume on controller node (Ceph Only)
playback openstack_cinder_volume_ceph.yml

### Install Legacy networking nova-network (FlatDHCP Only)
playback openstack_nova_network_controller.yml
playback openstack_nova_network_compute.yml --extra-vars \"compute_name=compute1 compute_ip=172.16.33.7\"

Create the initial network. For example, use an exclusive slice of 203.0.113.0/24; the /29 below covers the IP address range 203.0.113.24 to 203.0.113.31 (eight addresses):

nova network-create demo-net --bridge br100 --multi-host T --fixed-range-v4 203.0.113.24/29
nova floating-ip-bulk-create --pool demo-net 10.32.151.65/26
nova floating-ip-bulk-list

Extend the demo-net pool:

nova floating-ip-bulk-create --pool demo-net 10.32.151.129/26
nova floating-ip-bulk-list

### Apt mirror
For maas nodes:

playback openstack_maas_apt_mirror.yml

For cloud instances:

playback openstack_cloud_apt_mirror.yml


# For OpenStack HA
@@ -266,8 +76,8 @@ Each of the swift nodes, /dev/sdb1 and /dev/sdc1, must contain a suitable partit
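Preparing such a partition typically means formatting it as XFS and mounting it under /srv/node, for example (a sketch only — Swift does not strictly require XFS, but it is the usual choice, and the mount point assumed here is the conventional one):

    sudo mkfs.xfs /dev/sdb1
    sudo mkdir -p /srv/node/sdb1
    sudo mount /dev/sdb1 /srv/node/sdb1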

### Swift Proxy
playback openstack_swift_proxy.yml --extra-vars \"host=controller01\" -vvvv
playback openstack_swift_proxy.yml --extra-vars \"host=controller02\" -vvvv

playback openstack_swift_proxy.yml --extra-vars \"host=controller02\" -vvvv
### Initial swift rings
playback openstack_swift_builder_file.yml -vvvv
playback openstack_swift_add_node_to_the_ring.yml --extra-vars \"swift_storage_storage_ip=192.168.1.16 device_name=sdb1 device_weight=100\" -vvvv
@@ -380,6 +190,8 @@ Copy the ceph.client.cinder.keyring from ceph-admin node to /etc/ceph/ceph.clien
playback openstack_compute_node.yml --extra-vars \"host=compute05 my_ip=10.32.151.12\" -vvvv
playback openstack_compute_node.yml --extra-vars \"host=compute06 my_ip=10.32.151.14\" -vvvv
playback openstack_compute_node.yml --extra-vars \"host=compute07 my_ip=10.32.151.23\" -vvvv



### Install Legacy networking nova-network (FlatDHCP Only)
playback openstack_nova_network_compute.yml --extra-vars \"host=compute01 my_ip=10.32.151.16\" -vvvv
@@ -405,4 +217,4 @@ Extend the demo-net pool:
### Orchestration components (heat)
playback openstack_heat_controller.yml --extra-vars \"host=controller01\" -vvvv
playback openstack_heat_controller.yml --extra-vars \"host=controller02\" -vvvv
2 changes: 1 addition & 1 deletion roles/cinder_api/templates/cinder.conf-2.j2
@@ -17,7 +17,7 @@ my_ip = {{ controller02_mgmt_ip }}
glance_host = {{ glance_host }}

[database]
-connection = mysql://cinder:{{ cinder_db_password }}@{{ VIP_DB }}/cinder
+connection = mysql://cinder:{{ cinder_db_password }}@127.0.0.1/cinder

[keystone_authtoken]
auth_uri = http://{{ VIP_MGMT }}:5000/v2.0
2 changes: 1 addition & 1 deletion roles/cinder_api/templates/cinder.conf.j2
@@ -17,7 +17,7 @@ my_ip = {{ controller01_mgmt_ip }}
glance_host = {{ glance_host }}

[database]
-connection = mysql://cinder:{{ cinder_db_password }}@{{ VIP_DB }}/cinder
+connection = mysql://cinder:{{ cinder_db_password }}@127.0.0.1/cinder

[keystone_authtoken]
auth_uri = http://{{ VIP_MGMT }}:5000/v2.0
2 changes: 1 addition & 1 deletion roles/cinder_volume_ceph/templates/cinder.conf-2.j2
@@ -25,7 +25,7 @@ my_ip = {{ controller02_mgmt_ip }}
glance_host = {{ glance_host }}

[database]
-connection = mysql://cinder:{{ cinder_db_password }}@{{ VIP_DB }}/cinder
+connection = mysql://cinder:{{ cinder_db_password }}@127.0.0.1/cinder

[keystone_authtoken]
auth_uri = http://{{ VIP_MGMT }}:5000/v2.0
2 changes: 1 addition & 1 deletion roles/cinder_volume_ceph/templates/cinder.conf.j2
@@ -25,7 +25,7 @@ my_ip = {{ controller01_mgmt_ip }}
glance_host = {{ glance_host }}

[database]
-connection = mysql://cinder:{{ cinder_db_password }}@{{ VIP_DB }}/cinder
+connection = mysql://cinder:{{ cinder_db_password }}@127.0.0.1/cinder

[keystone_authtoken]
auth_uri = http://{{ VIP_MGMT }}:5000/v2.0
2 changes: 1 addition & 1 deletion roles/compute_controller/templates/nova.conf-2.j2
@@ -24,7 +24,7 @@ network_api_class = nova.network.api.API
security_group_api = nova

[database]
-connection = mysql://nova:{{ nova_db_password }}@{{ VIP_DB }}/nova
+connection = mysql://nova:{{ nova_db_password }}@127.0.0.1/nova

[keystone_authtoken]
auth_uri = {{ auth_uri }}
2 changes: 1 addition & 1 deletion roles/compute_controller/templates/nova.conf.j2
@@ -24,7 +24,7 @@ network_api_class = nova.network.api.API
security_group_api = nova

[database]
-connection = mysql://nova:{{ nova_db_password }}@{{ VIP_DB }}/nova
+connection = mysql://nova:{{ nova_db_password }}@127.0.0.1/nova

[keystone_authtoken]
auth_uri = {{ auth_uri }}
2 changes: 1 addition & 1 deletion roles/glance/templates/glance-api.conf.j2
@@ -303,7 +303,7 @@ backend = sqlalchemy
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
-connection = mysql://glance:{{ glance_db_password }}@{{ VIP_DB }}/glance
+connection = mysql://glance:{{ glance_db_password }}@127.0.0.1/glance

# The SQL mode to be used for MySQL sessions. This option,
# including the default, overrides any server-set SQL mode. To
2 changes: 1 addition & 1 deletion roles/glance/templates/glance-registry.conf.j2
@@ -142,7 +142,7 @@ backend = sqlalchemy
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
-connection = mysql://glance:{{ openstack_glance_pass }}@{{ VIP_DB }}/glance
+connection = mysql://glance:{{ openstack_glance_pass }}@127.0.0.1/glance

# The SQL mode to be used for MySQL sessions. This option,
# including the default, overrides any server-set SQL mode. To
2 changes: 1 addition & 1 deletion roles/haproxy/tasks/configure_haproxy.yml
@@ -28,5 +28,5 @@
notify: Restart haproxy
# TODO pkill the haproxy process and start haproxy, because restart haproxy has a multi-process bug.
#- name: Kill haproxy
-# shell: pkill haproxy
+# command: pkill haproxy
# notify: Start haproxy
2 changes: 1 addition & 1 deletion roles/keystone/tasks/main.yml
@@ -28,4 +28,4 @@
when: "host == 'controller01'"
- include: create_the_service_entity_and_API_endpoint.yml
when: "host == 'controller01'"
- include: create_openstack_client_environment_scripts.yml
- include: create_openstack_client_environment_scripts.yml
2 changes: 1 addition & 1 deletion roles/keystone/templates/keystone.conf.j2
@@ -630,7 +630,7 @@ log_dir=/var/log/keystone
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
-connection=mysql://keystone:{{ keystone_db_password }}@{{ VIP_MGMT }}/keystone
+connection=mysql://keystone:{{ keystone_db_password }}@127.0.0.1/keystone

# The SQLAlchemy connection string to use to connect to the
# slave database. (string value)
