Deploy HW
Checklist:
- For single-node as well as multi-node deployment, ensure the HW node(s) are accessible over SSH.
- Do you see the expected devices when you run this command?
$ lsblk -S | grep SEAGATE | wc -l
If the expected number of devices is not found, run the following command and retry (a scripted version of this check is sketched below):
$ rescan-scsi-bus.sh
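If you want to automate the check, a minimal sketch is shown here; the EXPECTED_DEVICES value is an assumption and must be set to the drive count of your enclosure.
# EXPECTED_DEVICES is a placeholder; set it to your enclosure's drive count
EXPECTED_DEVICES=84
FOUND=$(lsblk -S | grep -c SEAGATE)
if [ "$FOUND" -lt "$EXPECTED_DEVICES" ]; then
    echo "Only $FOUND of $EXPECTED_DEVICES devices visible; rescanning SCSI bus"
    rescan-scsi-bus.sh
    lsblk -S | grep -c SEAGATE
fi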
- Does your setup have valid hostnames, and are those hostnames reachable? (A loop that checks every node is sketched below.)
$ ping <hostname>
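To check every node in one pass, a minimal sketch; the srvnode hostnames are assumptions, so substitute the hostnames of your own nodes.
# Hypothetical hostnames; replace with the nodes in your cluster
for host in srvnode-1.localdomain srvnode-2.localdomain srvnode-3.localdomain; do
    ping -c 1 "$host" > /dev/null && echo "$host reachable" || echo "$host NOT reachable"
done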
- Do you have IPs assigned to all NICs?
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:c8:91:ec brd ff:ff:ff:ff:ff:ff
    inet 10.xxx.yyy.zzz/20 brd 10.xxx.255.255 scope global noprefixroute dynamic eno1
       valid_lft 342949sec preferred_lft 342949sec
    inet6 fe80::ae1f:6bff:fec8:91ec/64 scope link
       valid_lft forever preferred_lft forever
3: enp175s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 98:03:9b:06:a6:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.lll.mmm/19 brd 192.168.nnn.255 scope global dynamic enp175s0f0
       valid_lft 27277sec preferred_lft 27277sec
    inet6 fe80::9a03:9bff:fe06:a64c/64 scope link
       valid_lft forever preferred_lft forever
4: enp175s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 98:03:9b:06:a6:4d brd ff:ff:ff:ff:ff:ff
    inet 192.168.ooo.ppp/19 brd 192.168.qqq.255 scope global dynamic enp175s0f1
       valid_lft 32354sec preferred_lft 32354sec
    inet6 fe80::9a03:9bff:fe06:a64d/64 scope link
       valid_lft forever preferred_lft forever
5: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:c8:91:ed brd ff:ff:ff:ff:ff:ff
6: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether c2:01:e2:4e:b9:27 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/24 scope global tap0
       valid_lft forever preferred_lft forever
    inet6 fe80::c001:e2ff:fe4e:b927/64 scope link
       valid_lft forever preferred_lft forever
- Identify the primary node and run the commands below on the primary node.
NOTE: For a single-node setup, the node itself is treated as the primary node.
- Set root user password on all nodes:
sudo passwd root
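If you prefer to set the password non-interactively on each node, a minimal sketch using chpasswd is shown here; it assumes you can already reach each node over SSH as a privileged user, and both the hostname list and the password placeholder must be replaced with your own values.
# Hypothetical node list and placeholder password; replace both before use
for host in srvnode-1.localdomain srvnode-2.localdomain srvnode-3.localdomain; do
    ssh "$host" "echo 'root:<new-root-password>' | chpasswd"
done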
- Download ISOs and shell-scripts
NOTE: Contact the Cortx RE team for the latest ISO.
mkdir /opt/isos
NOTE: If you are outside the Seagate corporate network, download these files manually to the /opt/isos directory and skip to the Prepare cortx-prvsnr API step.
Get the ISO download URL from the Cortx RE team. The URL should host the following files:
- cortx-2.0.0-*-single.iso
- cortx-os-1.0.0-*.iso or centos-7.8-minimal.iso
- cortx-prep.sh
# Set source URL
# It should have the following: cortx-2.0.0-*-single.iso, cortx-os-2.0.0-*.iso
CORTX_RELEASE_REPO=<URL to Cortx ISO hosting>

# Download Single ISO
pushd /opt/isos
SINGLE_ISO=$(curl -s ${CORTX_RELEASE_REPO}/iso/ | sed 's/<\/*[^>]*>//g' | cut -f1 -d' ' | grep 'single.iso')
curl -O ${CORTX_RELEASE_REPO}/iso/${SINGLE_ISO}

# Download OS ISO
OS_ISO=$(curl -s ${CORTX_RELEASE_REPO}/iso/ | sed 's/<\/*[^>]*>//g' | cut -f1 -d' ' | grep "cortx-os")
curl -O ${CORTX_RELEASE_REPO}/iso/${OS_ISO}

# Download cortx_prep script
curl -O https://raw.githubusercontent.com/Seagate/cortx-prvsnr/stable/cli/src/cortx_prep.sh
popd
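Before moving on, you may want to confirm that both ISOs and the prep script actually landed in /opt/isos; a minimal check is sketched here.
# List what was downloaded and flag anything missing or zero bytes
ls -lh /opt/isos
for f in /opt/isos/*.iso /opt/isos/cortx-prep.sh; do
    [ -s "$f" ] || echo "WARNING: $f is missing or empty"
done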
- Prepare cortx-prvsnr API
pushd /opt/isos
# Execute cortx-prep script
chmod +x /opt/isos/cortx-prep.sh
sh /opt/isos/cortx-prep.sh
popd
- Verify provisioner version (0.36.0 and above):
provisioner --version
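If you want this check to fail loudly in a script, a sketch that compares the reported version against 0.36.0 using sort -V is shown here; it assumes the command prints a semantic version (x.y.z) somewhere in its output.
# Extract the x.y.z version string and compare against the minimum
REQUIRED=0.36.0
CURRENT=$(provisioner --version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
if [ "$(printf '%s\n%s\n' "$REQUIRED" "$CURRENT" | sort -V | head -n1)" = "$REQUIRED" ]; then
    echo "provisioner $CURRENT meets the minimum $REQUIRED"
else
    echo "provisioner $CURRENT is older than $REQUIRED; upgrade before continuing"
fi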
- Identify storage volumes:
cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf
systemctl restart multipathd
device_list=$(multipath -ll|grep mpath|sort -k2|cut -d' ' -f1|sed 's|mpath|/dev/disk/by-id/dm-name-mpath|g'|paste -s -d, -)
# Metadata device
echo ${device_list%%,*}
# Data devices
echo ${device_list#*,}
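For reference, the first entry becomes storage.cvg.0.metadata_devices and the remaining entries become storage.cvg.0.data_devices. A sketch of what the two echo commands above might print is shown here; the mpath letters are illustrative and yours will differ.
# Metadata device (first entry in device_list)
/dev/disk/by-id/dm-name-mpathb
# Data devices (all remaining entries, comma separated)
/dev/disk/by-id/dm-name-mpathc,/dev/disk/by-id/dm-name-mpathd,/dev/disk/by-id/dm-name-mpathe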
- Create config.ini file at a convenient location:
IMPORTANT NOTE: Verify every detail in this file against your node before proceeding.
Verify that the interface names are correct for your node.
Update the required details in ~/config.ini using the sample config.ini below. A device sanity-check sketch follows the samples.
Use the storage volumes from the commands above to fill in values for storage.cvg.0.data_devices and storage.cvg.0.metadata_devices.
vi ~/config.ini
Sample config.ini for a single-node setup:

[cluster]
cluster_ip=
mgmt_vip=

[srvnode_default]
network.data.private_interfaces=enp175s0f1
network.data.public_interfaces=enp175s0f0
network.mgmt.interfaces=eno1
bmc.user=ADMIN
bmc.secret=bmcADMIN!
storage.cvg.0.data_devices=/dev/disk/by-id/dm-name-mpathc,/dev/disk/by-id/dm-name-mpathd,/dev/disk/by-id/dm-name-mpathe,/dev/disk/by-id/dm-name-mpathf,/dev/disk/by-id/dm-name-mpathg,/dev/disk/by-id/dm-name-mpathh,/dev/disk/by-id/dm-name-mpathi,/dev/disk/by-id/dm-name-mpathj,/dev/disk/by-id/dm-name-mpathk
storage.cvg.0.metadata_devices=/dev/disk/by-id/dm-name-mpathb

[srvnode-1]
hostname=srvnode-1.localdomain
roles=primary,openldap_server

[enclosure_default]
type=RBOD
controller.primary.ip=10.0.0.2
controller.secondary.ip=10.0.0.3
controller.user=manage
controller.secret=

[enclosure-1]
Sample config.ini for a three-node setup:

[cluster]
cluster_ip=
mgmt_vip=

[srvnode_default]
network.data.private_interfaces=enp175s0f1
network.data.public_interfaces=enp175s0f0
network.mgmt.interfaces=eno1
bmc.user=ADMIN
bmc.secret=bmcADMIN!

[srvnode-1]
hostname=srvnode-1.localdomain
roles=primary,openldap_server
storage.cvg.0.data_devices=/dev/disk/by-id/dm-name-mpathc,/dev/disk/by-id/dm-name-mpathd,/dev/disk/by-id/dm-name-mpathe,/dev/disk/by-id/dm-name-mpathf,/dev/disk/by-id/dm-name-mpathg,/dev/disk/by-id/dm-name-mpathh,/dev/disk/by-id/dm-name-mpathi,/dev/disk/by-id/dm-name-mpathj,/dev/disk/by-id/dm-name-mpathk
storage.cvg.0.metadata_devices=/dev/disk/by-id/dm-name-mpathb

[srvnode-2]
hostname=srvnode-2.localdomain
roles=primary,openldap_server
storage.cvg.0.data_devices=/dev/disk/by-id/dm-name-mpathc,/dev/disk/by-id/dm-name-mpathd,/dev/disk/by-id/dm-name-mpathe,/dev/disk/by-id/dm-name-mpathf,/dev/disk/by-id/dm-name-mpathg,/dev/disk/by-id/dm-name-mpathh,/dev/disk/by-id/dm-name-mpathi,/dev/disk/by-id/dm-name-mpathj,/dev/disk/by-id/dm-name-mpathk
storage.cvg.0.metadata_devices=/dev/disk/by-id/dm-name-mpathb

[srvnode-3]
hostname=srvnode-3.localdomain
roles=primary,openldap_server
storage.cvg.0.data_devices=/dev/disk/by-id/dm-name-mpathc,/dev/disk/by-id/dm-name-mpathd,/dev/disk/by-id/dm-name-mpathe,/dev/disk/by-id/dm-name-mpathf,/dev/disk/by-id/dm-name-mpathg,/dev/disk/by-id/dm-name-mpathh,/dev/disk/by-id/dm-name-mpathi,/dev/disk/by-id/dm-name-mpathj,/dev/disk/by-id/dm-name-mpathk
storage.cvg.0.metadata_devices=/dev/disk/by-id/dm-name-mpathb

[enclosure_default]
type=RBOD
controller.primary.ip=10.0.0.2
controller.secondary.ip=10.0.0.3
controller.user=manage
controller.secret=

[enclosure-1]

[enclosure-2]

[enclosure-3]
NOTE: private_ip, bmc_secret, and bmc_user should be None for a VM.
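Once ~/config.ini is filled in, a small sanity check can catch typos in the device paths before deployment. The sketch below simply verifies that every device listed under data_devices and metadata_devices exists on the node; it assumes the config lives at ~/config.ini.
# Extract every device path from the storage entries and confirm it exists
grep -E 'storage\.cvg\.[0-9]+\.(data|metadata)_devices=' ~/config.ini \
  | cut -d'=' -f2 | tr ',' '\n' | sort -u \
  | while read -r dev; do
      [ -e "$dev" ] && echo "OK      $dev" || echo "MISSING $dev"
    done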
Manual deployment of HW consists of the following steps from Auto-Deploy, which can be executed individually:
NOTE: Ensure the VM Preparation for Deployment steps have been completed successfully before proceeding.
- Bootstrap HW(s): Run the setup_provisioner provisioner CLI command.
NOTE: Run this command first as a temporary HW patch:
sed -i "s|'components.system.storage.enclosure_id',|#'components.system.storage.enclosure_id',|g" /usr/lib/python3.6/site-packages/provisioner/commands/setup_provisioner.py
For a single-node setup:
provisioner setup_provisioner \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
    --source iso --config-path ~/config.ini \
    --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
    srvnode-1:$(hostname -f)
For a three-node setup:
provisioner setup_provisioner \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
    --ha --source iso --config-path ~/config.ini \
    --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
    srvnode-1:<fqdn:primary_hostname> \
    srvnode-2:<fqdn:secondary_hostname> \
    srvnode-3:<fqdn:secondary_hostname>
Example:
provisioner setup_provisioner \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log \
    --ha --source iso --config-path ~/config.ini \
    --iso-cortx ${SINGLE_ISO} --iso-os ${OS_ISO} \
    srvnode-1:host1.localdomain srvnode-2:host2.localdomain srvnode-3:host3.localdomain
NOTE:
- This command will ask for each node's root password during the initial cluster setup. This is a one-time activity required to set up passwordless SSH across the nodes.
- Update pillar and export pillar data for confstore:
provisioner configure_setup /root/config.ini <number of nodes in cluster>
salt-call state.apply components.system.config.pillar_encrypt
salt-call state.apply components.system.storage.enclosure_id
salt-call state.apply components.system.config.sync_salt
salt-call state.apply components.provisioner.config.generate_cluster_pillar
provisioner confstore_export
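To confirm the pillar data reached every node, you can spot-check a value over salt. The pillar key used below (cluster) is an assumption about what is populated in your build; substitute whichever key you filled in config.ini.
# Spot-check that pillar data is visible on every minion
salt '*' pillar.get cluster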
- Bootstrap Validation: Once the deployment bootstrap (auto_deploy or setup_provisioner) command has executed successfully, verify the salt master setup on all nodes (setup verification checklist):
salt '*' test.ping
salt "*" service.stop puppet
salt "*" service.disable puppet
salt '*' pillar.get release
salt '*' grains.get node_id
salt '*' grains.get machine_id
salt '*' grains.get enclosure_id
salt '*' grains.get roles
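For reference, a healthy response to salt '*' test.ping looks roughly like the following; the minion IDs shown are whatever node names your setup registered.
srvnode-1:
    True
srvnode-2:
    True
srvnode-3:
    True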
- Deployment Based On Component Groups:
Follow the steps below if provisioner setup is completed and you want to deploy in stages based on component group. A sketch that chains all the groups in sequence for a single node follows the group list.
NOTE: At any stage, if there is a failure, it is advised to run destroy for that particular group. For help on destroy commands, refer to https://github.com/Seagate/cortx-prvsnr/wiki/Teardown-Node(s)#targeted-teardown
- System component group
Single Node:
provisioner deploy --setup-type single --states system
Multi Node:
provisioner deploy --states system
- Prereq component group
Single Node:
provisioner deploy --setup-type single --states prereq
Multi Node:
provisioner deploy --states prereq
- Utils component group
Single Node:
provisioner deploy --setup-type single --states utils
Multi Node:
provisioner deploy --setup-type 3_node --states utils
- IO path component group
Single Node:
provisioner deploy --setup-type single --states iopath
Multi Node:
provisioner deploy --states iopath
- Control path component group
Single Node:
provisioner deploy --setup-type single --states controlpath
Multi Node:
provisioner deploy --states controlpath
- HA component group
Single Node:
provisioner deploy --setup-type single --states ha
Multi Node:
provisioner deploy --states ha
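If you simply want to run every group above in order on a single node, a minimal wrapper sketch is shown here; it stops at the first failure so you can run the targeted teardown for that group, as noted above. It uses only the single-node flags shown in this section.
#!/bin/bash
# Run each component group in order on a single node; abort on the first failure
set -e
for state in system prereq utils iopath controlpath ha; do
    echo "Deploying component group: $state"
    provisioner deploy --setup-type single --states "$state"
done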
- Execute the following command on the primary node to start the cluster:
cortx cluster start
- Verify Cortx cluster status:
hctl status