This tool makes it easy to create pre-defined virtual topologies on a single server. The idea is to leverage high-end servers by spinning up virtual machines and using them as bare-metal servers, eliminating the need for physical servers, switches, routers, and their painful cabling and mis-configurations, so that high-end servers are used effectively. As a rule of thumb, each high-end physical server can host about 15 to 20 virtual machines acting as bare metals. VMs can be as fast as bare metals, so the difference between physical and virtual systems is blurred. The tool combines the power of Vagrant, VirtualBox, and Python to create virtual topologies, install Contrail, and provision the cluster. It makes the lives of developers and testers much easier by doing all of this from a simple YAML file. It can also assign floating IPs to the bare-metal instances so that they are accessible through the LAN.
- A high-end Xeon server with about 40 cores and at least 256 GB RAM
- Ubuntu 18.04 OS
git clone https://github.com/Jayesh-Popat/lab-in-a-server.git
cd lab-in-a-server
sudo ./installer.sh
Pull the latest code in the directory
cd <lab-in-a-server directory>
git pull
sudo ./installer.sh
A configuration file should be given as input when creating VMs. The configuration file has attributes specific to each topology.
create_lab create lab.yml
create_lab list
Lists all the topologies hosted on the machine, along with the template and working directory associated with each.
create_lab list --resources
This command lists the total and available memory on the host machine and the memory consumed by each topology.
create_lab show <topology_name>
<topology_name> is the unique name given at the time of creation. Displays all the resources assigned to the virtual machines in the topology.
Deallocate the resources assigned to the virtual machines
create_lab destroy <topology_name>
Retries building the entire topology with the same resources after a failure. Use this command when topology bring-up or Contrail installation fails. If any changes were made to the input file used at creation, destroy and recreate the topology instead.
create_lab rebuild <topology_name>
Turns on a topology that has been powered off.
create_lab poweron <topology_name>
Shuts down a running topology.
create_lab poweroff <topology_name>
- Dev-env
- All-in-one
- Three node setup
- Three node setup with VQFX
- Fabric (CRB leaf + spine)
- Edge compute
- Multi-compute
Every configuration file given at the time of topology creation should have these fields specified:
The template name specifies the topology; it should be one of [ devenv, all_in_one, three_node_vqfx, three_node ].
Unique name given for a deployment.
List of public IP addresses to be assigned to the virtual machines.
management_ip: '10.204.220.30'
netmask: '255.255.255.192'
gateway: '10.204.220.62'
When True, assigns a private IP address reachable from the host machine as the management IP. It is False by default.
input file - dev_env.yml
template : 'devenv'
name : dev
internal_network: False
branch: R1910
management_ip: 10.204.220.30
netmask: 255.255.255.192
gateway: 10.204.220.62
The branch of the contrail-dev-env repository that is checked out when creating dev-env.
input file - aio.yml
template: all_in_one
name: aio1
internal_network: True
contrail_version: "2005.61"
#management_ip:
#netmask:
#gateway
openstack_version: queens
registry: bng-artifactory
contrail_command: True
The virtual machines are provisioned with the given Contrail version. The value must be a string and should be enclosed in "" when the version tag contains only numbers.
contrail_version: "1910.30"
The value for this field should be one of [bng-artifactory, svl-artifactory, cirepo, nodei40, hub]. Images are pulled from the specified registry; bng-artifactory is the default.
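For example, to pull images from a non-default registry, set the field to one of the other allowed values (a sketch; svl-artifactory is taken from the list above):

```yaml
# Pull container images from the SVL registry instead of the default bng-artifactory
registry: svl-artifactory
```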
Openstack version is "queens" by default.
If this field is specified, contrail-command is installed and one of the IP addresses from the management_ip list is assigned to the node.
The template spins up 1 controller node and 2 compute nodes connected to a VQFX box.
input file - three_node_vqfx.yml
template : 'three_node_vqfx'
name : tnv-f
additional_control: 1
additional_compute: 1
dpdk_computes: 1
contrail_version: "2005.61"
registry: bng-artifactory
management_ip: ['10.204.220.31', '10.204.220.32', '10.204.220.33', '10.204.220.30', '10.204.220.34', '10.204.220.35']
netmask: 255.255.255.192
gateway: 10.204.220.62
openstack_version: queens
kolla_external_vip_address: 10.204.220.36
The number of additional control nodes to be provisioned. Zero by default.
The number of additional compute nodes to be provisioned. Zero by default.
The number of DPDK computes to be provisioned; zero by default. The value for this field should always be <= additional_compute + 2 (i.e., no more than the total number of compute nodes).
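As an illustration of the constraint (a sketch; the field values are hypothetical): with additional_compute: 2 the topology has 2 + 2 = 4 compute nodes in total, so dpdk_computes may be at most 4:

```yaml
additional_compute: 2   # 2 extra computes on top of the template's 2 => 4 total
dpdk_computes: 4        # valid: 4 <= additional_compute + 2
```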
This field is an IP address in the management subnet. Specify it when there are multiple controllers; it need not be specified when internal_network is True. Port 8143 is accessible on this interface, and Horizon is accessible on the management network through the kolla_external_vip_address.
The template spins up 1 controller node and 2 compute nodes.
input file - three_node.yml
template : three_node
name : tn-f
additional_compute: 1
additional_controller: 1
internal_network: True
contrail_version: "2005.61"
openstack_version: queens
dpdk_computes: 1
registry: bng-artifactory
contrail_command: True
If contrail_version is not specified during topology creation, the virtual machines still come up, but without Contrail.
Creates a tunnel to the host machine using port forwarding.
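A tunnel of this kind can also be set up manually with SSH local port forwarding (a sketch, not the tool's own mechanism; the user name "user", host name "labhost", and VM management IP 10.204.220.30 are assumptions for illustration):

```
# Forward local port 8143 on the workstation to port 8143 on the VM's
# management IP, via the physical host server running the topology.
ssh -L 8143:10.204.220.30:8143 user@labhost
```

With the tunnel up, the service listening on port 8143 of the VM is reachable at localhost:8143 on the workstation; the FoxyProxy setup guides linked below cover the browser-side proxy configuration.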
https://github.com/kirankn80/cfm-vagrant/blob/master/docs/FoxyProxy-Chrome-Setup.md
https://github.com/kirankn80/cfm-vagrant/blob/master/docs/FoxyProxy-FireFox-Setup.md