One or more target hosts must be prepared before the OpenStack cloud can be deployed on them. Preparation typically involves installing and configuring an operating system (e.g. Ubuntu, Debian, CentOS, or Rocky Linux), then configuring SSH keys by copying the contents of the deployment host's public key file to the /root/.ssh/authorized_keys
file on each target host, and finally configuring storage and networking.
Configure Ubuntu
- Update the package source lists:
# apt update
- Upgrade the system packages and kernel:
# apt dist-upgrade
- Install additional software packages:
# apt install bridge-utils debootstrap openssh-server tcpdump vlan python3
- Install the kernel extra package if one exists for your kernel version:
# apt install linux-modules-extra-$(uname -r)
Configure SSH keys
Ansible uses SSH to connect from the deployment host to the target hosts. Copy the contents of the public key file on the deployment host to the /root/.ssh/authorized_keys
file on each target host. Then test public key authentication by using SSH to connect to each target host from the deployment host.
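What the copy amounts to on the target side can be sketched as below. This is a local simulation, not a command to run on a real host: the temporary directory stands in for the target's /root, and the key line is a placeholder. In practice, `ssh-copy-id root@<target-host>` from the deployment host does the same job. Note that sshd rejects keys whose files have loose permissions, so the modes matter:

```shell
# Simulate the target-side layout locally; DEST stands in for the
# target host's /root and the key line below is a placeholder.
DEST=$(mktemp -d)
mkdir -p "$DEST/.ssh"
chmod 700 "$DEST/.ssh"
echo "ssh-ed25519 AAAAC3placeholder deploy@deployment-host" >> "$DEST/.ssh/authorized_keys"
chmod 600 "$DEST/.ssh/authorized_keys"
# sshd requires restrictive modes on both the directory and the file:
stat -c '%a %n' "$DEST/.ssh" "$DEST/.ssh/authorized_keys"
```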
Configure storage
Here, LVM is used to divide a single device into multiple logical volumes that appear as physical storage devices to the operating system. Block Storage (cinder) volumes and LXC containers can then use logical volume groups as storage. If the lxc
volume group does not exist, containers are automatically installed on the file system under /var/lib/lxc
by default.
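As a sketch, the volume groups could be created as follows on a target host with spare disks; the device names /dev/sdb and /dev/sdc are assumptions for illustration, so substitute your own devices. These commands destroy any data on the named devices:

```
# Volume group for Block Storage (cinder) volumes (device name is illustrative)
# pvcreate --metadatasize 2048 /dev/sdb
# vgcreate cinder-volumes /dev/sdb

# Optional volume group for LXC container file systems
# pvcreate --metadatasize 2048 /dev/sdc
# vgcreate lxc /dev/sdc
```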
Configure network
OpenStack-Ansible uses bridges to connect physical and logical network interfaces on the host to virtual network interfaces within containers.
- br-storage is needed on every compute node and storage node.
- br-vxlan is needed on every compute node and network node.
- br-mgmt is needed on every node.
The LXC internal host bridge (lxcbr0) is required for LXC, but OpenStack-Ansible configures it automatically. It provides external (typically Internet) connectivity to containers via dnsmasq (DHCP/DNS) and NAT, and attaches to eth0
in each container.
The container management bridge (br-mgmt) provides management of, and communication between, the infrastructure and OpenStack services. The bridge attaches to a physical or logical interface, typically a bond0
VLAN subinterface, and to eth1 in each container.
The storage network bridge (br-storage) provides segregated access between OpenStack services and Block Storage devices. The bridge attaches to a physical or logical interface, typically a bond0
VLAN subinterface, and to eth2
in each associated container.
The OpenStack Networking tunnel bridge (br-vxlan) is required if the environment is configured to allow projects to create virtual networks using VXLAN. It provides the interface for encapsulated virtual (VXLAN) tunnel network traffic. Note that br-vxlan
does not need to be a bridge at all; a physical interface or a bond VLAN subinterface can be used directly and is more efficient.
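As a sketch, two of the bridges above might be defined on an Ubuntu target host in /etc/network/interfaces. The interface names, VLAN IDs, and addresses here are assumptions for illustration only; adapt them to your own network plan:

```
# Management bridge on a VLAN subinterface (names and addresses are illustrative)
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond0.10
    address 172.29.236.11
    netmask 255.255.252.0

# Storage bridge (compute and storage nodes only)
auto br-storage
iface br-storage inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond0.20
    address 172.29.244.11
    netmask 255.255.252.0
```

The bridge_* options require the bridge-utils package installed earlier.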