OpenContrail – Quick Start Guide

Software Installation guide for OpenContrail.

(for version 1.1 integration with Openstack)

1       Overview

OpenContrail is a network virtualization platform for the cloud, designed with scale-out in mind. OpenContrail software is divided into functional groups called Nodes. A 'Node' in OpenContrail is a logical construct and does not necessarily imply a physical device, server, or VM; however, each Node can be deployed on its own physical server or VM. Multiple instances of each Node may be deployed for high availability and redundancy. To automate the provisioning and installation of OpenContrail Nodes, the relevant provisioning and installation scripts are provided.

The following is a list of Nodes in OpenContrail:

  • DB node: This node runs the Cassandra database and Zookeeper.
  • Configuration node: This node runs the Neutron server, configuration API server, IF-MAP server, discovery server, and configuration-related services.
  • Analytics node: This node runs the analytics data collector, the operational and analytics API server, and the query engine.
  • Web UI node: This node runs the web server and web job server.
  • Control node: This node runs the BGP speaker, DNS server, and named.
  • Compute node: This node runs the vRouter (a kernel loadable module) and a user-space vRouter agent, along with the Openstack compute services.


1.1  Architecture diagram

[Figure: OpenContrail architecture]

1.2    Physical Network requirements

[Figure: Physical network requirements]

In a typical data-center deployment of OpenContrail, every server connects to three networks:

  1. IP-fabric: This is the physical underlay network on top of which the virtual overlay network is implemented. All control and data traffic goes over this network. Typically, the IP-fabric is isolated from all other networks for security reasons.
  2. Management: This network is used to manage the servers themselves. Usually this is the network on which SSH to the servers is allowed. Since OpenContrail software does not use the Management network, it is optional.
  3. IPMI network: The IPMI management interface of each server is connected to this network. Like the Management network, OpenContrail software does not use this network either.

Jump hosts are a set of special servers that connect to the External network, Management network, and IPMI network. Typically, server management may also reside on these servers.

A Gateway is a device that sits on both the IP-fabric and the external network. It provides a mechanism to connect external networks to the virtual networks. Usually a Gateway is a BGP L3VPN-capable hardware device such as a Juniper MX router or a Cisco ASR 9000. OpenContrail also provides the ability to configure a server as a software gateway (https://github.com/Juniper/contrail-controller/wiki/Simple-Gateway).

1.3    OpenContrail packages and how to get them

Package naming convention (examples are w.r.t. Ubuntu)

A node package has the following naming convention:

contrail-<node>_<release>-<build number>_<arch>.deb

Example: contrail-analytics_1.10-18_amd64.deb

A wrapper package, which pulls in all relevant packages for a given deployment, has the following naming convention:

contrail-<orchestration>-<node>_<release>-<build-number>_all.deb

 Example: contrail-openstack-analytics_1.10-18_all.deb

 For the convenience of demos, a wrapper package containing all the contrail packages and dependent packages (including Openstack) is also built:

contrail-install-packages_<release>-<build-number>~<orchestration release>_all.deb

 Example: contrail-install-packages_1.10-18~icehouse_all.deb

 Installing the above package will create a local contrail repository on the server.
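
As a concrete sketch (the filename below is illustrative; use the actual release/build you downloaded), the wrapper package installs like any Debian package:

 # Install the demo wrapper package on the target server
 dpkg -i contrail-install-packages_1.10-18~icehouse_all.deb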

1.3.1 Getting the packages

There are three ways to get the contrail packages:

Note: The instructions for these methods are restricted content on opencontrail.org; log in as a registered user to view them (registration is free).

2 Quick start install for demo and POC

2.1  Single server all in one setup (including Openstack)

Start with a server that has at least 4 x86 cores, 32GB of RAM, and more than 128GB of disk. Install base Ubuntu 12.04.3.

http://old-releases.ubuntu.com/releases/12.04.1/ubuntu-12.04.3-server-amd64.iso

The server needs a statically configured IPv4 address and a hostname that resolves.

Now you can follow method 3 of "Getting the packages".

Note: This step is restricted content on opencontrail.org; log in as a registered user to view it (registration is free).

This will install the utilities and a tarball of all packages needed to install Openstack + OpenContrail in /opt/contrail/

Next, a local repo needs to be created for the installation to work. The following step modifies the apt sources list (/etc/apt/sources.list).
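
A minimal sketch of that step, assuming the wrapper package unpacked its contents under /opt/contrail/contrail_packages (the exact path and script name may vary by release):

 # Create the local apt repo from the unpacked package tarball
 cd /opt/contrail/contrail_packages
 ./setup.sh    # adds a local repo entry to /etc/apt/sources.list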

With the repo created and the contrail installation utilities in place, the Openstack + OpenContrail install can be done using "fabric". Fabric scripts are provided as part of the installation of the above package. The fabric scripts run off a cluster definition file, which needs to be created at /opt/contrail/utils/fabfile/testbeds/testbed.py

 cd /opt/contrail/utils/fabfile/testbeds/
 cp testbed_singlebox_example.py  testbed.py

Edit testbed.py

  1. Replace "root@1.1.1.1" with "root@<server ip>"
  2. Replace "secret" with "<root password>"
  3. Replace "secret123" with "<Openstack admin password>"
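
If you prefer to script those edits, a sed sketch follows; the angle-bracket values are placeholders, and "secret123" must be substituted before "secret":

 cd /opt/contrail/utils/fabfile/testbeds
 sed -i -e 's/secret123/<Openstack admin password>/g' \
        -e 's/secret/<root password>/g' \
        -e 's/root@1.1.1.1/root@<server ip>/g' testbed.py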

Note: the stock kernel shipped with Ubuntu 12.04.3 or 12.04.4 LTS is older than 3.13.0-34. In that case, the following Fabric task can be used to upgrade the kernel to 3.13.0-34 on all nodes.

cd /opt/contrail/utils; fab upgrade_kernel_all  #This will reboot the server

Note: The scripts below assume that the interface configuration is in /etc/network/interfaces and not in /etc/network/interfaces.d/<name>.cfg. Move any interface configuration from such files into /etc/network/interfaces and then remove them. For example, if eth0's IP is specified in testbed.py and /etc/network/interfaces.d/eth0.cfg exists, merge its contents into /etc/network/interfaces and move the file aside.
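
A sketch of that cleanup for a hypothetical eth0:

 # Merge the fragment into the main file, then move it out of the way
 cat /etc/network/interfaces.d/eth0.cfg >> /etc/network/interfaces
 mv /etc/network/interfaces.d/eth0.cfg ~/eth0.cfg.bak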

If the Openstack Icehouse release is being installed on Ubuntu 12.04 (precise), execute the following command:

apt-get install nova-compute-libvirt=1:2014.1-0ubuntu1~cloud0

If the Openstack Icehouse release is being installed on Ubuntu 14.04 (trusty), execute the following command:

apt-get install nodejs=0.8.15-1contrail1

The next step is to install the packages and provision the cluster.

 cd /opt/contrail/utils
 fab install_contrail
 fab setup_all  #this will reboot the machine

You now have a fully functional Openstack + OpenContrail system.

You can access the Horizon web UI at http://<server-ip>/horizon

You can access the OpenContrail web UI at http://<server-ip>:8080
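
To sanity-check the node, the contrail-status utility installed with the contrail packages summarizes service health:

 contrail-status    # every contrail service should report 'active'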

2.2  Multiple-server setup for POC (including Openstack)

This description assumes there are 5 servers (host1, host2, …, host5). We will deploy the system with three OpenContrail controllers, one Openstack controller, and two compute nodes, one of which acts as the simple gateway. Each OpenContrail controller will run all the contrail nodes except compute.

host1: OpenContrail-controller, Openstack-controller
host2: OpenContrail-controller
host3: OpenContrail-controller
host4: Compute-node
host5: Simple-gateway (vgw)

Let's assume the following IP addresses:

host1: 1.1.1.1
host2: 1.1.1.2
host3: 1.1.1.3
host4: 1.1.1.4
host5: 1.1.1.5

Let's assume the following hostnames:

host1: a0s1
host2: a0s2
host3: a0s3
host4: a0s4
host5: a0s5

Let's assume the root password for these servers is "secret".

Note: The servers don't have to be on the same (V)LAN; they can be anywhere on the IPv4 network. The simple gateway needs a second interface on the external network, and the subnet used for the floating IP pool should have the simple gateway as its next hop on the default router of the external network. You can skip the simple-gateway configuration if you don't need it.

Install each server with base Ubuntu 12.04.3.

http://old-releases.ubuntu.com/releases/12.04.1/ubuntu-12.04.3-server-amd64.iso

Each server needs a statically configured IPv4 address and a hostname that resolves.

On all hosts you can follow method 3 of "Getting the packages".

Note: This step is restricted content on opencontrail.org; log in as a registered user to view it (registration is free).

This will enable all the hosts to access the repositories.

Next, execute the following command on host1.

curl -L http://www.opencontrail.org/ubuntu-repo/install-tools-key-and-prep | sudo HOSTS="1.1.1.1 1.1.1.2 1.1.1.3 1.1.1.4 1.1.1.5" sh

This will install the provisioning utilities needed to bring up Openstack + OpenContrail in /opt/contrail/. It will also generate a testbed-specific ssh key and add it to ~/.ssh/authorized_keys on all the hosts.
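
You can verify the key distribution with a quick loop (a sketch; each host should echo its hostname without prompting for a password):

 for h in 1.1.1.1 1.1.1.2 1.1.1.3 1.1.1.4 1.1.1.5; do ssh root@$h hostname; done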

Now the OpenContrail + Openstack install can be done using "fabric". Fabric scripts are provided as part of the installation of the above package. The fabric scripts run off a cluster definition file, which needs to be created at /opt/contrail/utils/fabfile/testbeds/testbed.py

 cd /opt/contrail/utils/fabfile/testbeds/
 cp testbed_multibox_example.py testbed.py

Note: The example testbed file is defined for 10 hosts, but that does not matter; we will modify it appropriately.

Edit testbed.py to include the following:

 from fabric.api import env

 #Management ip addresses of hosts in the cluster
 host1 = 'root@1.1.1.1'
 host2 = 'root@1.1.1.2'
 host3 = 'root@1.1.1.3'
 host4 = 'root@1.1.1.4'
 host5 = 'root@1.1.1.5'

 #External routers, if any
 ext_routers = []

 #Autonomous system number
 router_asn = 64512

 #Host from which the fab commands are triggered to install and provision
 host_build = 'root@1.1.1.1'

 #Role definition of the hosts
 env.roledefs = {
     'all': [host1, host2, host3, host4, host5],
     'database': [host1, host2, host3],
     'cfgm': [host1, host2, host3],
     'control': [host1, host2, host3],
     'collector': [host1, host2, host3],
     'webui': [host1],
     'openstack': [host1],
     'compute': [host4],
     'vgw': [host5],
     'build': [host_build],
 }

 env.hostnames = {
     'all': ['a0s1', 'a0s2', 'a0s3', 'a0s4', 'a0s5']
 }

 #Openstack admin password
 env.openstack_admin_password = 'secret123'
 env.password = 'secret'

 #Passwords of each host
 env.passwords = {
     host1: 'secret',
     host2: 'secret',
     host3: 'secret',
     host4: 'secret',
     host5: 'secret',
     host_build: 'secret',
 }

 #For reimage purposes
 env.ostypes = {
     host1: 'ubuntu',
     host2: 'ubuntu',
     host3: 'ubuntu',
     host4: 'ubuntu',
     host5: 'ubuntu',
 }

 env.vgw = {
     host5: {
         'vgw1': {
             'vn': 'default-domain:admin:public:public',
             'ipam-subnets': ['<public-subnet>'],
             'gateway-routes': ['0/0'],
         },
     },
 }
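
Since testbed.py is plain Python, a quick syntax check before running fab can catch quoting or comma mistakes early:

 python -m py_compile /opt/contrail/utils/fabfile/testbeds/testbed.py && echo testbed.py OK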

Note: the stock kernel shipped with Ubuntu 12.04.3 or 12.04.4 LTS is older than 3.13.0-34. In that case, the following Fabric task can be used to upgrade the kernel to 3.13.0-34 on all nodes.

cd /opt/contrail/utils; fab upgrade_kernel_all  #This will reboot the servers

Note: The scripts below assume that the interface configuration is in /etc/network/interfaces and not in /etc/network/interfaces.d/<name>.cfg. Move any interface configuration from such files into /etc/network/interfaces and remove /etc/network/interfaces.d/<name>.cfg on every host.

Finally, install the relevant packages and provision the system with the following commands.

If the Openstack Icehouse release is being installed on Ubuntu 12.04 (precise), first execute the following commands:

ssh root@1.1.1.4 apt-get -y install nova-compute-libvirt=1:2014.1-0ubuntu1~cloud0
ssh root@1.1.1.5 apt-get -y install nova-compute-libvirt=1:2014.1-0ubuntu1~cloud0

If the Openstack Icehouse release is being installed on Ubuntu 14.04 (trusty), first execute the following command on host1:

apt-get install nodejs=0.8.15-1contrail1

The next step is to install the packages and provision the cluster.

 cd /opt/contrail/utils
 fab install_contrail
 fab setup_all                    #this will reboot the compute hosts

You now have a fully functional Openstack + OpenContrail system.

You can access the Horizon web UI at http://1.1.1.1/horizon

You can access the OpenContrail web UI at http://1.1.1.1:8080
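
As in the single-server case, contrail-status can be used to sanity-check each controller (a sketch):

 for h in 1.1.1.1 1.1.1.2 1.1.1.3; do ssh root@$h contrail-status; done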

2.3  Deploying Contrail with existing Openstack

Often, users have an existing Openstack cluster in which Contrail will be used for networking. This description assumes there are 6 servers (host1, host2, …, host6). We will deploy the system with three OpenContrail controllers, one Openstack controller, one simple gateway, and a compute node. Each OpenContrail controller will run all the contrail nodes except compute.

Note: For the steps below to work, it is assumed that the Openstack controller is already installed and running, and that the compute nodes have the nova-compute package installed.

host1: OpenContrail-controller
host2: OpenContrail-controller
host3: OpenContrail-controller
host4: Compute-node
host5: Simple-gateway (vgw)
host6: Openstack-controller

Let's assume the following IP addresses:

host1: 1.1.1.1
host2: 1.1.1.2
host3: 1.1.1.3
host4: 1.1.1.4
host5: 1.1.1.5
host6: 1.1.1.6

Let's assume the following hostnames:

host1: a1s01
host2: a1s02
host3: a1s03
host4: a1s04
host5: a1s05
host6: a1s06

Let's assume the root password for these servers is "secret".

Note: The servers don't have to be on the same (V)LAN; they can be anywhere on the IPv4 network. The simple gateway needs a second interface on the external network, and the subnet used for the floating IP pool should have the simple gateway as its next hop on the default router of the external network. You can skip the simple-gateway configuration if you don't need it.

Install each OpenContrail controller server with base Ubuntu 12.04.3.
http://old-releases.ubuntu.com/releases/12.04.1/ubuntu-12.04.3-server-amd64.iso
Each server needs a statically configured IPv4 address and a hostname that resolves.

On host1-host5 you can follow method 3 of "Getting the packages" by doing the following on host1:

Note: This step is restricted content on opencontrail.org; log in as a registered user to view it (registration is free).

This will enable the compute and OpenContrail controller hosts to access the repositories.

Next, execute the following command on host1.

curl -L http://www.opencontrail.org/ubuntu-repo/install-tools-key-and-prep | sudo HOSTS="1.1.1.1 1.1.1.2 1.1.1.3 1.1.1.4 1.1.1.5" sh

This will install the provisioning utilities needed to bring up Openstack + OpenContrail in /opt/contrail/. It will also generate a testbed-specific ssh key and add it to ~/.ssh/authorized_keys on all the compute and OpenContrail controller hosts.

Now the contrail install can be done using "fabric". Fabric scripts are provided as part of the installation of the above package. The fabric scripts run off a cluster definition file, which needs to be created at /opt/contrail/utils/fabfile/testbeds/testbed.py

 cd /opt/contrail/utils/fabfile/testbeds/
 cp testbed_multibox_example.py testbed.py

Note: The example testbed file is defined for 10 hosts, but that does not matter; we will modify it appropriately.

Edit testbed.py to include the following:

 from fabric.api import env

 #Management ip addresses of hosts in the cluster
 host1 = 'root@1.1.1.1'
 host2 = 'root@1.1.1.2'
 host3 = 'root@1.1.1.3'
 host4 = 'root@1.1.1.4'
 host5 = 'root@1.1.1.5'
 host6 = 'root@1.1.1.6'

 #External routers, if any
 ext_routers = []

 #Autonomous system number
 router_asn = 64512

 #Host from which the fab commands are triggered to install and provision
 host_build = 'root@1.1.1.1'

 #Role definition of the hosts
 env.roledefs = {
     'all': [host1, host2, host3, host4, host5, host6],
     'database': [host1, host2, host3],
     'cfgm': [host1, host2, host3],
     'control': [host1, host2, host3],
     'collector': [host1, host2, host3],
     'webui': [host1],
     'openstack': [host6],
     'compute': [host4],
     'vgw': [host5],
     'build': [host_build],
 }

 env.hostnames = {
     'all': ['a1s01', 'a1s02', 'a1s03', 'a1s04', 'a1s05', 'a1s06']
 }

 #Openstack admin password
 env.openstack_admin_password = 'secret123'
 env.password = 'secret'

 #Passwords of each host
 env.passwords = {
     host1: 'secret',
     host2: 'secret',
     host3: 'secret',
     host4: 'secret',
     host5: 'secret',
     host6: 'secret',
     host_build: 'secret',
 }

 #For reimage purposes
 env.ostypes = {
     host1: 'ubuntu',
     host2: 'ubuntu',
     host3: 'ubuntu',
     host4: 'ubuntu',
     host5: 'ubuntu',
     host6: 'ubuntu',
 }

 env.vgw = {
     host5: {
         'vgw1': {
             'vn': 'default-domain:admin:public:public',
             'ipam-subnets': ['<public-subnet>'],
             'gateway-routes': ['0/0'],
         },
     },
 }

 env.openstack = {
     'service_token': '<admin-token>',  #Common service token for all openstack services
 }
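
The service token must match the one already used by the existing Openstack services. A hedged way to find it on the Openstack controller (host6) in Icehouse-era deployments is to read keystone's configuration; the key name and location can vary by release:

 # On host6: the shared admin/service token, if one is configured
 grep '^admin_token' /etc/keystone/keystone.conf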

Note: the stock kernel shipped with Ubuntu 12.04.3 or 12.04.4 LTS is older than 3.13.0-34. In that case, the following Fabric task can be used to upgrade the kernel to 3.13.0-34 on the compute node and simple gateway.

 cd /opt/contrail/utils
 fab upgrade_kernel_node:root@1.1.1.4
 fab upgrade_kernel_node:root@1.1.1.5
 ssh root@1.1.1.4 reboot
 ssh root@1.1.1.5 reboot

Note: The scripts below assume that the interface configuration is in /etc/network/interfaces and not in /etc/network/interfaces.d/<name>.cfg. Move any interface configuration from such files into /etc/network/interfaces and remove /etc/network/interfaces.d/<name>.cfg on all the compute nodes and the simple gateway.

cd /opt/contrail/utils
fab install_without_openstack:manage_nova_compute='no'
fab setup_without_openstack:manage_nova_compute='no'      #this will reboot the compute nodes

Now, on each compute node, execute the following to configure libvirtd and nova-compute:

# Append the device ACL; tee runs under sudo, unlike a plain '>>' redirect
sudo tee -a /etc/libvirt/qemu.conf > /dev/null <<'EOF'
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet","/dev/net/tun",
]
EOF
service libvirt-bin restart
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://<keystone-ip>:5000/v2.0
service nova-compute restart

On the Openstack controller, ensure the 'neutron_url' key in the 'DEFAULT' section of /etc/nova/nova.conf has the OpenContrail controller's IP address, so that it looks like this:

 [DEFAULT]
 neutron_url = http://1.1.1.1:9696

Note: Later versions of Openstack have this as:

 [neutron]
 url = http://1.1.1.1:9696

Then restart the nova-api service on the Openstack controller:

service nova-api restart
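
The same edit can be scripted with openstack-config (the utility used above on the compute nodes); which section and key to set depends on the Openstack release:

 # Icehouse-era layout
 openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://1.1.1.1:9696
 # Later releases use the [neutron] section instead:
 # openstack-config --set /etc/nova/nova.conf neutron url http://1.1.1.1:9696
 service nova-api restart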

You now have a fully functional Openstack + OpenContrail system.
You can access the OpenContrail web UI at http://1.1.1.1:8080
