Automated deployment of OpenContrail with SaltStack

Note: This post is taken from the tcpcloud blog. Click here for the original post.

This week, tcp cloud® launched a new version of TCP Virtual Private Cloud built on OpenContrail as the SDN/NFV controller. We decided on this networking setup (neutron plugin) after more than 6 months of development and 2 months of running a production environment on a vanilla OpenStack implementation with a highly available Neutron L3 agent. We have worked on the installation, development, testing and measurement of OpenContrail networking since the OpenStack Summit in Atlanta, where we saw it for the first time. Our main contribution to this project is the development of an automated deployment for a complete OpenStack infrastructure with OpenContrail as the neutron plugin.

OpenContrail is released under an Apache 2.0 License by Juniper. To understand how it works you should read the slides and overview at opencontrail.org, as well as the documentation available at juniper.net or the book Day One Understanding OpenContrail.

This post will be mostly technical, but I will not cover what OpenContrail actually does, how to deploy a whole OpenStack, or how powerful this solution really is. I am going to explain how to use our salt-opencontrail-formula for integration with an arbitrary OpenStack.

I have to say that it was a bit of a challenge for us, because it is not a common vendor plugin of the kind we are used to working with, and it is not actually part of the official OpenStack community repositories (maybe in the Juno release Contrail will be included as an official vendor plugin). As John Deatherage wrote on his blog, there are several ways to deploy Contrail, but if you want a production system you should not use any of them. The reason is that the default deployment is done by Fabric and installs the whole OpenStack environment along with the Contrail services. You have no possibility to customise the OpenStack part, and you do not know how it is actually configured. There is no official documentation for manual deployment except GitHub. OpenStack is a modular system with various architectures and components, so you should be able to define its properties yourself. This is why we refactored the installation code from Fabric and created a new SaltStack formula.

PREPARE ENVIRONMENT

We use a Red Hat based OpenStack Havana running on CentOS 6.4 and OpenContrail 1.05. Download the RPM package from the Juniper website and create a local YUM repository for all nodes in the infrastructure. Do not use RDO OpenStack Havana, because OpenContrail contains modified nova and neutron packages. Basically you can prepare these nodes with CentOS 6.4:

  • OpenStack Controller – Keystone, Glance, Cinder, Nova, Dashboard, Swift, Heat, Ceilometer, etc.
  • OpenContrail Controller – Control node, Config node, Collector node, Neutron.
  • OpenStack Compute node – Nova compute, OpenContrail compute (vRouter).
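Setting up the local YUM repository is a matter of generating metadata with createrepo and pointing every node at it. A minimal sketch follows; the directory, repo id and file paths are assumptions, not from the original post:

```shell
# Sketch of a local YUM repository for the OpenContrail packages.
# Directory and repo id are assumptions; adjust to your environment.
REPO_DIR=/tmp/contrail-repo        # in production e.g. /opt/contrail/repo, served over HTTP
mkdir -p "$REPO_DIR"
# ...unpack the RPMs from the Juniper package into $REPO_DIR, then run:
#   createrepo "$REPO_DIR"

# Repo definition to distribute to every node
# (normally written to /etc/yum.repos.d/opencontrail.repo):
cat > /tmp/opencontrail.repo <<EOF
[opencontrail]
name=OpenContrail 1.05 local repository
baseurl=file://$REPO_DIR
enabled=1
gpgcheck=0
EOF
```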

The default Fabric automation runs setup_all with the following set of tasks:

  • Executing task ‘setup_all’
  • Executing task ‘bash_autocomplete_systemd’
  • Executing task ‘setup_database’
  • Executing task ‘setup_openstack’
  • Executing task ‘setup_cfgm’
  • Executing task ‘setup_control’
  • Executing task ‘setup_collector’
  • Executing task ‘setup_webui’
  • Executing task ‘setup_vrouter’
  • Executing task ‘prov_control_bgp’
  • Executing task ‘prov_external_bgp’
  • Executing task ‘compute_reboot’

In their place we have created these sls roles, which can be used independently and even redundantly for high availability (e.g. 2 control nodes):

  • common.sls – common configuration for all roles.
  • database.sls – configuration of the OpenContrail database.
  • config.sls – install and configure config node.
  • control.sls – install and configure control node.
  • collector.sls – install and configure collector node.
  • compute.sls – install and configure compute node.
  • web.sls – install modules into OpenStack Dashboard (Horizon).
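With the roles split like this, assigning them to minions comes down to the Salt top file. A minimal sketch, assuming the formula directory is named opencontrail and the minion IDs are hypothetical:

```yaml
# /srv/salt/top.sls -- example role assignment; minion IDs are hypothetical
base:
  'ctl01*':                  # OpenContrail controller
    - opencontrail.database
    - opencontrail.config
    - opencontrail.control
    - opencontrail.collector
    - opencontrail.web
  'cmp*':                    # compute nodes
    - opencontrail.compute
```

common.sls does not need to be listed here if the other states include it themselves.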

I cover just the formula and installation for OpenContrail, but at tcp cloud we have created formulas for the whole OpenStack environment, including keystone, mysql, glance, cinder, nova and heat, with modular neutron, cinder and nova plugins. These formulas might be released in the future; for now they are included in the TCP Private Cloud product. For this reason I describe the manual changes inside the OpenStack configuration files that differ from http://docs.openstack.org.

DEPLOY OPENSTACK

You should install the OpenStack environment (from the Contrail repository only) and customize these specific projects:

NOVA

Set the libvirt VIF driver in /etc/nova/nova.conf on the Controller and Compute nodes:

libvirt_vif_driver = nova_contrail_vif.contrailvif.VRouterVIFDriver 

NEUTRON

Install openstack-neutron-server and openstack-neutron-contrail on the Controller node. The following must be configured:

/etc/neutron/neutron.conf

 core_plugin = neutron.plugins.juniper.contrail.contrailplugin.ContrailPlugin

/etc/neutron/plugins/opencontrail/ContrailPlugin.ini

 [APISERVER]
 api_server_ip = 10.0.106.34
 api_server_port = 8082
 multi_tenancy = False

 [KEYSTONE]
 admin_user=admin
 admin_password=pswd
 admin_tenant_name=admin

HORIZON

Install contrail-openstack-dashboard and verify this setting in /etc/openstack-dashboard/local_settings:

HORIZON_CONFIG['customization_module'] = 'contrail_openstack_dashboard.overrides'

PREPARE OPENCONTRAIL CONTROLLER

Because this is a standalone solution, you have to install a basic rabbitmq-server and cassandra with default settings. Next, haproxy must be set up to enable the HA options. These are separate SaltStack formulas, so configure haproxy.cfg manually.
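As a sketch, a haproxy section load-balancing the Contrail API on port 8082 across two config nodes could look like this; the backend names and addresses are illustrative assumptions, not from the post:

```
listen contrail_api
  bind 10.0.106.34:8082
  balance roundrobin
  option tcplog
  server config01 10.0.106.35:8082 check    # hypothetical config node
  server config02 10.0.106.36:8082 check    # hypothetical config node
```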

CREATE PILLAR FOR ROLES

Now you have to create a pillar with definitions for all the roles located on the OpenContrail controller. As you can see, some definitions are repeated, because the roles are independent and can run on different servers and scale out.

 opencontrail:
   common:
     identity:
       engine: keystone
       host: 10.0.106.30
       port: 35357
       token: pwd
       password: pwd
     network:
       engine: neutron
       host: 10.0.106.34
       port: 9696
   config:
     enabled: true
     id: 1
     network:
       engine: neutron
       host: 10.0.106.34
       port: 9696
     bind:
       address: 10.0.106.34
     database:
       members:
       - host: 10.0.106.34
         port: 9160
     cache:
       host: 10.0.106.34
     identity:
       engine: keystone
       region: RegionOne
       host: 10.0.106.30
       port: 35357
       user: admin
       password: pwd
       token: pwd
       tenant: admin
     members:
     - host: 10.0.106.34
       id: 1
   control:
     enabled: true
     bind:
       address: 10.0.106.34
     master:
       host: 10.0.106.34
     members:
     - host: 10.0.106.34
       id: 1
   collector:
     enabled: true
     bind:
       address: 10.0.106.34
     master:
       host: 10.0.106.34
     database:
       members:
       - host: 10.0.106.34
         port: 9160
   database:
     enabled: true
     name: 'Contrail'
     original_token: 0
     data_dirs:
     - /home/cassandra
     bind:
       host: 10.0.106.34
       port: 9042
       rpc_port: 9160
     members:
     - 10.0.106.34
   web:
     enabled: True
     bind:
       address: 10.0.106.34
     master:
       host: 10.0.106.34
     cache:
       engine: redis
       host: 10.0.106.34
       port: 6379
     members:
     - host: 10.0.106.34
       id: 1
     identity:
       engine: keystone
       host: 10.0.106.30
       port: 35357
       user: admin
       password: pwd
       token: pwd
       tenant: admin
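Scaling out for high availability is then mostly a matter of extending the members lists. For example, a control section with two redundant nodes could look like this (the second address and id are hypothetical):

```yaml
# fragment under the opencontrail: key -- second member is hypothetical
control:
  enabled: true
  bind:
    address: 10.0.106.34
  master:
    host: 10.0.106.34
  members:
  - host: 10.0.106.34
    id: 1
  - host: 10.0.106.35      # second control node, hypothetical address
    id: 2
```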

OpenStack Controller:

 opencontrail:
   common:
     identity:
       engine: keystone
       host: 10.0.106.30
       port: 35357
       token: pwd
       password: pwd
     network:
       engine: neutron
       host: 10.0.106.34
       port: 9696

OpenStack Compute:

 opencontrail:
   common:
     identity:
       engine: keystone
       host: 10.0.106.30
       port: 35357
       token: pwd
       password: pwd
     network:
       engine: neutron
       host: 10.0.106.34
       port: 9697
   compute:
     enabled: True
     hostname: compute1
     discovery:
       host: 10.0.106.34
     interface:
       dev: bond0
       default_pmac: a0:36:9f:02:95:2c
       address: 10.0.106.102
       gateway: 10.0.106.1
       mask: 24
       dns: 8.8.8.8
       mtu: 9000
     control_instances: 1

DEPLOY OPENCONTRAIL

Run this command on all infrastructure nodes:

salt-call state.sls opencontrail

DEPLOY COMPUTE NODES

Add the virtual router:

python /etc/contrail/provision_vrouter.py --host_name compute1 --host_ip 10.0.106.106 \
 --api_server_ip 10.0.106.34 --oper add --admin_user admin --admin_password pwd --admin_tenant_name admin
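With more than a few compute nodes, this step is worth wrapping in a loop. A sketch follows; the hostname/IP pairs are assumptions, and the provisioning command is only echoed here so the loop can be dry-run safely (remove the echo to actually execute it):

```shell
# Provision a vRouter entry for each compute node against the Contrail API.
# Node list is hypothetical; "echo" makes this a dry run.
API_IP=10.0.106.34
for node in compute1:10.0.106.106 compute2:10.0.106.107; do
  name=${node%%:*}    # hostname: part before the colon
  ip=${node##*:}      # IP: part after the colon
  echo python /etc/contrail/provision_vrouter.py --host_name "$name" --host_ip "$ip" \
    --api_server_ip "$API_IP" --oper add \
    --admin_user admin --admin_password pwd --admin_tenant_name admin
done
```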

ADD CONTROL BGP

Finally you have to add the control BGP node through a python script stored in /etc/contrail on the OpenContrail Controller:

python /etc/contrail/provision_control.py --api_server_ip 10.0.106.34 --api_server_port 8082 \
 --host_name network1.contrail.domain.com --host_ip 10.0.106.34 --router_asn 64512

Now you should have OpenContrail deployed. Run your first instance and try out the amazing network performance. For the whole automation, contact us about TCP Private Cloud.

Jakub Pavlík – @JakubPav
TCP Cloud platform engineer