RDO Openstack & OpenContrail Integration

Contents

1. RDO Openstack and Opencontrail Integration

1.1 Topology
1.2 Pre-requisites
1.3 CentOS and Red Hat versions
1.4 Kernel version

2. RDO Openstack

2.1 Repo settings
2.2 Installation
2.3 Provisioning
2.4 Verification

3. OpenContrail

3.1 Repo settings
3.2 Installation
3.3 Provisioning
3.4 Vrouter setup
3.5 Vrouter, control and metadata Provision
3.6 Verification

4. Integration touch points

4.1 Keystone endpoint update
4.2 Opencontrail Neutron configuration
4.3 Opencontrail API configuration
4.4 Nova configuration

5. Conclusion

1. RDO Openstack and Opencontrail Integration

RDO is a community of people using and deploying Openstack on Red Hat Enterprise Linux, Fedora and distributions derived from these, such as CentOS, Scientific Linux and others.

OpenContrail is an extensible system that can be used for multiple networking use cases among which the primary ones are:

  1. Cloud Networking (IaaS, SaaS and VPC)
  2. Network Function Virtualization (VAS and edge service offerings with extensible service chains)

RDO Openstack and OpenContrail integration allows cloud networking and NFV providers to operate a Red Hat based (or variant, such as CentOS) cloud platform with OpenContrail networking.

1.1   Topology

[Figure: rdo_openstack_opencontrail topology]

In this topology there are 3 nodes, running the RDO openstack controller, the opencontrail controller and the opencontrail Vrouter. A simple Clos configuration is set up between leaf and spine switches, and a gateway router is configured for sending and receiving external traffic from outside the cloud.

** Please note the setup will also work in a virtualized environment, with the RDO openstack controller, opencontrail controller and opencontrail compute running as virtual machines.

1.2    Pre-requisites

Make sure that the following are configured.

  1. Network configuration for server, switch and gateway router
  2. Network connectivity for the servers
  3. Servers have virtualization enabled in BIOS. If running in a VM environment, nested virtualization should be enabled on the host (see the check after this list).
  4. RDO openstack controller should have Internet connectivity.
  5. Contrail controller and Vrouter nodes need Internet connectivity if opencontrail packages are being downloaded from the Amazon S3 servers. ** In case opencontrail packages are pre-downloaded onto a repo server, that repo server can be configured in yum.repos.d instead.
  6. Servers are imaged with CentOS 6.5 minimal.
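
The virtualization requirement in item 3 can be checked as follows (illustrative commands; the kvm_intel path assumes an Intel host with the kvm module loaded, use kvm_amd on AMD hardware):

egrep -c '(vmx|svm)' /proc/cpuinfo          # non-zero means virtualization extensions are present
cat /sys/module/kvm_intel/parameters/nested # Y means nested virtualization is enabled on the host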

1.3 CentOS and Red Hat versions

The CentOS version used for this setup is 6.5. Red Hat version 7 can also be used with the RDO Juno and Icehouse releases. Opencontrail packages are built for Red Hat 7 but at the time of this integration were not fully supported; hence the setup is based on CentOS 6.5.

1.4 Kernel Version

The kernel version used is 2.6.32-431 for the RDO openstack, opencontrail controller and compute nodes.
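
Both can be verified quickly on each node (the expected strings assume CentOS 6.5):

cat /etc/redhat-release   # expect: CentOS release 6.5 (Final)
uname -r                  # expect: 2.6.32-431.el6.x86_64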

2. RDO Openstack

RDO openstack is the community version of the Red Hat Enterprise Linux OpenStack Platform. Red Hat actively upstreams code and features to openstack and is an active contributor. RDO openstack includes all the components of openstack, from the identity service to compute and telemetry.

More details on the project can be found at https://www.rdoproject.org/Main_Page

2.1     Repo settings

On the RDO openstack controller node, ensure that the OS version is CentOS 6.5 with kernel version 2.6.32-431.

A) Import the EPEL (Extra Packages for Enterprise Linux) GPG key for release 6

rpm --import http://mirrors.nayatel.com/epel//RPM-GPG-KEY-EPEL-6

B) Set EPEL repo

rpm -Uvh http://mirrors.nayatel.com/epel/6/x86_64/epel-release-6-8.noarch.rpm

OR

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

C) Verify repo

yum repolist

D) Install iptables-services and restart iptables

yum install iptables-services
service iptables restart

E) Yum update

yum update -y

F) Install the RDO Icehouse release repo for openstack install and provisioning

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm

2.2 Installation

Once the repo is set up, the system is ready for installation. RDO uses packstack for downloading and installing packages. Foreman can also be used for deploying RDO openstack. Packstack uses its own puppet manifests, templates and modules to deploy openstack.

A) Install openstack-packstack

yum install openstack-packstack

2.3 Provisioning

Once packstack is successfully installed, generate the answers file, which can later be edited to provide values relevant to the deployment.

A) Generate answers file

packstack --gen-answer-file rdo_icehouse.txt

(This command generates a text file with all default configuration values for installing openstack, along with the required passwords and tokens. Once the file is generated with defaults, it can be edited to replace the tokens, the IP addresses of the hosts, and the roles to provision on those hosts.)

B) Edit answers file

Example: rdo_icehouse.txt


==========================================================================
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

# Set to 'y' if you would like Packstack to install MariaDB
CONFIG_MARIADB_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Image
# Service (Glance)
CONFIG_GLANCE_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Block
# Storage (Cinder)
CONFIG_CINDER_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Compute
# (Nova)
CONFIG_NOVA_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Networking (Neutron). Otherwise Nova Network will be used.
CONFIG_NEUTRON_INSTALL=n

# Set to 'y' if you would like Packstack to install OpenStack
# Dashboard (Horizon)
CONFIG_HORIZON_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Object
# Storage (Swift)
CONFIG_SWIFT_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Metering (Ceilometer)
CONFIG_CEILOMETER_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Orchestration (Heat)
CONFIG_HEAT_INSTALL=y

# Set to 'y' if you would like Packstack to install the OpenStack
# Client packages. An admin "rc" file will also be installed
CONFIG_CLIENT_INSTALL=y

# Comma separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=172.17.38.152

# Set to 'y' if you would like Packstack to install Nagios to monitor
# OpenStack hosts
CONFIG_NAGIOS_INSTALL=y

# Comma separated list of servers to be excluded from installation in
# case you are running Packstack the second time with the same answer
# file and don't want Packstack to touch these servers. Leave plain if
# you don't need to exclude any server.
EXCLUDE_SERVERS=

# Set to 'y' if you want to run OpenStack services in debug mode.
# Otherwise set to 'n'.
CONFIG_DEBUG_MODE=n

# The IP address of the server on which to install OpenStack services
# specific to controller role such as API servers, Horizon, etc.
CONFIG_CONTROLLER_HOST=10.102.67.87

# The list of IP addresses of the server on which to install the Nova
# compute service
#CONFIG_COMPUTE_HOSTS=

# The list of IP addresses of the server on which to install the
# network service such as Nova network or Neutron
#CONFIG_NETWORK_HOSTS=10.102.67.87

# Set to 'y' if you want to use VMware vCenter as hypervisor and
# storage. Otherwise set to 'n'.
CONFIG_VMWARE_BACKEND=n

# Set to 'y' if you want to use unsupported parameters. This should
# be used only if you know what you are doing.Issues caused by using
# unsupported options won't be fixed before next major release.
CONFIG_UNSUPPORTED=n

# The IP address of the VMware vCenter server
#CONFIG_VCENTER_HOST=

# The username to authenticate to VMware vCenter server
CONFIG_VCENTER_USER=n

# The password to authenticate to VMware vCenter server
CONFIG_VCENTER_PASSWORD=

# The name of the vCenter cluster
CONFIG_VCENTER_CLUSTER_NAME=

# (Unsupported!) The IP address of the server on which to install
# OpenStack services specific to storage servers such as Glance and
# Cinder.
CONFIG_STORAGE_HOST=10.102.67.87

# To subscribe each server to EPEL enter "y"
CONFIG_USE_EPEL=y

# A comma separated list of URLs to any additional yum repositories
# to install
CONFIG_REPO=

# To subscribe each server with Red Hat subscription manager, include
# this with CONFIG_RH_PW
CONFIG_RH_USER=

# To subscribe each server with RHN Satellite,fill Satellite's URL
# here. Note that either satellite's username/password or activation
# key has to be provided
CONFIG_SATELLITE_URL=

# To subscribe each server with Red Hat subscription manager, include
# this with CONFIG_RH_USER
CONFIG_RH_PW=

# To enable RHEL optional repos use value "y"
CONFIG_RH_OPTIONAL=y

# Specify a HTTP proxy to use with Red Hat subscription manager
CONFIG_RH_PROXY=

# Specify port of Red Hat subscription manager HTTP proxy
CONFIG_RH_PROXY_PORT=

# Specify a username to use with Red Hat subscription manager HTTP
# proxy
CONFIG_RH_PROXY_USER=

# Specify a password to use with Red Hat subscription manager HTTP
# proxy
CONFIG_RH_PROXY_PW=

# Username to access RHN Satellite
CONFIG_SATELLITE_USER=

# Password to access RHN Satellite
CONFIG_SATELLITE_PW=

# Activation key for subscription to RHN Satellite
CONFIG_SATELLITE_AKEY=

# Specify a path or URL to a SSL CA certificate to use
CONFIG_SATELLITE_CACERT=

# If required specify the profile name that should be used as an
# identifier for the system in RHN Satellite
CONFIG_SATELLITE_PROFILE=

# Comma separated list of flags passed to rhnreg_ks. Valid flags are:
# novirtinfo, norhnsd, nopackages
CONFIG_SATELLITE_FLAGS=

# Specify a HTTP proxy to use with RHN Satellite
CONFIG_SATELLITE_PROXY=

# Specify a username to use with an authenticated HTTP proxy
CONFIG_SATELLITE_PROXY_USER=

# Specify a password to use with an authenticated HTTP proxy.
CONFIG_SATELLITE_PROXY_PW=

# Set the AMQP service backend. Allowed values are: qpid, rabbitmq
CONFIG_AMQP_BACKEND=rabbitmq

# The IP address of the server on which to install the AMQP service
CONFIG_AMQP_HOST=10.102.67.87

# Enable SSL for the AMQP service
CONFIG_AMQP_ENABLE_SSL=n

# Enable Authentication for the AMQP service
CONFIG_AMQP_ENABLE_AUTH=n

# The password for the NSS certificate database of the AMQP service
CONFIG_AMQP_NSS_CERTDB_PW=64f4c2d80c334ddd9173b0f50e182d49

# The port in which the AMQP service listens to SSL connections
CONFIG_AMQP_SSL_PORT=5671

# The filename of the certificate that the AMQP service is going to
# use
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem

# The filename of the private key that the AMQP service is going to
# use
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem

# Auto Generates self signed SSL certificate and key
CONFIG_AMQP_SSL_SELF_SIGNED=y

# User for amqp authentication
CONFIG_AMQP_AUTH_USER=amqp_user

# Password for user authentication
CONFIG_AMQP_AUTH_PASSWORD=90f5257d73b64dc3

# The IP address of the server on which to install MariaDB or IP
# address of DB server to use if MariaDB installation was not selected
CONFIG_MARIADB_HOST=10.102.67.87

# Username for the MariaDB admin user
CONFIG_MARIADB_USER=root

# Password for the MariaDB admin user
CONFIG_MARIADB_PW=a6b67012941b462d

# The password to use for the Keystone to access DB
CONFIG_KEYSTONE_DB_PW=a303dfe6259640af

# The token to use for the Keystone service api
CONFIG_KEYSTONE_ADMIN_TOKEN=60f42f9cb54e47e59a0cf7e90b097b89

# The password to use for the Keystone admin user
CONFIG_KEYSTONE_ADMIN_PW=270acd6786ef4445

# The password to use for the Keystone demo user
CONFIG_KEYSTONE_DEMO_PW=fa106b29ccf64270

# Kestone token format. Use either UUID or PKI
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI

# The password to use for the Glance to access DB
CONFIG_GLANCE_DB_PW=b03b8e2c98ec40c8

# The password to use for the Glance to authenticate with Keystone
CONFIG_GLANCE_KS_PW=8b21c4509d1a4b24

# The password to use for the Cinder to access DB
CONFIG_CINDER_DB_PW=56d1d746dde64b20

# The password to use for the Cinder to authenticate with Keystone
CONFIG_CINDER_KS_PW=0f3ad3d4919a4c65

# The Cinder backend to use, valid options are: lvm, gluster, nfs
CONFIG_CINDER_BACKEND=lvm

# Create Cinder's volumes group. This should only be done for testing
# on a proof-of-concept installation of Cinder. This will create a
# file-backed volume group and is not suitable for production usage.
CONFIG_CINDER_VOLUMES_CREATE=y

# Cinder's volumes group size. Note that actual volume size will be
# extended with 3% more space for VG metadata.
CONFIG_CINDER_VOLUMES_SIZE=20G

# A single or comma separated list of gluster volume shares to mount,
# eg: ip-address:/vol-name, domain:/vol-name
CONFIG_CINDER_GLUSTER_MOUNTS=

# A single or comma seprated list of NFS exports to mount, eg: ip-
# address:/export-name
CONFIG_CINDER_NFS_MOUNTS=

# The password to use for the Nova to access DB
CONFIG_NOVA_DB_PW=09d0f8a58fa0422d

# The password to use for the Nova to authenticate with Keystone
CONFIG_NOVA_KS_PW=7bd74cc65c8b4bf1

# The overcommitment ratio for virtual to physical CPUs. Set to 1.0
# to disable CPU overcommitment
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

# The overcommitment ratio for virtual to physical RAM. Set to 1.0 to
# disable RAM overcommitment
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

# Protocol used for instance migration. Allowed values are tcp and
# ssh. Note that by defaul nova user is created with /sbin/nologin
# shell so that ssh protocol won't be working. To make ssh protocol
# work you have to fix nova user on compute hosts manually.
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp

# Private interface for Flat DHCP on the Nova compute servers
CONFIG_NOVA_COMPUTE_PRIVIF=eth1

# Nova network manager
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

# Public interface on the Nova network server
CONFIG_NOVA_NETWORK_PUBIF=eth0

# Private interface for network manager on the Nova network server
CONFIG_NOVA_NETWORK_PRIVIF=eth1

# IP Range for network manager
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.84.0/22

# IP Range for Floating IP's
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

# Name of the default floating pool to which the specified floating
# ranges are added to
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova

# Automatically assign a floating IP to new instances
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

# First VLAN for private networks
CONFIG_NOVA_NETWORK_VLAN_START=100

# Number of networks to support
CONFIG_NOVA_NETWORK_NUMBER=1

# Number of addresses in each private subnet
CONFIG_NOVA_NETWORK_SIZE=255

# The password to use for Neutron to authenticate with Keystone
CONFIG_NEUTRON_KS_PW=0ccfa1a77b2b4ef4

# The password to use for Neutron to access DB
CONFIG_NEUTRON_DB_PW=814954e9f21246d0

# The name of the bridge that the Neutron L3 agent will use for
# external traffic, or 'provider' if using provider networks
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

# The name of the L2 plugin to be used with Neutron. (eg.
# linuxbridge, openvswitch, ml2)
CONFIG_NEUTRON_L2_PLUGIN=ml2

# Neutron metadata agent password
CONFIG_NEUTRON_METADATA_PW=050906690f644331

# Set to 'y' if you would like Packstack to install Neutron LBaaS
CONFIG_LBAAS_INSTALL=n

# Set to 'y' if you would like Packstack to install Neutron L3
# Metering agent
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n

# Whether to configure neutron Firewall as a Service
CONFIG_NEUTRON_FWAAS=n

# A comma separated list of network type driver entrypoints to be
# loaded from the neutron.ml2.type_drivers namespace.
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan

# A comma separated ordered list of network_types to allocate as
# tenant networks. The value 'local' is only useful for single-box
# testing but provides no connectivity between hosts.
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan

# A comma separated ordered list of networking mechanism driver
# entrypoints to be loaded from the neutron.ml2.mechanism_drivers
# namespace.
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch

# A comma separated  list of physical_network names with which flat
# networks can be created. Use * to allow flat networks with arbitrary
# physical_network names.
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*

# A comma separated list of <physical_network>:<vlan_min>:<vlan_max>
# or <physical_network> specifying physical_network names usable for
# VLAN provider and tenant networks, as well as ranges of VLAN tags on
# each available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=

# A comma separated list of <tun_min>:<tun_max> tuples enumerating
# ranges of GRE tunnel IDs that are available for tenant network
# allocation. Should be an array with tun_max +1 - tun_min > 1000000
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=

# Multicast group for VXLAN. If unset, disables VXLAN enable sending
# allocate broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode. Should be an
# Multicast IP (v4 or v6) address.
CONFIG_NEUTRON_ML2_VXLAN_GROUP=

# A comma separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network
# allocation. Min value is 0 and Max value is 16777215.
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100

# The name of the L2 agent to be used with Neutron
CONFIG_NEUTRON_L2_AGENT=openvswitch

# The type of network to allocate for tenant networks (eg. vlan,
# local)
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local

# A comma separated list of VLAN ranges for the Neutron linuxbridge
# plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999)
CONFIG_NEUTRON_LB_VLAN_RANGES=

# A comma separated list of interface mappings for the Neutron
# linuxbridge plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3
# :br-eth3)
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

# Type of network to allocate for tenant networks (eg. vlan, local,
# gre, vxlan)
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan

# A comma separated list of VLAN ranges for the Neutron openvswitch
# plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999)
CONFIG_NEUTRON_OVS_VLAN_RANGES=

# A comma separated list of bridge mappings for the Neutron
# openvswitch plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3
# :br-eth3)
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=

# A comma separated list of colon-separated OVS bridge:interface
# pairs. The interface will be added to the associated bridge.
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

# A comma separated list of tunnel ranges for the Neutron openvswitch
# plugin (eg. 1:1000)
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=

# The interface for the OVS tunnel. Packstack will override the IP
# address used for tunnels on this hypervisor to the IP found on the
# specified interface. (eg. eth1)
CONFIG_NEUTRON_OVS_TUNNEL_IF=

# VXLAN UDP port
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

# To set up Horizon communication over https set this to 'y'
CONFIG_HORIZON_SSL=n

# PEM encoded certificate to be used for ssl on the https server,
# leave blank if one should be generated, this certificate should not
# require a passphrase
CONFIG_SSL_CERT=

# SSL keyfile corresponding to the certificate if one was entered
CONFIG_SSL_KEY=

# PEM encoded CA certificates from which the certificate chain of the
# server certificate can be assembled.
CONFIG_SSL_CACHAIN=

# The password to use for the Swift to authenticate with Keystone
CONFIG_SWIFT_KS_PW=73539ee52a984c26

# A comma separated list of devices which to use as Swift Storage
# device. Each entry should take the format /path/to/dev, for example
# /dev/vdb will install /dev/vdb as Swift storage device (packstack
# does not create the filesystem, you must do this first). If value is
# omitted Packstack will create a loopback device for test setup
CONFIG_SWIFT_STORAGES=

# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=1

# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=1

# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4

# Shared secret for Swift
CONFIG_SWIFT_HASH=cefa46226a3c49ea

# Size of the swift loopback file storage device
CONFIG_SWIFT_STORAGE_SIZE=2G

# Whether to provision for demo usage and testing. Note that
# provisioning is only supported for all-in-one installations.
CONFIG_PROVISION_DEMO=y

# Whether to configure tempest for testing
CONFIG_PROVISION_TEMPEST=n

# The name of the Tempest Provisioning user. If you don't provide a
# user name, Tempest will be configured in a standalone mode
CONFIG_PROVISION_TEMPEST_USER=

# The password to use for the Tempest Provisioning user
CONFIG_PROVISION_TEMPEST_USER_PW=3bd9d74798b04658

# The CIDR network address for the floating IP subnet
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

# The uri of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

# The revision of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

# Whether to configure the ovs external bridge in an all-in-one
# deployment
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n

# The password used by Heat user to authenticate against MySQL
CONFIG_HEAT_DB_PW=c0fc3b650b9f4211

# The encryption key to use for authentication info in database
CONFIG_HEAT_AUTH_ENC_KEY=f8a1a604a43d48d2

# The password to use for the Heat to authenticate with Keystone
CONFIG_HEAT_KS_PW=5e49bfde738b46d9

# Set to 'y' if you would like Packstack to install Heat CloudWatch
# API
CONFIG_HEAT_CLOUDWATCH_INSTALL=n

# Set to 'y' if you would like Packstack to install Heat with trusts
# as deferred auth method. If not, the stored password method will be
# used.
CONFIG_HEAT_USING_TRUSTS=y

# Set to 'y' if you would like Packstack to install Heat
# CloudFormation API
CONFIG_HEAT_CFN_INSTALL=y

# Name of Keystone domain for Heat
CONFIG_HEAT_DOMAIN=heat

# Name of Keystone domain admin user for Heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin

# Password for Keystone domain admin user for Heat
CONFIG_HEAT_DOMAIN_PASSWORD=1a2d63524415410c

# Secret key for signing metering messages
CONFIG_CEILOMETER_SECRET=f5b59cfe141d4741

# The password to use for Ceilometer to authenticate with Keystone
CONFIG_CEILOMETER_KS_PW=5480608ff7f94980

# The IP address of the server on which to install MongoDB
#CONFIG_MONGODB_HOST=

# The password of the nagiosadmin user on the Nagios server
CONFIG_NAGIOS_PW=ae4cd062316048e5

==========================================================================

***Please note that in the answers file no compute node is configured; that is done as part of the contrail Vrouter setup. The file also does not have any network configuration.

 

C) Provision RDO openstack controller using the answers file

packstack --answer-file=rdo_icehouse.txt

2.4 Verification

Verify on the openstack controller that the keystone, glance and nova endpoints respond, as sketched below. Please note that nova compute is still not set up at this stage.
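
A quick sketch of these checks (standard Icehouse-era CLI commands; packstack drops an admin credentials file at /root/keystonerc_admin):

source /root/keystonerc_admin
keystone endpoint-list   # keystone, glance and nova endpoints should be listed
glance image-list        # glance should respond
nova service-list        # nova services should be up (no compute at this stage)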

Dashboard should be up and can be checked at http://rdo-controller/dashboard

3. OpenContrail

Opencontrail packages can be downloaded in one of 3 ways:

  • Launchpad ppa for contrail. At this time this is only community supported
  • Amazon S3 servers hosting opencontrail official released packages that have passed extensive quality testing
  • Juniper SDN Software downloads. These can be found at support.juniper.net, in the SDN - Contrail Software section.

In this setup, let's discuss option 3, where packages are downloaded from Juniper.

3.1 Repo settings

After the package is downloaded from Juniper software downloads, copy it onto the contrail controller node.

A) Extract the contents of the package

rpm -ivh contrail-install-packages-2.10-46~icehouse.el7.noarch.rpm

B) Run setup.sh that is provided in contrail_packages

/opt/contrail/contrail_packages/setup.sh

**Note: there can be issues with openstack-utils, python-pip and python-netifaces. Please ensure the fedora epel and openstack-packstack repos are set up on the contrail controller, and then yum install the required packages.

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm

C) Once the above step is done, /opt/contrail/contrail_install_repo is created and yum.repos.d is updated to reference it. Verify with:

yum repolist

3.2 Installation

A) Contrail-Database:

Opencontrail uses Cassandra, which can be downloaded from DataStax or from the contrail repo. Make sure Java version 7 is installed:

yum install java-1.7.0-openjdk.x86_64

cat /etc/yum.repos.d/datastax.repo
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/community
enabled = 1
gpgcheck = 0

yum install dsc20-2.0.11-1 cassandra20-2.0.11-1

Also make sure zookeeper is installed.

yum install zookeeper
chkconfig zookeeper-server on
service cassandra start
service zookeeper-server init --myid=1
service zookeeper-server start
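
A couple of quick health checks at this point (illustrative; nodetool ships with the Cassandra packages, and nc may need to be installed separately):

service cassandra status
nodetool status                 # the local Cassandra node should show as UN (Up/Normal)
echo ruok | nc 127.0.0.1 2181   # zookeeper should answer imok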

B) Contrail-Analytics and supervisor

yum install python-keystone
yum install contrail-analytics
yum install supervisor

C) Contrail-Config

yum install contrail-config
yum install ifmap-server-0.3.2-2contrail.el7.noarch.rpm
yum install contrail-config-openstack

D) Contrail-Control:

yum install contrail-control
yum install contrail-dns

E) Contrail-WebUI:

yum --disablerepo=* --enablerepo=contrail_install_repo -y install contrail-web-core-2.10-46.x86_64.rpm
yum --disablerepo=* --enablerepo=contrail_install_repo -y install contrail-web-controller-2.10-46.x86_64.rpm
yum --disablerepo=* --enablerepo=contrail_install_repo -y install contrail-openstack-webui-2.10-46.el7.noarch.rpm

F) AMQP:

yum install rabbitmq-server

G) Neutron server:

yum --disablerepo=* --enablerepo=contrail_install_repo -y install neutron-plugin-contrail-2.10-46.el7.noarch.rpm
yum --disablerepo=* --enablerepo=openstack-icehouse -y install python-neutron
yum install python-httplib2
yum install pyparsing
yum install python-mako
yum install -y python-cliff
yum install -y python-oslo-rootwrap
yum install -y python-markupsafe

3.3 Provisioning

Provisioning of contrail components can be done in 3 ways:

  • Fab scripts (if contrail-fabric-utils-2.10-47.noarch and contrail-setup-2.10-47.el6.noarch are installed)
  • Puppet manifests (https://github.com/Juniper/contrail-puppet) or Contrail Chef recipes (https://github.com/Juniper/contrail-chef) from github. Puppet can be installed by following the steps at https://docs.puppetlabs.com/guides/install_puppet/post_install.html and chef by following the steps at http://gettingstartedwithchef.com/
  • Checking the template files and editing them manually: edit the config file under /etc/contrail for each contrail component and update the values accordingly (a sample is sketched below).
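
For the manual route, the general shape of one such file is sketched below (an illustrative sample only: the parameter names follow contrail 2.x conventions and the IP addresses are placeholders for this topology, so verify against the files actually laid down by the packages):

/etc/contrail/contrail-api.conf
[DEFAULTS]
listen_ip_addr=0.0.0.0
listen_port=8082
cassandra_server_list=10.10.0.9:9160
zk_server_ip=10.10.0.9:2181
rabbit_server=10.10.0.9
log_file=/var/log/contrail/contrail-api.log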

3.4 Vrouter setup

On the Vrouter node, follow the same steps to set up the repo. Once the repo is set up, install the required Vrouter packages:

yum install libvirt
yum install openstack-nova-compute
yum install contrail-openstack-vrouter
yum install contrail-vrouter
yum install python-opencontrail-vrouter-netns
yum install contrail-vrouter-agent

3.5 Vrouter, control and metadata Provision

Edit the /etc/contrail files for the Vrouter agent. Also ensure that the interface vhost0 is configured properly and, most importantly, that the Vrouter kernel module is present under /lib/modules/2.6.32-431.el6.x86_64/extra/net/vrouter/

** 2.6.32-431.el6.x86_64 in the case of CentOS 6.5

Check and create the interface file /etc/sysconfig/network-scripts/ifcfg-vhost0, and modify the bond0 or physical ethX (X being the interface number) used for control and data traffic.
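
A minimal sketch of ifcfg-vhost0 (the address values are placeholders for this topology; verify the exact keys against the contrail documentation for your release):

DEVICE=vhost0
DEVICETYPE=vhost
TYPE=kernel_mode
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.10.0.11
NETMASK=255.255.255.0
NM_CONTROLLED=no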

Execute the following provisioning scripts to set up the control node BGP peering, an external BGP peer, the metadata service and the encapsulation priority if required. These functions can also be configured from the Contrail web GUI.

python /opt/contrail/utils/provision_control.py --host_name <> --host_ip <> --oper add

Example: python /opt/contrail/utils/provision_control.py --host_name contrail-controlNode --host_ip 10.10.0.10 --oper add

python /opt/contrail/utils/provision_mx.py --api_server_ip <> --api_server_port <> --router_name <> --router_ip <> --router_asn <>

Example: python /opt/contrail/utils/provision_mx.py --api_server_ip 10.10.0.9 --api_server_port 8082 --router_name MX1 --router_ip 10.10.0.100 --router_asn 64512

python /opt/contrail/utils/provision_linklocal.py --admin_user <> --admin_password <> --ipfabric_service_ip <> --api_server_ip <> --linklocal_service_name metadata --linklocal_service_ip 169.254.169.254 --linklocal_service_port 80 --ipfabric_service_port <> --oper add

Example: python /opt/contrail/utils/provision_linklocal.py --admin_user admin --admin_password secret123 --ipfabric_service_ip 10.0.10.8 --api_server_ip 10.10.0.9 --linklocal_service_name metadata --linklocal_service_ip 169.254.169.254 --linklocal_service_port 80 --ipfabric_service_port <> --oper add

python /opt/contrail/utils/provision_encap.py --admin_user <> --admin_password <> --encap_priority MPLSoUDP,MPLSoGRE,VXLAN --oper add

Example: python /opt/contrail/utils/provision_encap.py --admin_user admin --admin_password secret123 --encap_priority MPLSoUDP,MPLSoGRE,VXLAN --oper add

** Please ensure contrail-config services are up and contrail-api is running and ACTIVE.

Once it is verified that all the configurations are updated, reboot the compute node.

3.6 Verification

Once the compute node reboots successfully, check for the following:

lsmod | grep vrouter

The above command should return:

vrouter               217798  1

Also check the /etc/sysconfig/network-scripts/ifcfg-vhost0 file, and verify that the static routes were migrated from the physical interface to vhost0, whether manually or by the fab / puppet / chef based provisioning (see the checks below).
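
For example (illustrative commands):

ip addr show vhost0   # vhost0 should be up, holding the control/data IP
route -n              # static routes should now point out of vhost0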

Check contrail-status from the contrail controller node (a command that can be run on the controller to report process status). All the contrail processes should be in the active state.

Check node status for each process to make sure that all the dependencies are up.

4. Integration touch points

Contrail integration with RDO openstack involves updating the neutron endpoint in keystone, creating the neutron user and role, and assigning the neutron user to the service tenant.

4.1 Keystone endpoint update

keystone user-create --name=neutron --pass=neutron

keystone user-role-add --user=neutron --tenant=service --role=admin

keystone service-create --name=neutron --type=network --description="OpenStack Networking Service"

keystone endpoint-create --service-id 121ad615a4b54bb697bf11a7a30ebd52 --publicurl http://10.102.67.89:9696 --internalurl http://40.1.1.18:9696 --adminurl http://40.1.1.18:9696

Note: The publicURL is the routable IP of the node where neutron-server is configured. The internalURL and adminURL are the private (control / data interface) IP of the node.
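
The service id passed to endpoint-create is printed by the service-create step above; it can also be looked up afterwards, for example:

keystone service-list | awk '/neutron/ {print $2}'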

4.2 Opencontrail Neutron configuration

Update /etc/neutron/neutron.conf with the core plugin set to NeutronPluginContrailCoreV2 and the relevant keystone configuration.
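
A sketch of the relevant setting (the full module path below follows the contrail 2.x plugin packaging; verify it against the installed neutron-plugin-contrail package):

/etc/neutron/neutron.conf
[DEFAULT]
core_plugin = neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.NeutronPluginContrailCoreV2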

4.3 Contrail-API Configuration

On the contrail controller, update the contrail-api configuration to point to keystone. Update /etc/contrail/vnc_api_lib.ini and /etc/contrail/contrail-api.conf OR /etc/contrail/contrail-keystone-auth.conf with the relevant keystone section.

Example:

/etc/contrail/vnc_api_lib.ini
[auth]
AUTHN_TYPE = keystone
AUTHN_PROTOCOL = http
AUTHN_SERVER=40.1.1.50
AUTHN_PORT = 35357
AUTHN_URL = /v2.0/tokens

/etc/contrail/contrail-keystone-auth.conf
[KEYSTONE]
auth_host=40.1.1.50
auth_protocol=http
auth_port=35357
admin_user=admin
admin_password=contrail123
admin_token=48ede9b3a0dc5d1795d3
admin_tenant_name=admin
insecure=False
memcache_servers=127.0.0.1:11211

Note: The above configuration relates to the API service connecting to keystone, using the admin token and credentials of the admin user.

4.4 Nova Configuration

Update /etc/nova/nova.conf on the RDO openstack controller and on the compute node to add the neutron settings with the new neutron server URL, user and password.

Example:

neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = contrail123
neutron_admin_auth_url = http://40.1.1.50:35357/v2.0/
neutron_url = http://40.1.1.50:9696/
neutron_url_timeout = 300
security_group_api = neutron
service_neutron_metadata_proxy = True
neutron_auth_strategy = keystone

Verify the end-to-end setup by creating virtual networks and instances.
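
A minimal sketch using the Icehouse-era CLIs (the image and flavor names are placeholders and depend on what is loaded in glance; replace <demo-net-uuid> with the UUID printed by net-create):

neutron net-create demo-net
neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet
nova boot --image cirros --flavor m1.tiny --nic net-id=<demo-net-uuid> demo-vm
nova list   # the instance should go ACTIVE with an address from demo-subnet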

5. Conclusion

The information provided in this document assumes some level of understanding of CentOS, openstack and contrail. There could be package dependencies in some cases that need to be resolved. Please check the README for the third party packages and components and their version dependencies.
