In this post we are going to explore a fully automated way of provisioning LXC containers on a set of servers, using OpenStack.
OpenStack is a cloud operating system that allows for the provisioning of virtual machines, LXC containers, load balancers, databases, and storage and network resources in a centralized, yet modular and extensible way. It’s ideal for managing a set of compute resources (servers) and selecting the best candidate target to provision services on, based on criteria such as CPU load, memory utilization and VM/container density, to name just a few.
In this post we are going to:
· Deploy the Keystone identity service, which will provide a central directory of users and services and a simple way to authenticate using tokens.
· Install the Nova compute controller, which will manage a pool of servers and provision LXC containers on them.
· Configure the Glance image repository, which will store the LXC images.
· Provision the Neutron networking service, which will manage DHCP, DNS and the network bridging on the compute hosts.
· Finally, provision an LXC container using the libvirt OpenStack driver.
Deploying OpenStack with LXC support on Ubuntu
An OpenStack deployment may consist of multiple components that interact with each other through exposed APIs, or a message bus like RabbitMQ.
We are going to deploy a minimum set of those components – Keystone, Glance, Nova and Neutron – which will be sufficient to provision LXC containers and still take advantage of the scheduler logic and scalable networking that OpenStack provides.
For this tutorial we are going to be using Ubuntu Xenial and, as of the time of this writing, the latest OpenStack release, Newton.
Preparing the host
To simplify things, we are going to use a single server to host all services. In production environments it’s a common approach to separate each service into its own set of servers for scalability and high availability. By following the steps in this post, you can easily deploy on multiple hosts by replacing the IP addresses and hostnames as needed.
If using multiple servers, you need to make sure the time is synchronized on all hosts by using services like ntpd.
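As a quick sketch, assuming you go with the ntp package from the Ubuntu repositories, installing the daemon and checking that it has picked up upstream servers could look like this:
root@controller:~# apt install ntp
root@controller:~# ntpq -p
...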
Let's begin by ensuring we have the latest packages and adding the repository that contains the Newton OpenStack release:
root@controller:~# apt install software-properties-common
root@controller:~# add-apt-repository cloud-archive:newton
root@controller:~# apt update && apt dist-upgrade
root@controller:~# reboot
root@controller:~# apt install python-openstackclient
Make sure to add the name of the server, “controller” in this example, to /etc/hosts.
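For example, with the IP address used later in this post, the relevant /etc/hosts entry would look similar to the following (replace the address with your server's):
root@controller:~# grep controller /etc/hosts
10.208.130.36 controller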
Installing the database service
The services we are going to deploy all use a database as their back-end store. We are going to use MariaDB for this example. Install it by running:
root@controller:~# apt install mariadb-server python-pymysql
A minimal configuration file should look like the following:
root@controller:~# cat /etc/mysql/mariadb.conf.d/99-openstack.cnf
[mysqld]
bind-address = 10.208.130.36
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
root@controller:~#
Replace the IP address the service binds to with the one configured on your server, then start the service and run the script that will secure the installation:
root@controller:~# service mysql restart
root@controller:~# mysql_secure_installation
The command above will prompt for a new root password. For simplicity, we are going to use “lxcpassword” as the password for all services for the rest of the post.
Installing the message queue service
OpenStack supports the following message queues – RabbitMQ, Qpid and ZeroMQ – which facilitate inter-process communication between services. We are going to use RabbitMQ:
root@controller:~# apt install rabbitmq-server
Add a new user and a password:
root@controller:~# rabbitmqctl add_user openstack lxcpassword
Creating user "openstack" ...
root@controller:~#
And grant permissions for that user:
root@controller:~# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
root@controller:~#
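Optionally, you can confirm that the user exists and that the permissions took effect; both of the following are standard rabbitmqctl commands:
root@controller:~# rabbitmqctl list_users
root@controller:~# rabbitmqctl list_permissions
...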
Installing the caching service
The Keystone identity service caches authentication tokens using Memcached. To install it, execute:
root@controller:~# apt install memcached python-memcache
Replace the localhost address with the IP address of your server:
root@controller:~# sed -i 's/127.0.0.1/10.208.130.36/g' /etc/memcached.conf
The config file should look similar to the following:
root@controller:~# cat /etc/memcached.conf | grep -vi "#" | sed '/^$/d'
-d
logfile /var/log/memcached.log
-m 64
-p 11211
-u memcache
-l 10.208.130.36
root@controller:~# service memcached restart
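To verify that Memcached is listening on the new address, you can query its stats over TCP, for example with netcat (assuming the netcat-openbsd package is installed):
root@controller:~# echo stats | nc -q 2 10.208.130.36 11211
...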
Installing and configuring Identity service
The Keystone identity service provides a centralized point for managing authentication and authorization for the rest of the OpenStack components. Keystone also keeps a catalog of services and the endpoints they provide, which users can locate by querying it.
To deploy Keystone, first create a database and grant permissions to the keystone user:
root@controller:~# mysql -u root -plxcpassword
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.01 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> exit
Bye
root@controller:~#
Next install the identity service components:
root@controller:~# apt install keystone
The following is a minimal working configuration for Keystone:
root@controller:~# cat /etc/keystone/keystone.conf
[DEFAULT]
log_dir = /var/log/keystone
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:lxcpassword@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[profiler]
[resource]
[revoke]
[role]
[saml]
[security_compliance]
[shadow_users]
[signing]
[token]
provider = fernet
[tokenless_auth]
[trust]
[extra_headers]
Distribution = Ubuntu
root@controller:~#
If you are using the same hostname and password as in this tutorial, no changes are required.
Next, populate the Keystone database by running:
root@controller:~# su -s /bin/sh -c "keystone-manage db_sync" keystone
...
root@controller:~#
Keystone uses tokens to authenticate and authorize users and services. There are different token formats available such as UUID, PKI and Fernet tokens. For this example deployment we are going to use the Fernet tokens, which unlike the other types do not need to be persisted in a back end. To initialize the Fernet key repositories run:
root@controller:~# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
root@controller:~# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
root@controller:~#
For more information on the available identity tokens refer to http://docs.openstack.org/admin-guide/identity-tokens.html
Perform the basic bootstrap process by executing:
root@controller:~# keystone-manage bootstrap --bootstrap-password lxcpassword --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:35357/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
root@controller:~#
We are going to use Apache with the WSGI module to serve Keystone. Add the following directive to the Apache config file and restart the service:
root@controller:~# cat /etc/apache2/apache2.conf
...
ServerName controller
...
root@controller:~# service apache2 restart
Delete the default SQLite database that Keystone ships with:
root@controller:~# rm -f /var/lib/keystone/keystone.db
Let's create the administrative account by defining the following environment variables:
root@controller:~# export OS_USERNAME=admin
root@controller:~# export OS_PASSWORD=lxcpassword
root@controller:~# export OS_PROJECT_NAME=admin
root@controller:~# export OS_USER_DOMAIN_NAME=default
root@controller:~# export OS_PROJECT_DOMAIN_NAME=default
root@controller:~# export OS_AUTH_URL=http://controller:35357/v3
root@controller:~# export OS_IDENTITY_API_VERSION=3
Time to create our first project in Keystone. Projects represent a unit of ownership; all resources are owned by a project. The “service” project we are going to create next will be used by all the services we are going to deploy in this post.
root@controller:~# openstack project create --domain default --description "LXC Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | LXC Project |
| domain_id | default |
| enabled | True |
| id | 9a1a863fe41b42b2955b313f2cca0ef0 |
| is_domain | False |
| name | service |
| parent_id | default |
+-------------+----------------------------------+
root@controller:~#
To list the available projects run:
root@controller:~# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 06f4e2d7e384474781803395b24b3af2 | admin |
| 9a1a863fe41b42b2955b313f2cca0ef0 | service |
+----------------------------------+---------+
root@controller:~#
Let's create an unprivileged project and user that can be used by regular users instead of the OpenStack services:
root@controller:~# openstack project create --domain default --description "LXC User Project" lxc
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | LXC User Project |
| domain_id | default |
| enabled | True |
| id | eb9cdc2c2b4e4f098f2d104752970d52 |
| is_domain | False |
| name | lxc |
| parent_id | default |
+-------------+----------------------------------+
root@controller:~#
root@controller:~# openstack user create --domain default --password-prompt lxc
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 1e83e0c8ca194f2e9d8161eb61d21030 |
| name | lxc |
| password_expires_at | None |
+---------------------+----------------------------------+
root@controller:~#
Next, create a user role and associate it with the lxc project and user we created in the previous two steps:
root@controller:~# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 331c0b61e9784112874627264f03a058 |
| name | user |
+-----------+----------------------------------+
root@controller:~# openstack role add --project lxc --user lxc user
root@controller:~#
Use the following file to configure the Web Server Gateway Interface (WSGI) middleware pipeline for Keystone:
root@controller:~# cat /etc/keystone/keystone-paste.ini
# Keystone PasteDeploy configuration file.
[filter:debug]
use = egg:oslo.middleware#debug
[filter:request_id]
use = egg:oslo.middleware#request_id
[filter:build_auth_context]
use = egg:keystone#build_auth_context
[filter:token_auth]
use = egg:keystone#token_auth
[filter:admin_token_auth]
use = egg:keystone#admin_token_auth
[filter:json_body]
use = egg:keystone#json_body
[filter:cors]
use = egg:oslo.middleware#cors
oslo_config_project = keystone
[filter:http_proxy_to_wsgi]
use = egg:oslo.middleware#http_proxy_to_wsgi
[filter:ec2_extension]
use = egg:keystone#ec2_extension
[filter:ec2_extension_v3]
use = egg:keystone#ec2_extension_v3
[filter:s3_extension]
use = egg:keystone#s3_extension
[filter:url_normalize]
use = egg:keystone#url_normalize
[filter:sizelimit]
use = egg:oslo.middleware#sizelimit
[filter:osprofiler]
use = egg:osprofiler#osprofiler
[app:public_service]
use = egg:keystone#public_service
[app:service_v3]
use = egg:keystone#service_v3
[app:admin_service]
use = egg:keystone#admin_service
[pipeline:public_api]
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service
[pipeline:admin_api]
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension s3_extension admin_service
[pipeline:api_v3]
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3
[app:public_version_service]
use = egg:keystone#public_version_service
[app:admin_version_service]
use = egg:keystone#admin_version_service
[pipeline:public_version_api]
pipeline = cors sizelimit osprofiler url_normalize public_version_service
[pipeline:admin_version_api]
pipeline = cors sizelimit osprofiler url_normalize admin_version_service
[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api
[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
root@controller:~#
Let's test the configuration so far by requesting a token for the admin and the lxc users:
root@controller:~# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Password:
...
root@controller:~# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name lxc --os-username lxc token issue
Password:
...
root@controller:~#
We can create two files that will contain the admin and user credentials we configured earlier:
root@controller:~# cat rc.admin
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=lxcpassword
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
root@controller:~#
root@controller:~# cat rc.lxc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=lxc
export OS_USERNAME=lxc
export OS_PASSWORD=lxcpassword
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
root@controller:~#
To use the admin user for example, source the file as follows:
root@controller:~# . rc.admin
Notice the new environment variables:
root@controller:~# env | grep ^OS
OS_USER_DOMAIN_NAME=default
OS_IMAGE_API_VERSION=2
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=lxcpassword
OS_AUTH_URL=http://controller:35357/v3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=default
root@controller:~#
With the admin credentials loaded in the environment, we can request authentication tokens without having to pass the credentials on the command line each time.
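For example, the following standard command returns a token scoped to the admin project (output trimmed):
root@controller:~# openstack token issue
...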
Installing and configuring Image service
The image service provides an API for users to discover, register and obtain images for virtual machines, or images that can be used as the root filesystem for LXC containers. Glance supports multiple storage backends, but for simplicity we are going to use the file store, which keeps the LXC image directly on the file system.
To deploy Glance, first create a database and a user, like we did for Keystone:
root@controller:~# mysql -u root -plxcpassword
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
root@controller:~#
Next, create the glance user and add it to the admin role:
root@controller:~# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | ce29b972845d4d77978b7e7803275d53 |
| name | glance |
| password_expires_at | None |
+---------------------+----------------------------------+
root@controller:~# openstack role add --project service --user glance admin
root@controller:~#
Time to create the Glance service record:
root@controller:~# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 2aa82fc0a0224baab8d259e4f5279907 |
| name | glance |
| type | image |
+-------------+----------------------------------+
root@controller:~#
Create the Glance API endpoints in Keystone:
root@controller:~# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | aa26c33d456d421ca3555e6523c7814f |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2aa82fc0a0224baab8d259e4f5279907 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
root@controller:~#
OpenStack supports multi-region deployments for achieving high availability. For simplicity however, we are going to deploy all services in the same region.
root@controller:~# openstack endpoint create --region RegionOne image internal http://controller:9292
...
root@controller:~# openstack endpoint create --region RegionOne image admin http://controller:9292
...
root@controller:~#
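To confirm that the public, internal and admin endpoints were all registered, you can list what is now in the catalog:
root@controller:~# openstack endpoint list
...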
Now that Keystone knows about the Glance service, let's install it:
root@controller:~# apt install glance
Use the following two minimal configuration files, replacing the password and hostname as needed:
root@controller:~# cat /etc/glance/glance-api.conf
[DEFAULT]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:lxcpassword@controller/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,root-tar
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = lxcpassword
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
root@controller:~#
root@controller:~# cat /etc/glance/glance-registry.conf
[DEFAULT]
[database]
connection = mysql+pymysql://glance:lxcpassword@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = lxcpassword
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
root@controller:~#
Populate the Glance database by running:
root@controller:~# su -s /bin/sh -c "glance-manage db_sync" glance
...
root@controller:~#
Start the Glance services:
root@controller:~# service glance-registry restart
root@controller:~# service glance-api restart
root@controller:~#
We can build an image for the LXC containers by hand, or download a pre-built image from an Ubuntu repository. Let's download an image and extract it:
root@controller:~# wget http://uec-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz
root@controller:~# tar zxfv ubuntu-12.04-server-cloudimg-amd64.tar.gz
precise-server-cloudimg-amd64.img
precise-server-cloudimg-amd64-vmlinuz-virtual
precise-server-cloudimg-amd64-loader
precise-server-cloudimg-amd64-floppy
README.files
root@controller:~#
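Before uploading, you can optionally inspect the extracted file and confirm it is a raw image; this assumes the qemu-utils package is installed:
root@controller:~# qemu-img info precise-server-cloudimg-amd64.img
...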
The file that contains the rootfs has the .img extension. Let's add it to the Image service:
root@controller:~# openstack image create "lxc_ubuntu_12.04" --file precise-server-cloudimg-amd64.img --disk-format raw --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | b5e5895e85127d9cebbd2de32d9b193c |
| container_format | bare |
| created_at | 2016-12-02T19:01:28Z |
| disk_format | raw |
| file | /v2/images/f344646d-d293-4638-bab4-86d461f38233/file |
| id | f344646d-d293-4638-bab4-86d461f38233 |
| min_disk | 0 |
| min_ram | 0 |
| name | lxc_ubuntu_12.04 |
| owner | 06f4e2d7e384474781803395b24b3af2 |
| protected | False |
| schema | /v2/schemas/image |
| size | 1476395008 |
| status | active |
| tags | |
| updated_at | 2016-12-02T19:01:41Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
root@controller:~#
Please note that LXC uses the “raw” disk format and the “bare” container format.
The image is now stored at the location defined by the filesystem_store_datadir parameter in glance-api.conf, as we saw in the configuration example above:
root@controller:~# ls -la /var/lib/glance/images/
total 1441804
drwxr-xr-x 2 glance glance 4096 Dec 2 19:01 .
drwxr-xr-x 4 glance glance 4096 Dec 2 18:53 ..
-rw-r----- 1 glance glance 1476395008 Dec 2 19:01 f344646d-d293-4638-bab4-86d461f38233
root@controller:~#
Let's list the available images in Glance:
root@controller:~# openstack image list
+--------------------------------------+------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------+--------+
| f344646d-d293-4638-bab4-86d461f38233 | lxc_ubuntu_12.04 | active |
+--------------------------------------+------------------+--------+
root@controller:~#
Installing and configuring Compute service
The OpenStack Compute service manages a pool of compute resources (servers) and the virtual machines or containers running on them. It provides a scheduler service that takes requests for a new VM or container from the queue and decides on which compute host to create and start it.
For more information on the various Nova services, refer to: http://docs.openstack.org/developer/nova/
Let's begin by creating the nova database and user:
root@controller:~# mysql -u root -plxcpassword
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.03 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
root@controller:~#
Once the database is created and the user permissions granted, create the nova user and add it to the admin role in the Identity service:
root@controller:~# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | a1305c903548431a80b608fadf78f287 |
| name | nova |
| password_expires_at | None |
+---------------------+----------------------------------+
root@controller:~# openstack role add --project service --user nova admin
root@controller:~#
Next, create the nova service and endpoints:
root@controller:~# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 779d2feb591545cf9d2acc18765a0ca5 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
root@controller:~# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | cccfd4d817a24f9ba58128901cbbb473 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 779d2feb591545cf9d2acc18765a0ca5 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
root@controller:~# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
...
root@controller:~# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
...
root@controller:~#
Time to install the nova packages that will provide the API, the conductor, the console and the scheduler services:
root@controller:~# apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler
The Nova packages we just installed provide the following services:
· The nova-api service accepts and responds to user requests through a RESTful API. We use that for creating, running, stopping instances, etc.
· The nova-conductor service sits between the nova database we created earlier and the nova-compute service, which runs on the compute nodes and creates the VMs and containers. We are going to install that service later in this post.
· The nova-consoleauth service authorizes tokens for users that want to use various consoles to connect to the VMs or containers.
· The nova-novncproxy service provides a proxy for accessing the consoles of running instances through VNC.
· The nova-scheduler service, as mentioned earlier, decides on which compute host to provision a VM or LXC container.
The following is a minimal functioning Nova configuration:
root@controller:~# cat /etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
log-dir=/var/log/nova
state_path=/var/lib/nova
force_dhcp_release=True
verbose=True
ec2_private_dns_show_ip=True
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:lxcpassword@controller
auth_strategy = keystone
my_ip = 10.208.132.45
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[database]
connection = mysql+pymysql://nova:lxcpassword@controller/nova
[api_database]
connection = mysql+pymysql://nova:lxcpassword@controller/nova_api
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[libvirt]
use_virtio_for_bridges=True
[wsgi]
api_paste_config=/etc/nova/api-paste.ini
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lxcpassword
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
root@controller:~#
With the config file in place we can now populate the Nova database:
root@controller:~# su -s /bin/sh -c "nova-manage api_db sync" nova
...
root@controller:~# su -s /bin/sh -c "nova-manage db sync" nova
...
root@controller:~#
And finally, start the Compute services:
root@controller:~# service nova-api restart
root@controller:~# service nova-consoleauth restart
root@controller:~# service nova-scheduler restart
root@controller:~# service nova-conductor restart
root@controller:~# service nova-novncproxy restart
root@controller:~#
Since we are going to use a single node for this OpenStack deployment, we need to install the nova-compute service on it as well. In production, there is usually a pool of compute servers that run only that service.
root@controller:~# apt install nova-compute
Use the following minimal configuration file that will allow running nova-compute and the rest of the nova services on the same server:
root@controller:~# cat /etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
log-dir=/var/log/nova
state_path=/var/lib/nova
force_dhcp_release=True
verbose=True
ec2_private_dns_show_ip=True
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:lxcpassword@controller
auth_strategy = keystone
my_ip = 10.208.132.45
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
[database]
connection = mysql+pymysql://nova:lxcpassword@controller/nova
[api_database]
connection = mysql+pymysql://nova:lxcpassword@controller/nova_api
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[libvirt]
use_virtio_for_bridges=True
[wsgi]
api_paste_config=/etc/nova/api-paste.ini
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lxcpassword
[vnc]
enabled = True
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[libvirt]
virt_type = lxc
root@controller:~#
Notice how, under the [libvirt] section, we specify LXC as the virtualization type we are going to use. To enable LXC support in Nova, install the following package:
root@controller:~# apt install nova-compute-lxc
The package provides the following configuration file:
root@controller:~# cat /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
[libvirt]
virt_type=lxc
root@controller:~#
Restart the nova-compute service and list all available Nova services:
root@controller:~# service nova-compute restart
root@controller:~# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 4 | nova-consoleauth | controller | internal | enabled | up | 2016-12-02T20:01:19.000000 |
| 5 | nova-scheduler | controller | internal | enabled | up | 2016-12-02T20:01:25.000000 |
| 6 | nova-conductor | controller | internal | enabled | up | 2016-12-02T20:01:26.000000 |
| 8 | nova-compute | controller | nova | enabled | up | 2016-12-02T20:01:22.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
root@controller:~#
With all the Nova services configured and running, it's time to move on to the networking part of the deployment.
Installing and configuring Networking service
The networking component of OpenStack, codenamed Neutron, manages networks, IP addresses, software bridging and routing. In the previous posts we had to create the Linux Bridge, add ports to it, configure DHCP to assign IPs to the containers, etc. Neutron exposes all of these functionalities through a convenient API and libraries that we can use.
Let's start by creating the database, user and permissions:
root@controller:~# mysql -u root -plxcpassword
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'lxcpassword';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
root@controller:~#
Next, create the neutron user and add it to the admin role in Keystone:
root@controller:~# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 68867f6864574592b1a29ec293defb5d |
| name | neutron |
| password_expires_at | None |
+---------------------+----------------------------------+
root@controller:~# openstack role add --project service --user neutron admin
root@controller:~#
Create the neutron service and endpoints:
root@controller:~# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 8bd98d58bfb5410694cbf7b6163a71a5 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
root@controller:~# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 4e2c1a8689b146a7b3b4207c63a778da |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8bd98d58bfb5410694cbf7b6163a71a5 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
root@controller:~# openstack endpoint create --region RegionOne network internal http://controller:9696
...
root@controller:~# openstack endpoint create --region RegionOne network admin http://controller:9696
...
root@controller:~#
With all the services and endpoints defined in the Identity service, install the following packages:
root@controller:~# apt install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
The Neutron packages that we installed above provide the following services:
· The neutron-server provides an API to dynamically request and configure virtual networks.
· The neutron-plugin-ml2 is a framework that enables the use of various network technologies such as the Linux Bridge, Open vSwitch, GRE and VXLAN.
· The neutron-linuxbridge-agent provides the Linux bridge plugin agent.
· The neutron-l3-agent performs forwarding and NAT functionality between software defined networks, by creating virtual routers.
· The neutron-dhcp-agent controls the DHCP service that assigns IP addresses to the instances running on the compute nodes.
· The neutron-metadata-agent is a service that provides configuration metadata to the instances, by proxying their metadata requests to Nova.
The following is a minimal working configuration file for Neutron:
root@controller:~# cat /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:lxcpassword@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://neutron:lxcpassword@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = lxcpassword
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = lxcpassword
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[qos]
[quotas]
[ssl]
root@controller:~#
We need to define what network extension we are going to support and the type of network. All this information is going to be used when creating the LXC container and its configuration file, as we’ll see later:
root@controller:~# cat /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True
root@controller:~#
Define the interface that will be added to the software bridge and the IP the bridge will be bound to. In this case we are using the eth1 interface and its IP address:
root@controller:~# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth1
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = True
local_ip = 10.208.132.45
l2_population = True
root@controller:~#
We specify the bridge driver for the L3 agent as follows:
root@controller:~# cat /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
[AGENT]
root@controller:~#
The configuration file for the DHCP agent should look similar to this:
root@controller:~# cat /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[AGENT]
root@controller:~#
And finally, the configuration for the metadata agent is as follows:
root@controller:~# cat /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = lxcpassword
[AGENT]
[cache]
root@controller:~#
We need to update the configuration file for the Nova services. The new complete file should look like this; replace the IP address as needed:
root@controller:~# cat /etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
log-dir=/var/log/nova
state_path=/var/lib/nova
force_dhcp_release=True
verbose=True
ec2_private_dns_show_ip=True
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:lxcpassword@controller
auth_strategy = keystone
my_ip = 10.208.132.45
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
[database]
connection = mysql+pymysql://nova:lxcpassword@controller/nova
[api_database]
connection = mysql+pymysql://nova:lxcpassword@controller/nova_api
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[libvirt]
use_virtio_for_bridges=True
[wsgi]
api_paste_config=/etc/nova/api-paste.ini
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lxcpassword
[vnc]
enabled = True
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[libvirt]
virt_type = lxc
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = lxcpassword
service_metadata_proxy = True
metadata_proxy_shared_secret = lxcpassword
root@controller:~#
Populate the Neutron database:
root@controller:~# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
...
OK
root@controller:~#
Finally, start all Networking services and restart nova-compute:
root@controller:~# service nova-api restart
root@controller:~# service neutron-server restart
root@controller:~# service neutron-linuxbridge-agent restart
root@controller:~# service neutron-dhcp-agent restart
root@controller:~# service neutron-metadata-agent restart
root@controller:~# service neutron-l3-agent restart
root@controller:~# service nova-compute restart
Let's verify that the Neutron agents are up and running:
root@controller:~# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 2a715a0d-5593-4aba-966e-6ae3b2e02ba2 | L3 agent | controller | nova | True | UP | neutron-l3-agent |
| 2ce176fb-dc2e-4416-bb47-1ae44e1f556f | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent |
| 42067496-eaa3-42ef-bff9-bbbbcbf2e15a | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent |
| fad2b9bb-8ee7-468e-b69a-43129338cbaa | Metadata agent | controller | None | True | UP | neutron-metadata-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
root@controller:~#
Defining the LXC instance flavor, generating a key pair and creating security groups
Before we can create an LXC instance, we need to define its flavor – CPU, memory and disk size. The following creates a flavor named lxc.medium with 1 virtual CPU, 1024 MB of RAM and a 5000 GB disk (the --disk value is specified in GB):
root@controller:~# openstack flavor create --id 0 --vcpus 1 --ram 1024 --disk 5000 lxc.medium
+----------------------------+------------+
| Field | Value |
+----------------------------+------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 5000 |
| id | 0 |
| name | lxc.medium |
| os-flavor-access:is_public | True |
| properties | |
| ram | 1024 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+------------+
root@controller:~#
In order to SSH to the LXC containers, we can have the SSH keys managed and installed during instance provisioning if we don't want them baked into the actual image. To generate an SSH key pair and add it to OpenStack, run:
root@controller:~# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
root@controller:~# openstack keypair create --public-key ~/.ssh/id_rsa.pub lxckey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 84:36:93:cc:2f:f0:f7:ba:d5:73:54:ca:2e:f0:02:6d |
| name | lxckey |
| user_id | 3a04d141c07541478ede7ea34f3e5c36 |
+-------------+-------------------------------------------------+
root@controller:~#
To list the new key pair we just added execute:
root@controller:~# openstack keypair list
+--------+-------------------------------------------------+
| Name | Fingerprint |
+--------+-------------------------------------------------+
| lxckey | 84:36:93:cc:2f:f0:f7:ba:d5:73:54:ca:2e:f0:02:6d |
+--------+-------------------------------------------------+
root@controller:~#
By default, once a new LXC container is provisioned, iptables will disallow access to it. Let's add two rules to the default security group that will allow ICMP and SSH, so we can test connectivity and connect to the instance:
root@controller:~# openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2016-12-02T20:30:14Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 0e17e0ab-4495-440a-b8b9-0a612f9eccae |
| port_range_max | None |
| port_range_min | None |
| project_id | 488aecf07dcb4ae6bc1ebad5b76fbc20 |
| project_id | 488aecf07dcb4ae6bc1ebad5b76fbc20 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | f21f3d3c-27fe-4668-bca4-6fc842dcb690 |
| updated_at | 2016-12-02T20:30:14Z |
+-------------------+--------------------------------------+
root@controller:~#
root@controller:~# openstack security group rule create --proto tcp --dst-port 22 default
...
root@controller:~#
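To review the rules that are now part of the default security group, you can list them:
root@controller:~# openstack security group rule list default
...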
Creating the networks
Let's start by creating a new network in Neutron called “nat”:
root@controller:~# openstack network create nat
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-12-02T20:32:53Z |
| description | |
| headers | |
| id | 66037974-d24b-4615-8b93-b0de18a4561b |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1450 |
| name | nat |
| port_security_enabled | True |
| project_id | 488aecf07dcb4ae6bc1ebad5b76fbc20 |
| project_id | 488aecf07dcb4ae6bc1ebad5b76fbc20 |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 53 |
| revision_number | 3 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2016-12-02T20:32:53Z |
+---------------------------+--------------------------------------+
root@controller:~#
Next, define the DNS server, the default gateway and the subnet range that will be assigned to the LXC container:
root@controller:~# openstack subnet create --network nat --dns-nameserver 8.8.8.8 --gateway 192.168.0.1 --subnet-range 192.168.0.0/24 nat
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.0.2-192.168.0.254 |
| cidr | 192.168.0.0/24 |
| created_at | 2016-12-02T20:36:14Z |
| description | |
| dns_nameservers | 8.8.8.8 |
| enable_dhcp | True |
| gateway_ip | 192.168.0.1 |
| headers | |
| host_routes | |
| id | 0e65fa94-be69-4690-b3fe-406ea321dfb3 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | nat |
| network_id | 66037974-d24b-4615-8b93-b0de18a4561b |
| project_id | 488aecf07dcb4ae6bc1ebad5b76fbc20 |
| project_id | 488aecf07dcb4ae6bc1ebad5b76fbc20 |
| revision_number | 2 |
| service_types | [] |
| subnetpool_id | None |
| updated_at | 2016-12-02T20:36:14Z |
+-------------------+--------------------------------------+
root@controller:~#
Mark the nat network as external in Neutron:
root@controller:~# neutron net-update nat --router:external
Updated network: nat
root@controller:~#
As the lxc user, create a new software router:
root@controller:~# . rc.lxc
root@controller:~# openstack router create router
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-12-02T20:38:08Z |
| description | |
| external_gateway_info | null |
| flavor_id | None |
| headers | |
| id | 45557fac-f158-40ef-aeec-496de913d5a5 |
| name | router |
| project_id | b0cc5ccc12eb4d6b98aadd784540f575 |
| project_id | b0cc5ccc12eb4d6b98aadd784540f575 |
| revision_number | 2 |
| routes | |
| status | ACTIVE |
| updated_at | 2016-12-02T20:38:08Z |
+-------------------------+--------------------------------------+
root@controller:~#
As the admin user, add the subnet we created earlier as an interface to the router:
root@controller:~# . rc.admin
root@controller:~# neutron router-interface-add router nat
Added interface be0f1e65-f086-41fd-b8d0-45ebb865bf0f to router router.
root@controller:~#
Let's list the network namespaces that were created:
root@controller:~# ip netns
qrouter-45557fac-f158-40ef-aeec-496de913d5a5 (id: 1)
qdhcp-66037974-d24b-4615-8b93-b0de18a4561b (id: 0)
root@controller:~#
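Each namespace has its own interfaces and routing table; you can inspect one with the ip netns exec command, for example the DHCP namespace created above:
root@controller:~# ip netns exec qdhcp-66037974-d24b-4615-8b93-b0de18a4561b ip addr show
...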
To show the ports on the software router and the default gateway for the LXC containers, run:
root@controller:~# neutron router-port-list router
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| be0f1e65-f086-41fd-b8d0-45ebb865bf0f | | fa:16:3e:a6:36:7c | {"subnet_id": "0e65fa94-be69-4690-b3fe-406ea321dfb3", "ip_address": "192.168.0.1"} |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
root@controller:~#
Provisioning the LXC container with OpenStack
Before we launch our LXC container with OpenStack, let's double-check that we have all the requirements in place.
Start by listing the available networks:
root@controller:~# openstack network list
+--------------------------------------+------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+------+--------------------------------------+
| 66037974-d24b-4615-8b93-b0de18a4561b | nat | 0e65fa94-be69-4690-b3fe-406ea321dfb3 |
+--------------------------------------+------+--------------------------------------+
root@controller:~#
Display the compute flavors we can choose from:
root@controller:~# openstack flavor list
+----+------------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+------------+------+------+-----------+-------+-----------+
| 0 | lxc.medium | 1024 | 5000 | 0 | 1 | True |
+----+------------+------+------+-----------+-------+-----------+
root@controller:~#
Next, list the available images:
root@controller:~# openstack image list
+--------------------------------------+------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------+--------+
| 417e72b5-7b85-4555-835d-ce442e21aa4f | lxc_ubuntu_12.04 | active |
+--------------------------------------+------------------+--------+
root@controller:~#
And display the default security group we created earlier:
root@controller:~# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| f21f3d3c-27fe-4668-bca4-6fc842dcb690 | default | Default security group | 488aecf07dcb4ae6bc1ebad5b76fbc20 |
+--------------------------------------+---------+------------------------+----------------------------------+
root@controller:~#
Time to load the Network Block Device kernel module, as Nova expects it:
root@controller:~# modprobe nbd
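You can verify that the module is loaded, and optionally make it persistent across reboots by adding it to /etc/modules:
root@controller:~# lsmod | grep nbd
...
root@controller:~# echo nbd >> /etc/modules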
Finally, to provision an LXC container with OpenStack, we execute:
root@controller:~# openstack server create --flavor lxc.medium --image lxc_ubuntu_12.04 --nic net-id=66037974-d24b-4615-8b93-b0de18a4561b --security-group default --key-name lxckey lxc_instance
...
root@controller:~#
Notice how we specified the instance flavor, the image name, the ID of the network, the security group, the key pair name and the name of the instance. Make sure to replace the IDs with those returned on your system.
To list the LXC container, its status and assigned IP address, run:
root@controller:~# openstack server list
+--------------------------------------+--------------+--------+-----------------+------------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+--------------+--------+-----------------+------------------+
| a86f8f56-80d7-4d36-86c3-827679f21ec5 | lxc_instance | ACTIVE | nat=192.168.0.3 | lxc_ubuntu_12.04 |
+--------------------------------------+--------------+--------+-----------------+------------------+
root@controller:~#
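If you need more detail about the instance, such as the compute host it was scheduled on, you can query it directly:
root@controller:~# openstack server show lxc_instance
...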
As we saw earlier in the post, OpenStack uses the libvirt driver to provision LXC containers. We can use the virsh command to list the LXC containers on the host:
root@controller:~# virsh --connect lxc:// list --all
Id Name State
----------------------------------------------------
16225 instance-00000002 running
root@controller:~#
If we list the processes on the host, we can see that the libvirt_lxc parent process has spawned the init process for the container:
root@controller:~# ps axfw
...
16225 ? S 0:00 /usr/lib/libvirt/libvirt_lxc --name instance-00000002 --console 23 --security=apparmor --handshake 26 --veth vnet0
16227 ? Ss 0:00 \_ /sbin/init
16591 ? S 0:00 \_ upstart-socket-bridge --daemon
16744 ? Ss 0:00 \_ dhclient3 -e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -1 eth0
17692 ? S 0:00 \_ upstart-udev-bridge --daemon
17696 ? Ss 0:00 \_ /sbin/udevd --daemon
17819 ? S 0:00 | \_ /sbin/udevd --daemon
18108 ? Ss 0:00 \_ /usr/sbin/sshd -D
18116 ? Ss 0:00 \_ dbus-daemon --system --fork --activation=upstart
18183 ? Ss 0:00 \_ cron
18184 ? Ss 0:00 \_ atd
18189 ? Ss 0:00 \_ /usr/sbin/irqbalance
18193 ? Ss 0:00 \_ acpid -c /etc/acpi/events -s /var/run/acpid.socket
18229 pts/0 Ss+ 0:00 \_ /sbin/getty -8 38400 tty1
18230 ? Ssl 0:00 \_ whoopsie
18317 ? Sl 0:00 \_ rsyslogd -c5
19185 ? Ss 0:00 \_ /sbin/getty -8 38400 tty4
19186 ? Ss 0:00 \_ /sbin/getty -8 38400 tty2
19187 ? Ss 0:00 \_ /sbin/getty -8 38400 tty3
root@controller:~#
The container's configuration file and disk are located at:
root@controller:~# cd /var/lib/nova/instances
root@controller:/var/lib/nova/instances# ls -la a86f8f56-80d7-4d36-86c3-827679f21ec5/
total 12712
drwxr-xr-x 3 nova nova 4096 Dec 2 20:52 .
drwxr-xr-x 5 nova nova 4096 Dec 2 20:52 ..
-rw-r--r-- 1 nova nova 0 Dec 2 20:52 console.log
-rw-r--r-- 1 nova nova 13041664 Dec 2 20:57 disk
-rw-r--r-- 1 nova nova 79 Dec 2 20:52 disk.info
-rw-r--r-- 1 nova nova 1534 Dec 2 20:52 libvirt.xml
drwxr-xr-x 2 nova nova 4096 Dec 2 20:52 rootfs
root@controller:/var/lib/nova/instances#
Let's examine the container's configuration file:
root@controller:/var/lib/nova/instances# cat a86f8f56-80d7-4d36-86c3-827679f21ec5/libvirt.xml
<domain type="lxc">
<uuid>a86f8f56-80d7-4d36-86c3-827679f21ec5</uuid>
<name>instance-00000002</name>
<memory>1048576</memory>
<vcpu>1</vcpu>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="14.0.1">
<nova:name>lxc_instance</nova:name>
<nova:creationtime>2016-12-02 20:52:52</nova:creationtime>
<nova:flavor name="lxc.medium">
<nova:memory>1024</nova:memory>
<nova:disk>5000</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>1</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="3a04d141c07541478ede7ea34f3e5c36">admin</nova:user>
<nova:project uuid="488aecf07dcb4ae6bc1ebad5b76fbc20">admin</nova:project>
</nova:owner>
<nova:root type="image" uuid="417e72b5-7b85-4555-835d-ce442e21aa4f">
</nova:root></nova:package></nova:instance>
</metadata>
<os>
<type>exe</type>
<cmdline>console=tty0 console=ttyS0 console=ttyAMA0</cmdline>
<init>/sbin/init</init>
</os>
<cputune>
<shares>1024</shares>
</cputune>
<clock offset="utc">
<devices>
<filesystem type="mount">
<source dir="/var/lib/nova/instances/a86f8f56-80d7-4d36-86c3-827679f21ec5/rootfs"></source>
<target dir="/">
</target></filesystem>
<interface type="bridge">
<mac address="fa:16:3e:4f:e5:b5">
<source bridge="brq66037974-d2"></source>
<target dev="tapf2073410-64">
</target></mac></interface>
<console type="pty">
</console></devices>
</clock></domain>
root@controller:/var/lib/nova/instances#
With the networking managed by Neutron, we should see the bridge and the container's interface added as a port:
root@controller:/var/lib/nova/instances# brctl show
bridge name bridge id STP enabled interfaces
brq66037974-d2 8000.02d65d01c617 no tap4e3afc26-88
tapbe0f1e65-f0
tapf2073410-64
vxlan-53
virbr0 8000.5254004e7712 yes virbr0-nic
root@controller:/var/lib/nova/instances#
Let's configure an IP address on the bridge interface and allow NAT connectivity to the container:
root@controller:/var/lib/nova/instances# ifconfig brq66037974-d2 192.168.0.1
root@controller:~# iptables -A POSTROUTING -t nat -s 192.168.0.0/24 ! -d 192.168.0.0/24 -j MASQUERADE
root@controller:~#
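For the MASQUERADE rule to have any effect, IP forwarding must be enabled on the host; if it is not already, it can be switched on with sysctl:
root@controller:~# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1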
To connect to the LXC container using SSH and the key pair we generated earlier, execute:
root@controller:/var/lib/nova/instances# ssh ubuntu@192.168.0.3
Welcome to Ubuntu 12.04.5 LTS (GNU/Linux 4.4.0-51-generic x86_64)
ubuntu@lxc-instance:~$ ifconfig
eth0 Link encap:Ethernet HWaddr fa:16:3e:4f:e5:b5
inet addr:192.168.0.3 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe4f:e5b5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:290 errors:0 dropped:0 overruns:0 frame:0
TX packets:340 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:35038 (35.0 KB) TX bytes:36830 (36.8 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ubuntu@lxc-instance:~$ exit
logout
Connection to 192.168.0.3 closed.
root@controller:/var/lib/nova/instances# cd
root@controller:~#
Finally, to delete the LXC container using OpenStack run:
root@controller:~# openstack server delete lxc_instance
root@controller:~# openstack server list
root@controller:~#