I finally managed to deploy and run OpenStack Folsom on a single physical server for testing. It was a somewhat painful process, as there are so many moving parts and so many things can go wrong. I was lucky enough to attend a two-day class on OpenStack that really helped [1].
This post demonstrates how to install and configure OpenStack on a single node. At the end you'll be able to set up networking and block storage and create VMs.
As a brief overview of OpenStack, here are all the parts I've used [2]:
Object Store (codenamed "Swift") provides object storage: it lets you store and retrieve files, but not mount directories like a fileserver. I won't be using it in this tutorial; Swift is a beast on its own, so I'll cover it in a separate post.
Image (codenamed "Glance") provides a catalog and repository for virtual disk images. These disk images are most commonly used in OpenStack Compute.
Compute (codenamed "Nova") provides virtual servers on demand, using hypervisors such as KVM, Xen, or LXC.
Identity (codenamed "Keystone") provides authentication and authorization for all the OpenStack services. It also provides a service catalog of services within a particular OpenStack cloud.
Network (codenamed "Quantum") provides "network connectivity as a service" between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them. Quantum has a pluggable architecture to support many popular networking vendors and technologies. One example is Open vSwitch, which I'll use in this setup.
Block Storage (codenamed "Cinder") provides persistent block storage to guest VMs. This project was born from code originally in Nova (the nova-volume service). In the Folsom release, both the nova-volume service and the separate Cinder volume service are available. I'll use LVM volumes exported over iSCSI.
Dashboard (codenamed "Horizon") provides a modular web-based user interface, written in Django, for all the OpenStack services. With this web GUI you can perform operations on your cloud like launching an instance, assigning IP addresses, and setting access controls.
Here's a conceptual diagram for OpenStack Folsom and how all the pieces fit together:
And here's the logical architecture:
For this example deployment I'll be using a single physical Ubuntu 12.04 server with hardware virtualization support (Intel VT-x/AMD-V) enabled in the BIOS.
1. Prerequisites
Make sure you have the correct repository from which to download all OpenStack components:
As root run:
File: gistfile1.sh
------------------
[root@folsom:~]# apt-get install ubuntu-cloud-keyring
[root@folsom:~]# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" >> /etc/apt/sources.list
[root@folsom:~]# apt-get update && apt-get upgrade
[root@folsom:~]# reboot
When the server comes back online execute (replace MY_IP with your IP address):
File: gistfile1.sh
------------------
[root@folsom:~]# useradd -s /bin/bash -m openstack
[root@folsom:~]# echo "%openstack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@folsom:~]# su - openstack
[openstack@folsom:~]$ export MY_IP=10.177.129.121
Preseed the MySQL install
File: gistfile1.sh
------------------
[openstack@folsom:~]$ cat <<EOF | sudo debconf-set-selections
mysql-server-5.5 mysql-server/root_password password notmysql
mysql-server-5.5 mysql-server/root_password_again password notmysql
mysql-server-5.5 mysql-server/start_on_boot boolean true
EOF
Install packages and dependencies
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y rabbitmq-server mysql-server python-mysqldb
Configure MySQL to listen on all interfaces
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
[openstack@folsom:~]$ sudo service mysql restart
Synchronize date
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ntpdate -u ntp.ubuntu.com
2. Install the identity service - Keystone
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y keystone
Create a database for keystone
File: gistfile1.sh
------------------
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE keystone;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'notkeystone';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'notkeystone';"
Configure keystone to use MySQL
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i "s|connection = sqlite:////var/lib/keystone/keystone.db|connection = mysql://keystone:notkeystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf
Restart keystone service
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service keystone restart
Verify keystone service successfully restarted
File: gistfile1.sh
------------------
[openstack@folsom:~]$ pgrep -l keystone
Initialize the database schema
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo -u keystone keystone-manage db_sync
Add the 'keystone admin' credentials to .bashrc
File: gistfile1.sh
------------------
[openstack@folsom:~]$ cat >> ~/.bashrc <<EOF
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://$MY_IP:35357/v2.0
EOF
Use the 'keystone admin' credentials
File: gistfile1.sh
------------------
[openstack@folsom:~]$ source ~/.bashrc
Create new tenants (The services tenant will be used later when configuring services to use keystone)
File: gistfile1.sh
------------------
[openstack@folsom:~]$ TENANT_ID=`keystone tenant-create --name MyProject | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ SERVICE_TENANT_ID=`keystone tenant-create --name Services | awk '/ id / { print $4 }'`
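The backticks-and-awk pattern above captures the tenant's UUID from the CLI's ASCII table output. Here is a minimal sketch of what it does, run against a hypothetical table of the shape the Folsom CLIs print (the UUID is made up):

```shell
# Hypothetical sample of the table the keystone CLI prints
output='+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 0123456789abcdef0123456789abcdef |
|   name   | MyProject                        |
+----------+----------------------------------+'
# awk splits each line on whitespace; on the " id " row the fields are
# "|", "id", "|", "<uuid>", so $4 is the value we want
TENANT_ID=$(echo "$output" | awk '/ id / { print $4 }')
echo "$TENANT_ID"
```

The same trick is used throughout this post for roles, users, services, networks, and ports.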
Create new roles
File: gistfile1.sh
------------------
[openstack@folsom:~]$ MEMBER_ROLE_ID=`keystone role-create --name member | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ ADMIN_ROLE_ID=`keystone role-create --name admin | awk '/ id / { print $4 }'`
Create new users
File: gistfile1.ps1
-------------------
[openstack@folsom:~]$ MEMBER_USER_ID=`keystone user-create --tenant-id $TENANT_ID --name myuser --pass mypassword | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ ADMIN_USER_ID=`keystone user-create --tenant-id $TENANT_ID --name myadmin --pass mypassword | awk '/ id / { print $4 }'`
Grant roles to users
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-role-add --user-id $MEMBER_USER_ID --tenant-id $TENANT_ID --role-id $MEMBER_ROLE_ID
[openstack@folsom:~]$ keystone user-role-add --user-id $ADMIN_USER_ID --tenant-id $TENANT_ID --role-id $ADMIN_ROLE_ID
List the new tenant, users, roles, and role assignments
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone tenant-list
[openstack@folsom:~]$ keystone role-list
[openstack@folsom:~]$ keystone user-list --tenant-id $TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $TENANT_ID --user-id $MEMBER_USER_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $TENANT_ID --user-id $ADMIN_USER_ID
Populate the services in the service catalog
File: gistfile1.sh
------------------
[openstack@folsom:~]$ KEYSTONE_SVC_ID=`keystone service-create --name=keystone --type=identity --description="Keystone Identity Service" | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ GLANCE_SVC_ID=`keystone service-create --name=glance --type=image --description="Glance Image Service" | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ QUANTUM_SVC_ID=`keystone service-create --name=quantum --type=network --description="Quantum Network Service" | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ NOVA_SVC_ID=`keystone service-create --name=nova --type=compute --description="Nova Compute Service" | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ CINDER_SVC_ID=`keystone service-create --name=cinder --type=volume --description="Cinder Volume Service" | awk '/ id / { print $4 }'`
List the new services
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone service-list
Populate the endpoints in the service catalog
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$KEYSTONE_SVC_ID --publicurl=http://$MY_IP:5000/v2.0 --internalurl=http://$MY_IP:5000/v2.0 --adminurl=http://$MY_IP:35357/v2.0
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$GLANCE_SVC_ID --publicurl=http://$MY_IP:9292/v1 --internalurl=http://$MY_IP:9292/v1 --adminurl=http://$MY_IP:9292/v1
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$QUANTUM_SVC_ID --publicurl=http://$MY_IP:9696/ --internalurl=http://$MY_IP:9696/ --adminurl=http://$MY_IP:9696/
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$NOVA_SVC_ID --publicurl="http://$MY_IP:8774/v2/%(tenant_id)s" --internalurl="http://$MY_IP:8774/v2/%(tenant_id)s" --adminurl="http://$MY_IP:8774/v2/%(tenant_id)s"
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$CINDER_SVC_ID --publicurl="http://$MY_IP:8776/v1/%(tenant_id)s" --internalurl="http://$MY_IP:8776/v1/%(tenant_id)s" --adminurl="http://$MY_IP:8776/v1/%(tenant_id)s"
List the new endpoints
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone endpoint-list
Verify identity service is functioning
File: gistfile1.sh
------------------
[openstack@folsom:~]$ curl -d '{"auth": {"tenantName": "MyProject", "passwordCredentials": {"username": "myuser", "password": "mypassword"}}}' -H "Content-type: application/json" http://$MY_IP:5000/v2.0/tokens | python -m json.tool
[openstack@folsom:~]$ curl -d '{"auth": {"tenantName": "MyProject", "passwordCredentials": {"username": "myadmin", "password": "mypassword"}}}' -H "Content-type: application/json" http://$MY_IP:5000/v2.0/tokens | python -m json.tool
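If both calls return a JSON blob containing an access.token section, keystone is working. To script against the response you'd normally use a real JSON parser; as a quick sketch, here is a sed one-liner pulling the token id out of a trimmed-down, hypothetical response (the real one carries many more fields):

```shell
# Hypothetical, heavily trimmed v2.0 token response
response='{"access": {"token": {"id": "abc123tokenid", "expires": "2013-01-01T00:00:00Z"}}}'
# Fragile by design -- fine for a one-off sanity check, nothing more
TOKEN_ID=$(echo "$response" | sed -n 's/.*"id": "\([^"]*\)".*/\1/p')
echo "$TOKEN_ID"
```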
Create the 'user' and 'admin' credentials
File: gistfile1.sh
------------------
[openstack@folsom:~]$ mkdir ~/credentials
[openstack@folsom:~]$ cat >> ~/credentials/user <<EOF
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
export OS_AUTH_URL=http://$MY_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export OS_NO_CACHE=1
EOF
[openstack@folsom:~]$ cat >> ~/credentials/admin <<EOF
export OS_USERNAME=myadmin
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
export OS_AUTH_URL=http://$MY_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export OS_NO_CACHE=1
EOF
Use the 'user' credentials
File: gistfile1.sh
------------------
[openstack@folsom:~]$ source ~/credentials/user
3. Install the image service - Glance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y glance
Create glance service user in the services tenant
File: gistfile1.sh
------------------
[openstack@folsom:~]$ GLANCE_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name glance --pass notglance | awk '/ id / { print $4 }'`
Grant admin role to glance service user
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-role-add --user-id $GLANCE_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID
List the new user and role assignment
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $GLANCE_USER_ID
Create a database for glance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE glance;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'notglance';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'notglance';"
Configure the glance-api service
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/glance/glance.sqlite|sql_connection = mysql://glance:notglance@$MY_IP/glance|g" /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/glance/g' /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notglance/g' /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i 's/#flavor=/flavor = keystone/g' /etc/glance/glance-api.conf
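If you want to see what these sed substitutions do before touching the real config, you can rehearse them on a scratch file (placeholder names copied from the stock Folsom config):

```shell
# Scratch copy with the stock %SERVICE_*% placeholders
cat > /tmp/sample.conf <<EOF
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
EOF
# Same in-place substitutions as above, against the scratch copy
sed -i 's/%SERVICE_TENANT_NAME%/Services/g; s/%SERVICE_USER%/glance/g; s/%SERVICE_PASSWORD%/notglance/g' /tmp/sample.conf
cat /tmp/sample.conf
```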
Configure the glance-registry service
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/glance/glance.sqlite|sql_connection = mysql://glance:notglance@$MY_IP/glance|g" /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/glance/g' /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notglance/g' /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i 's/#flavor=/flavor = keystone/g' /etc/glance/glance-registry.conf
Restart glance services
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service glance-registry restart
[openstack@folsom:~]$ sudo service glance-api restart
Verify glance services successfully restarted
File: gistfile1.sh
------------------
[openstack@folsom:~]$ pgrep -l glance
Initialize the database schema. Ignore the deprecation warning.
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo -u glance glance-manage db_sync
Download some images
File: gistfile1.sh
------------------
[openstack@folsom:~]$ mkdir ~/images
[openstack@folsom:~]$ wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img -O ~/images/cirros-0.3.0-x86_64-disk.img
Register a qcow2 image
File: gistfile1.sh
------------------
[openstack@folsom:~]$ IMAGE_ID_1=`glance image-create --name "cirros-qcow2" --disk-format qcow2 --container-format bare --is-public True --file ~/images/cirros-0.3.0-x86_64-disk.img | awk '/ id / { print $4 }'`
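Glance records an md5 checksum for every image (visible in glance image-show), so you can catch a corrupted upload by comparing it against a local md5sum. A sketch with a throwaway file standing in for a real image:

```shell
# Throwaway file standing in for a real image
echo -n "fake image data" > /tmp/fake.img
local_sum=$(md5sum /tmp/fake.img | awk '{ print $1 }')
echo "$local_sum"
# compare against: glance image-show $IMAGE_ID_1 | awk '/ checksum / { print $4 }'
```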
Verify the images exist in glance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ glance image-list
Examine details of images
File: gistfile1.sh
------------------
[openstack@folsom:~]$ glance image-show $IMAGE_ID_1
4. Install the network service - Quantum
Install dependencies
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y openvswitch-switch
Install the network service
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y quantum-server quantum-plugin-openvswitch
Install the network service agents
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent
Create a database for quantum
File: gistfile1.sh
------------------
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE quantum;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'notquantum';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'notquantum';"
Configure the quantum OVS plugin
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/quantum/ovs.sqlite|sql_connection = mysql://quantum:notquantum@$MY_IP/quantum|g" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@folsom:~]$ sudo sed -i 's/# Default: enable_tunneling = False/enable_tunneling = True/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@folsom:~]$ sudo sed -i 's/# Example: tenant_network_type = gre/tenant_network_type = gre/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@folsom:~]$ sudo sed -i 's/# Example: tunnel_id_ranges = 1:1000/tunnel_id_ranges = 1:1000/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@folsom:~]$ sudo sed -i "s/# Default: local_ip = 10.0.0.3/local_ip = $MY_IP/g" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
Create quantum service user in the services tenant
File: gistfile1.sh
------------------
[openstack@folsom:~]$ QUANTUM_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name quantum --pass notquantum | awk '/ id / { print $4 }'`
Grant admin role to quantum service user
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-role-add --user-id $QUANTUM_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID
List the new user and role assignment
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $QUANTUM_USER_ID
Configure the quantum service to use keystone
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/quantum/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/quantum/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/quantum/g' /etc/quantum/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notquantum/g' /etc/quantum/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/# auth_strategy = keystone/auth_strategy = keystone/g' /etc/quantum/quantum.conf
Configure the L3 agent to use keystone
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i "s|auth_url = http://localhost:35357/v2.0|auth_url = http://$MY_IP:35357/v2.0|g" /etc/quantum/l3_agent.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/quantum/l3_agent.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/quantum/g' /etc/quantum/l3_agent.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notquantum/g' /etc/quantum/l3_agent.ini
Start Open vSwitch
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service openvswitch-switch restart
Create the integration and external bridges
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ovs-vsctl add-br br-int
[openstack@folsom:~]$ sudo ovs-vsctl add-br br-ex
Restart the quantum services
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service quantum-server restart
[openstack@folsom:~]$ sudo service quantum-plugin-openvswitch-agent restart
[openstack@folsom:~]$ sudo service quantum-dhcp-agent restart
[openstack@folsom:~]$ sudo service quantum-l3-agent restart
Create a network and subnet
File: gistfile1.sh
------------------
[openstack@folsom:~]$ PRIVATE_NET_ID=`quantum net-create private | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ PRIVATE_SUBNET1_ID=`quantum subnet-create --name private-subnet1 $PRIVATE_NET_ID 10.0.0.0/29 | awk '/ id / { print $4 }'`
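A /29 is tiny on purpose: 8 addresses total, and after subtracting the network and broadcast addresses, the router gateway, and the DHCP port, only four are left for instances. The arithmetic, if you want to double-check a different prefix:

```shell
prefix=29
total=$(( 1 << (32 - prefix) ))   # all addresses in the block
hosts=$(( total - 2 ))            # minus network and broadcast
instances=$(( hosts - 2 ))        # minus router gateway and DHCP port
echo "total=$total hosts=$hosts instances=$instances"
```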
List network and subnet
File: gistfile1.sh
------------------
[openstack@folsom:~]$ quantum net-list
[openstack@folsom:~]$ quantum subnet-list
Examine details of network and subnet
File: gistfile1.sh
------------------
[openstack@folsom:~]$ quantum net-show $PRIVATE_NET_ID
[openstack@folsom:~]$ quantum subnet-show $PRIVATE_SUBNET1_ID
To add public connectivity to your VMs, perform the following:
Bring up eth1
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ip link set dev eth1 up
Attach eth1 to br-ex
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ovs-vsctl add-port br-ex eth1
[openstack@folsom:~]$ sudo ovs-vsctl show
As the admin user, create a provider-owned network and subnet, with MY_PUBLIC_SUBNET_CIDR set to your public CIDR
File: gistfile1.sh
------------------
[openstack@folsom:~]$ source ~/credentials/admin
[openstack@folsom:~]$ export MY_PUBLIC_SUBNET_CIDR=<your public CIDR>
[openstack@folsom:~]$ echo $MY_PUBLIC_SUBNET_CIDR
[openstack@folsom:~]$ PUBLIC_NET_ID=`quantum net-create public --router:external=True | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ PUBLIC_SUBNET_ID=`quantum subnet-create --name public-subnet $PUBLIC_NET_ID $MY_PUBLIC_SUBNET_CIDR -- --enable_dhcp=False | awk '/ id / { print $4 }'`
Switch back to the 'user' credentials
File: gistfile1.sh
------------------
[openstack@folsom:~]$ source ~/credentials/user
Create a router, attach the private subnet to it, and connect the router to the public network
File: gistfile1.sh
------------------
[openstack@folsom:~]$ ROUTER_ID=`quantum router-create MyRouter | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ quantum router-interface-add $ROUTER_ID $PRIVATE_SUBNET1_ID
[openstack@folsom:~]$ quantum router-gateway-set $ROUTER_ID $PUBLIC_NET_ID
Examine details of the router
File: gistfile1.sh
------------------
[openstack@folsom:~]$ quantum router-show $ROUTER_ID
Get the instance ID for MyFirstInstance (the floating-IP steps below assume an instance booted as shown in section 5)
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova show MyFirstInstance
[openstack@folsom:~]$ INSTANCE_ID=<instance id of your VM>
Find the port id for instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ INSTANCE_PORT_ID=`quantum port-list -f csv -c id -- --device_id=$INSTANCE_ID | awk 'END{print};{gsub(/[\"\r]/,"")}'`
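That awk one-liner is cryptic, so here is what it does on a hypothetical two-line CSV of the shape `quantum port-list -f csv -c id` emits: the gsub rule strips quotes and carriage returns from every line, and END{print} emits only the last line, i.e. the bare port UUID (the UUID below is made up):

```shell
# Hypothetical CSV as printed by `quantum port-list -f csv -c id`
csv='"id"
"3d4a0d14-47ab-4a12-a55c-000000000000"'
# gsub runs per line; END prints the last line after its quotes/CRs are gone
INSTANCE_PORT_ID=$(echo "$csv" | awk 'END{print};{gsub(/["\r]/,"")}')
echo "$INSTANCE_PORT_ID"
```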
Create a floating IP and attach it to instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ quantum floatingip-create --port_id=$INSTANCE_PORT_ID $PUBLIC_NET_ID
5. Install the compute service - Nova
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y nova-api nova-scheduler nova-compute nova-cert nova-consoleauth genisoimage
Create nova service user in the services tenant
File: gistfile1.sh
------------------
[openstack@folsom:~]$ NOVA_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name nova --pass notnova | awk '/ id / { print $4 }'`
Grant admin role to nova service user
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-role-add --user-id $NOVA_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID
List the new user and role assignment
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $NOVA_USER_ID
Create a database for nova
File: gistfile1.sh
------------------
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE nova;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'notnova';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'notnova';"
Configure nova
File: gistfile1.sh
------------------
[openstack@folsom:~]$ cat <<EOF | sudo tee -a /etc/nova/nova.conf
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://$MY_IP:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=Services
quantum_admin_username=quantum
quantum_admin_password=notquantum
quantum_admin_auth_url=http://$MY_IP:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
sql_connection=mysql://nova:notnova@$MY_IP/nova
auth_strategy=keystone
my_ip=$MY_IP
force_config_drive=True
EOF
Disable verbose logging
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i 's/verbose=True/verbose=False/g' /etc/nova/nova.conf
Configure nova to use keystone
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/nova/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/nova/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/nova/g' /etc/nova/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notnova/g' /etc/nova/api-paste.ini
Initialize the nova database
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo -u nova nova-manage db sync
Restart nova services
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service nova-api restart
[openstack@folsom:~]$ sudo service nova-scheduler restart
[openstack@folsom:~]$ sudo service nova-compute restart
[openstack@folsom:~]$ sudo service nova-cert restart
[openstack@folsom:~]$ sudo service nova-consoleauth restart
Verify nova services successfully restarted
File: gistfile1.sh
------------------
[openstack@folsom:~]$ pgrep -l nova
Verify nova services are functioning
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo nova-manage service list
List images
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova image-list
List flavors
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova flavor-list
Boot an instance using flavor and image names (if names are unique)
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova boot --image cirros-qcow2 --flavor m1.tiny MyFirstInstance
Boot an instance using flavor and image IDs
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova boot --image $IMAGE_ID_1 --flavor 1 MySecondInstance
List instances, notice status of instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova list
Show details of instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova show MyFirstInstance
View console log of instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova console-log MyFirstInstance
Get the network namespace (e.g., qdhcp-5ab46e23-118a-4cad-9ca8-51d56a5b6b8c)
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ip netns
[openstack@folsom:~]$ NETNS_ID=qdhcp-$PRIVATE_NET_ID
Ping first instance after status is active
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ping -c 3 10.0.0.3
Log into first instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ssh cirros@10.0.0.3
If you get a 'REMOTE HOST IDENTIFICATION HAS CHANGED' warning from the previous command, remove the stale host key:
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.3
Ping second instance from first instance
File: gistfile1.sh
------------------
[openstack@host1:~]$ ping -c 3 10.0.0.4
Log into second instance from first instance
File: gistfile1.sh
------------------
[openstack@host1:~]$ ssh cirros@10.0.0.4
Log out of second instance
File: gistfile1.sh
------------------
[openstack@host2:~]$ exit
Log out of first instance
File: gistfile1.sh
------------------
[openstack@host1:~]$ exit
Use virsh to talk directly to libvirt
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo virsh list --all
Delete instances
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova delete MyFirstInstance
[openstack@folsom:~]$ nova delete MySecondInstance
List instances, notice status of instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova list
To start a LXC container do the following:
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y nova-compute-lxc lxctl
[openstack@folsom:~]$ echo "compute_driver=libvirt.LibvirtDriver" | sudo tee -a /etc/nova/nova.conf
[openstack@folsom:~]$ echo "libvirt_type=lxc" | sudo tee -a /etc/nova/nova.conf
[openstack@folsom:~]$ sudo cat /etc/nova/nova-compute.conf
[DEFAULT]
libvirt_type=lxc
[openstack@folsom:~]$ sudo service nova-compute restart
Note that 'sudo echo ... >> file' would fail here: the redirection runs as your unprivileged user, hence tee -a.
You need to use a raw image:
File: gistfile1.sh
------------------
[openstack@folsom:~]$ wget http://uec-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz -O images/ubuntu-12.04-server-cloudimg-amd64.tar.gz
[openstack@folsom:~]$ cd images; tar zxfv ubuntu-12.04-server-cloudimg-amd64.tar.gz; cd ..
[openstack@folsom:~]$ glance image-create --name "UbuntuLXC" --disk-format raw --container-format bare --is-public True --file images/precise-server-cloudimg-amd64.img
[openstack@folsom:~]$ glance image-update UbuntuLXC --property hypervisor_type=lxc
Now you can start the LXC container with nova:
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova boot --image UbuntuLXC --flavor m1.tiny LXC
The instance files and rootfs will be located in /var/lib/nova/instances.
Logs go to /var/log/nova/nova-compute.log.
VNC does not work with LXC, but the console and ssh do.
6. Install the dashboard - Horizon
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y memcached novnc
[openstack@folsom:~]$ sudo apt-get install -y --no-install-recommends openstack-dashboard nova-novncproxy
Configure nova for VNC
File: gistfile1.sh
------------------
[openstack@folsom:~]$ ( cat | sudo tee -a /etc/nova/nova.conf ) <<EOF
novncproxy_base_url=http://$MY_IP:6080/vnc_auto.html
vncserver_proxyclient_address=$MY_IP
vncserver_listen=0.0.0.0
EOF
Set default role
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i 's/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"/g' /etc/openstack-dashboard/local_settings.py
Restart the nova services
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service nova-api restart
[openstack@folsom:~]$ sudo service nova-scheduler restart
[openstack@folsom:~]$ sudo service nova-compute restart
[openstack@folsom:~]$ sudo service nova-cert restart
[openstack@folsom:~]$ sudo service nova-consoleauth restart
[openstack@folsom:~]$ sudo service nova-novncproxy restart
[openstack@folsom:~]$ sudo service apache2 restart
Point your browser to http://$MY_IP/horizon.
The credentials we created earlier are myadmin/mypassword.
7. Install the volume service - Cinder
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo apt-get install -y cinder-api cinder-scheduler cinder-volume
Create cinder service user in the services tenant
File: gistfile1.sh
------------------
[openstack@folsom:~]$ CINDER_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name cinder --pass notcinder | awk '/ id / { print $4 }'`
Grant admin role to cinder service user
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-role-add --user-id $CINDER_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID
List the new user and role assignment
File: gistfile1.sh
------------------
[openstack@folsom:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $CINDER_USER_ID
Create a database for cinder
File: gistfile1.sh
------------------
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE cinder;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'notcinder';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'notcinder';"
Configure cinder
File: gistfile1.sh
------------------
[openstack@folsom:~]$ ( cat | sudo tee -a /etc/cinder/cinder.conf ) <<EOF
sql_connection = mysql://cinder:notcinder@$MY_IP/cinder
my_ip = $MY_IP
EOF
Configure cinder-api to use keystone
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo sed -i "s/service_host = 127.0.0.1/service_host = $MY_IP/g" /etc/cinder/api-paste.ini
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/cinder/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/admin_tenant_name = %SERVICE_TENANT_NAME%/admin_tenant_name = Services/g' /etc/cinder/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/admin_user = %SERVICE_USER%/admin_user = cinder/g' /etc/cinder/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/admin_password = %SERVICE_PASSWORD%/admin_password = notcinder/g' /etc/cinder/api-paste.ini
Initialize the database schema
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo -u cinder cinder-manage db sync
Configure nova to use cinder
File: gistfile1.sh
------------------
[openstack@folsom:~]$ ( cat | sudo tee -a /etc/nova/nova.conf ) <<EOF
volume_manager=cinder.volume.manager.VolumeManager
volume_api_class=nova.volume.cinder.API
enabled_apis=osapi_compute,metadata
EOF
Restart nova-api to disable the nova-volume api (osapi_volume)
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service nova-api restart
[openstack@folsom:~]$ sudo service nova-scheduler restart
[openstack@folsom:~]$ sudo service nova-compute restart
[openstack@folsom:~]$ sudo service nova-cert restart
[openstack@folsom:~]$ sudo service nova-consoleauth restart
[openstack@folsom:~]$ sudo service nova-novncproxy restart
Configure tgt
File: gistfile1.sh
------------------
[openstack@folsom:~]$ ( cat | sudo tee -a /etc/tgt/targets.conf ) <<EOF
default-driver iscsi
EOF
Restart tgt and open-iscsi
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service tgt restart
[openstack@folsom:~]$ sudo service open-iscsi restart
Create the volume group
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo pvcreate /dev/sda4
[openstack@folsom:~]$ sudo vgcreate cinder-volumes /dev/sda4
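If you don't have a spare partition like /dev/sda4, a file-backed loop device is a common stand-in for testing (assumption: loop devices are available on your kernel; don't do this in production). Only the unprivileged dd step is shown live; the root-only losetup/pvcreate/vgcreate steps are sketched as comments:

```shell
# Create a 2 GB sparse backing file (allocates no real disk space up front)
dd if=/dev/zero of=/tmp/cinder-volumes.img bs=1M count=0 seek=2048
ls -l /tmp/cinder-volumes.img
# Then, as root:
#   LOOP=$(losetup -f --show /tmp/cinder-volumes.img)
#   pvcreate $LOOP
#   vgcreate cinder-volumes $LOOP
```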
Verify the volume group
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo vgdisplay
Restart the volume services
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo service cinder-volume restart
[openstack@folsom:~]$ sudo service cinder-scheduler restart
[openstack@folsom:~]$ sudo service cinder-api restart
Create a new volume
File: gistfile1.sh
------------------
[openstack@folsom:~]$ cinder create 1 --display-name MyFirstVolume
Boot an instance to attach volume to
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova boot --image cirros-qcow2 --flavor m1.tiny MyVolumeInstance
List instances, notice status of instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova list
List volumes, notice status of volume
File: gistfile1.sh
------------------
[openstack@folsom:~]$ cinder list
Attach volume to instance after instance is active, and volume is available
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova volume-attach <instance-id> <volume-id> /dev/vdc
Log into the instance (check nova list for its IP; 10.0.0.3 is assumed here)
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ssh cirros@10.0.0.3
If you get a 'REMOTE HOST IDENTIFICATION HAS CHANGED' warning from the previous command, remove the stale host key:
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.3
Make filesystem on volume
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo mkfs.ext3 /dev/vdc
Create a mountpoint
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo mkdir /extraspace
Mount the volume at the mountpoint
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo mount /dev/vdc /extraspace
Create a file on the volume
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo touch /extraspace/helloworld.txt
[openstack@folsom:~]$ sudo ls /extraspace
Unmount the volume
File: gistfile1.sh
------------------
[openstack@folsom:~]$ sudo umount /extraspace
Log out of instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ exit
Detach volume from instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova volume-detach <instance-id> <volume-id>
List volumes, notice status of volume
File: gistfile1.sh
------------------
[openstack@folsom:~]$ cinder list
Delete instance
File: gistfile1.sh
------------------
[openstack@folsom:~]$ nova delete MyVolumeInstance
Resources:
[1] http://www.rackspace.com/cloud/private/training/
[2] http://docs.openstack.org/folsom/