OpenStack Grizzly Install Guide

Table of Contents
0. What is it?
1. Requirements
2. Controller Node
3. Network Node
4. Compute Node
5. Your first VM
6. Licensing
7. Contacts
8. Credits
9. To do

0. What is it?

OpenStack Grizzly Install Guide is an easy and tested way to create your own OpenStack platform.
If you like it, don't forget to star it!
Status: Stable


1. Requirements

Node Role: NICs
Control Node: eth0 (10.10.10.51), eth1 (192.168.100.51)
Network Node: eth0 (10.10.10.52), eth1 (10.20.20.52), eth2 (192.168.100.52)
Compute Node: eth0 (10.10.10.53), eth1 (10.20.20.53)
Note 1: Always use dpkg -s <packagename> to make sure you are using Grizzly packages (version: 2013.1).
Note 2: This is my current network architecture; you can add as many compute nodes as you wish.
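  • For example, to check that the Keystone package on a node really comes from the Grizzly cloud archive (a quick sanity check; substitute any other package name):
    dpkg -s keystone | grep Version
    #The version string should contain 2013.1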


2. Controller Node

2.1. Preparing Ubuntu

  • After you install Ubuntu 12.04 or 13.04 Server 64-bit, switch to sudo mode and stay in it until the end of this guide:
    sudo su
  • Add Grizzly repositories [Only for Ubuntu 12.04]:
    apt-get install -y ubuntu-cloud-keyring
    echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list
  • Update your system:
    apt-get update -y
    apt-get upgrade -y
    apt-get dist-upgrade -y


2.2. Networking

  • Only one NIC should have internet access. Configure /etc/network/interfaces as follows:
    #For Exposing OpenStack API over the internet
    auto eth1
    iface eth1 inet static
    address 192.168.100.51
    netmask 255.255.255.0
    gateway 192.168.100.1
    dns-nameservers 8.8.8.8
    #Not internet connected (used for OpenStack management)
    auto eth0
    iface eth0 inet static
    address 10.10.10.51
    netmask 255.255.255.0
  • Restart the networking service:
    service networking restart


2.3. MySQL

  • Install MySQL:
    apt-get install -y mysql-server python-mysqldb
  • Configure mysql to accept all incoming requests:
    sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
    service mysql restart
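  • To confirm MySQL now listens on all interfaces and not only on 127.0.0.1, a quick check (assuming the default MySQL port 3306) is:
    netstat -ntlp | grep 3306
    #Expect a LISTEN entry on 0.0.0.0:3306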


2.4. RabbitMQ

  • Install RabbitMQ:
    apt-get install -y rabbitmq-server
  • Install NTP service:
    apt-get install -y ntp
  • Create these databases:
    mysql -u root -p
    #Keystone
    CREATE DATABASE keystone;
    GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
    #Glance
    CREATE DATABASE glance;
    GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
    #Quantum
    CREATE DATABASE quantum;
    GRANT ALL ON quantum.* TO 'quantumUser'@'%' IDENTIFIED BY 'quantumPass';
    #Nova
    CREATE DATABASE nova;
    GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
    #Cinder
    CREATE DATABASE cinder;
    GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
    quit;
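  • To make sure the grants above work over the management address, you can try connecting as one of the new users (the passwords are the ones used in the GRANT statements; repeat for the other users if you wish):
    mysql -u keystoneUser -pkeystonePass -h 10.10.10.51 -e "SHOW DATABASES;"
    mysql -u novaUser -pnovaPass -h 10.10.10.51 -e "SHOW DATABASES;"
    #Each command should list the corresponding database without an access-denied error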


2.5. Others

  • Install other services:
    apt-get install -y vlan bridge-utils
  • Enable IP_Forwarding:
    sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
    # To save you from rebooting, perform the following
    sysctl net.ipv4.ip_forward=1


2.6. Keystone

  • Start by installing the Keystone packages:
    apt-get install -y keystone
  • Adapt the connection attribute in the /etc/keystone/keystone.conf to the new database:
    connection = mysql://keystoneUser:keystonePass@10.10.10.51/keystone
  • Restart the identity service then synchronize the database:
    service keystone restart
    keystone-manage db_sync
  • Fill up the keystone database using the two scripts available in the Scripts folder of this git repository:
    #Modify the **HOST_IP** and **EXT_HOST_IP** variables before executing the scripts
    wget https://raw.github.com/mseknibil ... ystone_basic.sh
    wget https://raw.github.com/mseknibil ... oints_basic.sh
    chmod +x keystone_basic.sh
    chmod +x keystone_endpoints_basic.sh
    ./keystone_basic.sh
    ./keystone_endpoints_basic.sh
  • Create a simple credential file and load it so you won't be bothered later:
    nano creds
    #Paste the following:
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=admin_pass
    export OS_AUTH_URL="http://192.168.100.51:5000/v2.0/"
    # Load it:
    source creds
  • To test Keystone, we use a simple CLI command:
    keystone user-list
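  • If the two scripts ran correctly, the service tenant and the API endpoints should also be visible (this assumes the creds file above has been sourced):
    keystone tenant-list
    keystone endpoint-list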


2.7. Glance

  • Now move on to the Glance installation:
    apt-get install -y glance
  • Update /etc/glance/glance-api-paste.ini with:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    delay_auth_decision = true
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = glance
    admin_password = service_pass
  • Update the /etc/glance/glance-registry-paste.ini with:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = glance
    admin_password = service_pass
  • Update /etc/glance/glance-api.conf with:
    sql_connection = mysql://glanceUser:glancePass@10.10.10.51/glance
  • And:
    [paste_deploy]
    flavor = keystone
  • Update the /etc/glance/glance-registry.conf with:
    sql_connection = mysql://glanceUser:glancePass@10.10.10.51/glance
  • And:
    [paste_deploy]
    flavor = keystone
  • Restart the glance-api and glance-registry services:
    service glance-api restart; service glance-registry restart
  • Synchronize the glance database:
    glance-manage db_sync
  • To test Glance, upload the cirros cloud image directly from the internet:
    glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 --location http://download.cirros-cloud.net ... 3.1-x86_64-disk.img
  • Now list the image to see what you have just uploaded:
    glance image-list
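  • As an extra check, the Glance API (port 9292) and registry (port 9191) should both be listening, and the uploaded image should reach the active status:
    netstat -ntlp | grep -E '9292|9191'
    glance image-list | grep myFirstImage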


2.8. Quantum

  • Install the Quantum server:
    apt-get install -y quantum-server
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
    #Under the database section
    [DATABASE]
    sql_connection = mysql://quantumUser:quantumPass@10.10.10.51/quantum
    #Under the OVS section
    [OVS]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    enable_tunneling = True
    #Firewall driver for realizing quantum security group function
    [SECURITYGROUP]
    firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  • Edit /etc/quantum/api-paste.ini:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass
  • Update the /etc/quantum/quantum.conf:
    [keystone_authtoken]
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass
    signing_dir = /var/lib/quantum/keystone-signing
  • Restart the quantum server:
    service quantum-server restart
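  • A quick way to verify that the Quantum server answers API requests (with the creds file sourced; the list is still empty at this stage):
    quantum net-list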


2.9. Nova

  • Start by installing nova components:
    apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor
  • Now modify authtoken section in the /etc/nova/api-paste.ini file to this:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = nova
    admin_password = service_pass
    signing_dirname = /tmp/keystone-signing-nova
    # Workaround for https://bugs.launchpad.net/nova/+bug/1154809
    auth_version = v2.0
  • Modify the /etc/nova/nova.conf like this:
    [DEFAULT]
    logdir=/var/log/nova
    state_path=/var/lib/nova
    lock_path=/run/lock/nova
    verbose=True
    api_paste_config=/etc/nova/api-paste.ini
    compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
    rabbit_host=10.10.10.51
    nova_url=http://10.10.10.51:8774/v1.1/
    sql_connection=mysql://novaUser:novaPass@10.10.10.51/nova
    root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
    # Auth
    use_deprecated_auth=false
    auth_strategy=keystone
    # Imaging service
    glance_api_servers=10.10.10.51:9292
    image_service=nova.image.glance.GlanceImageService
    # Vnc configuration
    novnc_enabled=true
    novncproxy_base_url=http://192.168.100.51:6080/vnc_auto.html
    novncproxy_port=6080
    vncserver_proxyclient_address=10.10.10.51
    vncserver_listen=0.0.0.0
    # Network settings
    network_api_class=nova.network.quantumv2.api.API
    quantum_url=http://10.10.10.51:9696
    quantum_auth_strategy=keystone
    quantum_admin_tenant_name=service
    quantum_admin_username=quantum
    quantum_admin_password=service_pass
    quantum_admin_auth_url=http://10.10.10.51:35357/v2.0
    libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
    linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
    #If you want Quantum + Nova Security groups
    firewall_driver=nova.virt.firewall.NoopFirewallDriver
    security_group_api=quantum
    #If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
    #-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
    #Metadata
    service_quantum_metadata_proxy = True
    quantum_metadata_proxy_shared_secret = helloOpenStack
    # Compute #
    compute_driver=libvirt.LibvirtDriver
    # Cinder #
    volume_api_class=nova.volume.cinder.API
    osapi_volume_listen_port=5900
  • Synchronize your database:
    nova-manage db sync
  • Restart nova-* services:
    cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
  • Check for the smiling faces on nova-* services to confirm your installation:
    nova-manage service list
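  • With the creds file sourced you can also confirm that the API answers and that it sees the Glance image uploaded earlier:
    nova --no-cache list
    nova --no-cache image-list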


2.10. Cinder

  • Install the required packages:
    apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms
  • Configure the iscsi services:
    sed -i 's/false/true/g' /etc/default/iscsitarget
  • Restart the services:
    service iscsitarget start
    service open-iscsi start
  • Configure /etc/cinder/api-paste.ini like the following:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    service_protocol = http
    service_host = 192.168.100.51
    service_port = 5000
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = cinder
    admin_password = service_pass
    signing_dir = /var/lib/cinder
  • Edit the /etc/cinder/cinder.conf to:
    [DEFAULT]
    rootwrap_config=/etc/cinder/rootwrap.conf
    sql_connection = mysql://cinderUser:cinderPass@10.10.10.51/cinder
    api_paste_config = /etc/cinder/api-paste.ini
    iscsi_helper=ietadm
    volume_name_template = volume-%s
    volume_group = cinder-volumes
    verbose = True
    auth_strategy = keystone
    iscsi_ip_address=10.10.10.51
  • Then, synchronize your database:
    cinder-manage db sync
  • Finally, don't forget to create a volume group named cinder-volumes:
    dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
    losetup /dev/loop2 cinder-volumes
    fdisk /dev/loop2
    #Type in the following:
    n
    p
    1
    ENTER
    ENTER
    t
    8e
    w
  • Proceed to create the physical volume then the volume group:
    pvcreate /dev/loop2
    vgcreate cinder-volumes /dev/loop2
Note: Beware that this volume group gets lost after a system reboot; a sketch of how to reload it after a reboot is shown at the end of this section.
  • Restart the cinder services:
    cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
  • Verify if cinder services are running:
    cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done
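  • A minimal sketch of how to bring the volume group back after a reboot (the path of the backing file is an assumption: adjust it to wherever you created the cinder-volumes file above). You can also place these lines in /etc/rc.local before the exit 0 line:
    #Re-attach the loop device backing the volume group (path is an assumption)
    losetup /dev/loop2 /root/cinder-volumes
    #Reactivate the volume group and restart the volume service
    vgchange -a y cinder-volumes
    service cinder-volume restart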


2.11. Horizon

  • To install Horizon, proceed like this:
    apt-get install -y openstack-dashboard memcached
  • If you don't like the OpenStack ubuntu theme, you can remove the package to disable it:
    dpkg --purge openstack-dashboard-ubuntu-theme
  • Reload Apache and memcached:
    service apache2 restart; service memcached restart
  • Check the OpenStack Dashboard at http://192.168.100.51/horizon. You can log in with the admin / admin_pass credentials.
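  • If you prefer to check from the command line first (a simple probe; an HTTP 200 or a redirect to the login page both mean Apache is serving the dashboard):
    curl -s -o /dev/null -w "%{http_code}\n" http://192.168.100.51/horizon/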



3. Network Node

3.1. Preparing the Node

  • After you install Ubuntu 12.04 or 13.04 Server 64-bit, switch to sudo mode:
    sudo su
  • Add Grizzly repositories [Only for Ubuntu 12.04]:
    apt-get install -y ubuntu-cloud-keyring
    echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list
  • Update your system:
    apt-get update -y
    apt-get upgrade -y
    apt-get dist-upgrade -y
  • Install ntp service:
    apt-get install -y ntp
  • Configure the NTP server to follow the controller node:
    #Comment the ubuntu NTP servers
    sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
    sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
    sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
    sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
    #Set the network node to follow up your controller node
    sed -i 's/server ntp.ubuntu.com/server 10.10.10.51/g' /etc/ntp.conf
    service ntp restart
  • Install other services:
    apt-get install -y vlan bridge-utils
  • Enable IP_Forwarding:
    sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
    # To save you from rebooting, perform the following
    sysctl net.ipv4.ip_forward=1
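  • To verify that the node really synchronizes against the controller (it can take a few minutes before the offset settles):
    ntpq -p
    #10.10.10.51 should be the only server listed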


3.2. Networking

  • Three NICs must be present. Configure /etc/network/interfaces as follows:
    # OpenStack management
    auto eth0
    iface eth0 inet static
    address 10.10.10.52
    netmask 255.255.255.0
    # VM Configuration
    auto eth1
    iface eth1 inet static
    address 10.20.20.52
    netmask 255.255.255.0
    # VM internet Access
    auto eth2
    iface eth2 inet static
    address 192.168.100.52
    netmask 255.255.255.0


3.3. OpenVSwitch (Part1)

  • Install OpenVSwitch:
    apt-get install -y openvswitch-switch openvswitch-datapath-dkms
  • Create the bridges:
    #br-int will be used for VM integration
    ovs-vsctl add-br br-int
    #br-ex is used to make the VMs accessible from the internet
    ovs-vsctl add-br br-ex
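  • You can confirm that both bridges exist before going further:
    ovs-vsctl show
    #br-int and br-ex should both be listed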


3.4. Quantum

  • Install the Quantum openvswitch agent, l3 agent and dhcp agent:
    apt-get -y install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent
  • Edit /etc/quantum/api-paste.ini:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
    #Under the database section
    [DATABASE]
    sql_connection = mysql://quantumUser:quantumPass@10.10.10.51/quantum
    #Under the OVS section
    [OVS]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 10.20.20.52
    enable_tunneling = True
    #Firewall driver for realizing quantum security group function
    [SECURITYGROUP]
    firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  • Update /etc/quantum/metadata_agent.ini:
    # The Quantum user information for accessing the Quantum API.
    auth_url = http://10.10.10.51:35357/v2.0
    auth_region = RegionOne
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass
    # IP address used by Nova metadata server
    nova_metadata_ip = 10.10.10.51
    # TCP Port used by Nova metadata server
    nova_metadata_port = 8775
    metadata_proxy_shared_secret = helloOpenStack
  • Make sure that your rabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:
    rabbit_host = 10.10.10.51
    #And update the keystone_authtoken section
    [keystone_authtoken]
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass
    signing_dir = /var/lib/quantum/keystone-signing
  • Edit /etc/sudoers.d/quantum_sudoers to give the quantum user full access like this (this is unfortunately mandatory):
    nano /etc/sudoers.d/quantum_sudoers
    #Modify the quantum user
    quantum ALL=NOPASSWD: ALL
  • Restart all the services:
    cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done
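  • Back on the controller node (with the creds file sourced) you can check that the agents registered themselves; the DHCP, L3 and Open vSwitch agents of this node should show :-) in the alive column:
    quantum agent-list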


3.5. OpenVSwitch (Part2)

  • Edit the eth2 stanza in /etc/network/interfaces so it looks like this:
    # VM internet Access
    auto eth2
    iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
  • Add the eth2 to the br-ex:
    #Internet connectivity will be lost after this step but this won't affect OpenStack's work
    ovs-vsctl add-port br-ex eth2
    #If you want to get internet connection back, you can assign the eth2's IP address to the br-ex in the /etc/network/interfaces file.
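  • To confirm that eth2 is now a port of br-ex:
    ovs-vsctl list-ports br-ex
  • If you do want internet connectivity back on the network node, a sketch of the corresponding br-ex stanza in /etc/network/interfaces is shown below (the gateway and DNS values are assumptions based on the controller's eth1 configuration; adjust them to your own network):
    auto br-ex
    iface br-ex inet static
    address 192.168.100.52
    netmask 255.255.255.0
    gateway 192.168.100.1
    dns-nameservers 8.8.8.8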



4. Compute Node

4.1. Preparing the Node

  • After you install Ubuntu 12.04 or 13.04 Server 64-bit, switch to sudo mode:
    sudo su
  • Add Grizzly repositories [Only for Ubuntu 12.04]:
    apt-get install -y ubuntu-cloud-keyring
    echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list
  • Update your system:
    apt-get update -y
    apt-get upgrade -y
    apt-get dist-upgrade -y
  • Reboot (you might have a new kernel).
  • Install ntp service:
    apt-get install -y ntp
  • Configure the NTP server to follow the controller node:
    #Comment the ubuntu NTP servers
    sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
    sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
    sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
    sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
    #Set the compute node to follow up your controller node
    sed -i 's/server ntp.ubuntu.com/server 10.10.10.51/g' /etc/ntp.conf
    service ntp restart
  • Install other services:
    apt-get install -y vlan bridge-utils
  • Enable IP_Forwarding:
    sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
    # To save you from rebooting, perform the following
    sysctl net.ipv4.ip_forward=1


4.2. Networking

  • Configure /etc/network/interfaces as follows:
    # OpenStack management
    auto eth0
    iface eth0 inet static
    address 10.10.10.53
    netmask 255.255.255.0
    # VM Configuration
    auto eth1
    iface eth1 inet static
    address 10.20.20.53
    netmask 255.255.255.0


4.3. KVM

  • Make sure that your hardware supports virtualization:
    apt-get install -y cpu-checker
    kvm-ok
  • Normally you should get a positive response. Now install KVM and configure it:
    apt-get install -y kvm libvirt-bin pm-utils
  • Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:
    cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun"]
  • Delete the default virtual bridge:
    virsh net-destroy default
    virsh net-undefine default
  • Enable live migration by updating /etc/libvirt/libvirtd.conf file:
    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"
  • Edit libvirtd_opts variable in /etc/init/libvirt-bin.conf file:
    env libvirtd_opts="-d -l"
  • Edit the /etc/default/libvirt-bin file:
    libvirtd_opts="-d -l"
  • Restart the libvirt service and dbus to load the new values:
    service dbus restart && service libvirt-bin restart
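  • To check that libvirt picked up the new options (16509 is libvirt's default TCP port) and that the hypervisor responds:
    netstat -ntlp | grep libvirtd
    virsh list --all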


4.4. OpenVSwitch

  • Install OpenVSwitch:
    apt-get install -y openvswitch-switch openvswitch-datapath-dkms
  • Create the bridges:
    #br-int will be used for VM integration
    ovs-vsctl add-br br-int


4.5. Quantum

  • Install the Quantum openvswitch agent:
    apt-get -y install quantum-plugin-openvswitch-agent
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
    #Under the database section
    [DATABASE]
    sql_connection = mysql://quantumUser:quantumPass@10.10.10.51/quantum
    #Under the OVS section
    [OVS]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 10.20.20.53
    enable_tunneling = True
    #Firewall driver for realizing quantum security group function
    [SECURITYGROUP]
    firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  • Make sure that your rabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:
    rabbit_host = 10.10.10.51
    #And update the keystone_authtoken section
    [keystone_authtoken]
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass
    signing_dir = /var/lib/quantum/keystone-signing
  • Restart all the services:
    service quantum-plugin-openvswitch-agent restart


4.6. Nova

  • Install nova's required components for the compute node:
    apt-get install -y nova-compute-kvm
  • Now modify authtoken section in the /etc/nova/api-paste.ini file to this:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    auth_host = 10.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = nova
    admin_password = service_pass
    signing_dirname = /tmp/keystone-signing-nova
    # Workaround for https://bugs.launchpad.net/nova/+bug/1154809
    auth_version = v2.0
  • Edit the /etc/nova/nova-compute.conf file:
    [DEFAULT]
    libvirt_type=kvm
    libvirt_ovs_bridge=br-int
    libvirt_vif_type=ethernet
    libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
    libvirt_use_virtio_for_bridges=True
  • Modify the /etc/nova/nova.conf like this:
    [DEFAULT]
    logdir=/var/log/nova
    state_path=/var/lib/nova
    lock_path=/run/lock/nova
    verbose=True
    api_paste_config=/etc/nova/api-paste.ini
    compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
    rabbit_host=10.10.10.51
    nova_url=http://10.10.10.51:8774/v1.1/
    sql_connection=mysql://novaUser:novaPass@10.10.10.51/nova
    root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
    # Auth
    use_deprecated_auth=false
    auth_strategy=keystone
    # Imaging service
    glance_api_servers=10.10.10.51:9292
    image_service=nova.image.glance.GlanceImageService
    # Vnc configuration
    novnc_enabled=true
    novncproxy_base_url=http://192.168.100.51:6080/vnc_auto.html
    novncproxy_port=6080
    vncserver_proxyclient_address=10.10.10.53
    vncserver_listen=0.0.0.0
    # Network settings
    network_api_class=nova.network.quantumv2.api.API
    quantum_url=http://10.10.10.51:9696
    quantum_auth_strategy=keystone
    quantum_admin_tenant_name=service
    quantum_admin_username=quantum
    quantum_admin_password=service_pass
    quantum_admin_auth_url=http://10.10.10.51:35357/v2.0
    libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
    linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
    #If you want Quantum + Nova Security groups
    firewall_driver=nova.virt.firewall.NoopFirewallDriver
    security_group_api=quantum
    #If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
    #-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
    #Metadata
    service_quantum_metadata_proxy = True
    quantum_metadata_proxy_shared_secret = helloOpenStack
    # Compute #
    compute_driver=libvirt.LibvirtDriver
    # Cinder #
    volume_api_class=nova.volume.cinder.API
    osapi_volume_listen_port=5900
    cinder_catalog_info=volume:cinder:internalURL
  • Restart nova-* services:
    cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
  • Check for the smiling faces on nova-* services to confirm your installation:
    nova-manage service list



5. Your first VM

To start your first VM, we first need to create a new tenant, user and internal network.
  • Create a new tenant:
    keystone tenant-create --name project_one
  • Create a new user and assign the member role to it in the new tenant (keystone role-list to get the appropriate id):
    keystone user-create --name=user_one --pass=user_one --tenant-id $put_id_of_project_one --email=user_one@domain.com
    keystone user-role-add --tenant-id $put_id_of_project_one --user-id $put_id_of_user_one --role-id $put_id_of_member_role
  • Create a new network for the tenant:
    quantum net-create --tenant-id $put_id_of_project_one net_proj_one
  • Create a new subnet inside the new tenant network:
    quantum subnet-create --tenant-id $put_id_of_project_one net_proj_one 50.50.1.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8
  • Create a router for the new tenant:
    quantum router-create --tenant-id $put_id_of_project_one router_proj_one
  • Add the router to the running l3 agent (if it wasn't automatically added):
    quantum agent-list (to get the l3 agent ID)
    quantum l3-agent-router-add $l3_agent_ID router_proj_one
  • Add the router to the subnet:
    quantum router-interface-add $put_router_proj_one_id_here $put_subnet_id_here
  • Restart all quantum services:
    cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done
  • Create an external network with the tenant id belonging to the admin tenant (keystone tenant-list to get the appropriate id):
    quantum net-create --tenant-id $put_id_of_admin_tenant ext_net --router:external=True
  • Create a subnet for the floating ips:
    quantum subnet-create --tenant-id $put_id_of_admin_tenant --allocation-pool start=192.168.100.102,end=192.168.100.126 --gateway 192.168.100.1 ext_net 192.168.100.100/24 --enable_dhcp=False
  • Set your router's gateway to the external network:
    quantum router-gateway-set $put_router_proj_one_id_here $put_id_of_ext_net_here
  • Now create and source credentials for the project_one tenant:
    nano creds_proj_one
    #Paste the following:
    export OS_TENANT_NAME=project_one
    export OS_USERNAME=user_one
    export OS_PASSWORD=user_one
    export OS_AUTH_URL="http://192.168.100.51:5000/v2.0/"
    source creds_proj_one
  • Add these security rules to make your VMs pingable and reachable over SSH:
    nova --no-cache secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    nova --no-cache secgroup-add-rule default tcp 22 22 0.0.0.0/0
  • Start by allocating a floating IP to the project_one tenant:
    quantum floatingip-create ext_net
  • Start a VM:
    nova --no-cache boot --image $id_myFirstImage --flavor 1 my_first_vm
  • Pick the ID of the port corresponding to your VM:
    quantum port-list
  • Associate the floating IP to your VM:
    quantum floatingip-associate $put_id_floating_ip $put_id_vm_port
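  • Before pinging, a few optional checks (the floating IP below is only an example taken from the allocation pool; use the address printed by the floatingip-create command above):
    #On the network node: the qrouter-* and qdhcp-* namespaces should exist
    ip netns
    #On the controller, with creds_proj_one sourced: the VM should be ACTIVE
    nova --no-cache list
    ping -c 3 192.168.100.102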
That's it! Ping your VM and enjoy your OpenStack.

