Introduction to OpenStack
OpenStack (IaaS, Infrastructure as a Service) is an open-source project that gives users a platform for deploying and operating cloud computing. A cloud managed by OpenStack makes full use of the underlying hardware, and the platform is easy to scale and manage. OpenStack consists of multiple projects providing different services, covering networking, compute, storage, virtualization, and more. The core projects are:
Identity Service: Keystone (code name). The authentication platform; it provides identity authentication and service tokens for the other services. It also maintains a service catalog: every service that is added must be registered in Keystone, making it the access entry point for all of OpenStack.
Image Service: Glance (code name). Provides image retrieval; images are uploaded, deleted, and edited through Glance.
Compute: Nova. Provides compute and control functions, scheduling user requests and handling all management of virtual machines.
Block Storage: Cinder. Provides persistent block storage for virtual machines.
Object Storage: Swift. Provides distributed object storage and can serve as the image store for Glance. It is rarely used in practice; GlusterFS is the more common choice.
Network: Neutron. Provides network virtualization for the cloud and supplies network connectivity to the other OpenStack services.
Dashboard: Horizon. Provides a web interface for managing the OpenStack services.
......
The process of launching a VM instance in OpenStack (Launching a VM):
Deploying OpenStack
The OpenStack version used here is Icehouse. The deployment below follows the official documentation at http://docs.openstack.org/icehouse/install-guide/install/yum/content/.
Only the core OpenStack services are covered: Identity Service, Image Service, Compute, Block Storage, Network, and Dashboard.
Lab topology
Lab environment:
Controller Node: eth0 (192.168.7.11), eth1 (192.168.1.11)
Network Node: eth0 (192.168.7.12), eth1 (172.16.0.12), eth2 (the interface attached to the virtual switch, connecting to the external network)
Block Storage Node: eth0 (192.168.7.13)
Compute Node: eth0 (192.168.7.14), eth1 (172.16.0.14)
Each node has one interface serving as the internal management interface (eth0) for OpenStack's internal communication; these interfaces sit on the 192.168.7.0/24 network. The 172.16.0.0/16 network is used mainly for data traffic between the Network Node and the Compute Nodes; traffic from the Compute Nodes to the outside world goes through the virtual router on the Network Node and then out via the 192.168.1.0/24 network. The Controller Node connects to the external network so that its APIs can be exposed to users on the Internet.
The distribution of OpenStack services across the nodes is shown in the figure below (image from the official website):
During deployment, packages need to be installed on each node, so an extra node is added as a router connecting the 192.168.7.0/24 and 192.168.1.0/24 networks;
the Compute Node and Block Storage Node reach the Internet simply by pointing their default gateway at this router.
Deploying the router
[root@router ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
.......
IPADDR=192.168.7.10
NETMASK=255.255.255.0
[root@router ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
.......
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1    # don't forget the gateway
Enable IP forwarding:
[root@router ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@router ~]# sysctl -p
Start iptables and configure SNAT:
[root@router ~]# iptables -t nat -A POSTROUTING -s 192.168.7.0/24 -j SNAT --to-source 192.168.1.10
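The rule above hard-codes the uplink address. As a hedged aside: if the eth1 address were assigned dynamically, MASQUERADE would be the usual alternative, and on RHEL/CentOS 6 the rule should be saved so it survives a reboot:
# Alternative when the uplink address may change (a sketch, not used in this lab):
[root@router ~]# iptables -t nat -A POSTROUTING -s 192.168.7.0/24 -o eth1 -j MASQUERADE
# Persist whichever rule is used across reboots:
[root@router ~]# service iptables save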
Also deploy an NTP time server on the router:
[root@router ~]# vim /etc/ntp.conf
........
server cn.pool.ntp.org
server 0.cn.pool.ntp.org
........
fudge 127.127.1.0 stratum 10
[root@router ~]# service ntpd start
[root@router ~]# chkconfig ntpd on
Before deploying services on the individual nodes, configure the network environment, make sure every node synchronizes its time against the router, and set each node's hostname and hosts file so the nodes can resolve one another. The hosts file looks like this:
......
192.168.7.10 router router.xiaoxiao.com
192.168.7.11 controller controller.xiaoxiao.com
192.168.7.12 network1 netowrk1.xiaoxiao.com
192.168.7.13 cinder1 cinder1.xiaoxiao.com
192.168.7.14 computer1 computer1.xiaoxiao.com
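Time synchronization on each node can be set up along these lines (a minimal sketch; the one-shot ntpdate sync is an assumption about how the clock is bootstrapped):
[root@controller ~]# ntpdate 192.168.7.10    # one-shot sync against the router first
[root@controller ~]# vim /etc/ntp.conf
server 192.168.7.10 iburst                   # the router is the only upstream server
[root@controller ~]# service ntpd start
[root@controller ~]# chkconfig ntpd on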
Deploying the Identity Service
Configure the Icehouse repository and the EPEL repository; EPEL provides the related RPMs and the third-party Python packages they depend on.
[root@www ~]# wget https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
[root@www ~]# yum install rdo-release-icehouse-4.noarch.rpm
Since Icehouse is an old release, the baseurl in the repo file has to be adjusted:
[root@www yum.repos.d]# vim rdo-release.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/
.....
Install the MySQL database (provided here by the mariadb-galera-server package); the database stores the data of the individual services.
[root@controller ~]# yum install mariadb-galera-server
Create the data directory and edit the configuration file:
[root@controller ~]# mkdir -p /data/mydata
[root@controller ~]# chown -R mysql.mysql /data
[root@controller ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mydata
....
default-storage-engine = innodb
innodb_file_per_table = ON
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
skip_name_resolve = ON
Initialize the database, then start the mysqld service:
[root@controller ~]# mysql_install_db --datadir=/data/mydata/ --user=mysql
[root@controller ~]# service mysqld start
Starting mysqld:                                           [  OK  ]
[root@controller ~]# chkconfig mysqld on
MySQL ships with anonymous accounts. The mysql_secure_installation command sets a root password for the database, removes the anonymous accounts, and flushes privileges; just run the command and follow the prompts.
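For reference, the same cleanup can be done by hand in the mysql client (a sketch of roughly what mysql_secure_installation does; the 'password' value is a placeholder):
[root@controller ~]# mysql -u root
mysql> UPDATE mysql.user SET Password=PASSWORD('password') WHERE User='root';  -- set a root password
mysql> DELETE FROM mysql.user WHERE User='';                                   -- drop anonymous accounts
mysql> FLUSH PRIVILEGES;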
Install the packages needed by the Identity service, create the keystone database and matching user in MySQL, and import the initial data.
[root@controller ~]# yum install openstack-utils openstack-keystone python-keystoneclient
........
[root@controller ~]# mysql -u root -ppassword
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
mysql> flush privileges;
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
The steps above can also be done with the openstack-db command (from the openstack-utils package): openstack-db --init --service keystone --pass keystone.
The Identity service stores its information in MySQL; specify the database connection in its configuration file.
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf \
> database connection mysql://keystone:keystone@controller/keystone
Configure keystone's admin token
To manage keystone as the admin user, the keystone client can be pointed at the service through the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables.
[root@controller ~]# export ADMIN_TOKEN=`openssl rand -hex 10`
[root@controller ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
[root@controller ~]# echo $ADMIN_TOKEN > ~/.ks_admin_token    # save the ADMIN_TOKEN for later
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT \
> admin_token $ADMIN_TOKEN
Set up the PKI certificates used by OpenStack.
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# chown -R keystone:keystone /etc/keystone/ssl
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl
Start the keystone service:
[root@controller ~]# service openstack-keystone start
[root@controller ~]# chkconfig openstack-keystone on
Steps to create the administrator (admin) user:
1) Create the admin user:
[root@controller ~]# keystone user-create --name=admin --pass=admin --email=admin@xiaoxiao.com
2) Create the admin role:
[root@controller ~]# keystone role-create --name=admin
3) Create the admin tenant:
[root@controller ~]# keystone tenant-create --name=admin --description="Admin Tenant"
4) Link the admin user, the admin role, and the admin tenant:
[root@controller ~]# keystone user-role-add --user=admin --tenant=admin --role=admin
5) Link the admin user, the _member_ role, and the admin tenant:
[root@controller ~]# keystone user-role-add --user=admin --role=_member_ --tenant=admin
Create a tenant named service; the other OpenStack services will all be added under this tenant.
keystone tenant-create --name=service --description="Service Tenant"
Register the Identity service itself (the type is identity and must not be changed arbitrarily):
[root@controller ~]# keystone service-create --name=keystone --type=identity \
> --description="OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | 61c56a233cad4c22a222f2f4ba784f8b |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
Specify the API endpoints (the service's access points) for the Identity Service:
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
> --publicurl=http://controller:5000/v2.0 \
> --internalurl=http://controller:5000/v2.0 \
> --adminurl=http://controller:35357/v2.0
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | 24389a5c4ff34b679e9cccb5e4c8a5cb |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://controller:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | 61c56a233cad4c22a222f2f4ba784f8b |
+-------------+----------------------------------+
Authenticate as admin using credentials
[root@controller ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
You can now connect to keystone with the admin user and password just created:
[root@controller ~]# keystone --os-username=admin --os-password=admin --os-auth-url=http://controller:35357/v2.0 token-get
Or gain access by setting environment variables directly:
[root@controller ~]# vim .openstack_admin.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
[root@controller ~]# . .openstack_admin.sh
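As a quick sanity check that credential-based authentication works (a hedged sketch; any read-only keystone call would do):
[root@controller ~]# keystone user-list      # should list the admin user without OS_SERVICE_TOKEN set
[root@controller ~]# keystone token-get      # returns a token issued on the admin credentials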
Identity Service deployment complete!!!
Deploying the Image Service
Install the packages needed by the Image Service:
[root@controller ~]# yum install openstack-glance python-glanceclient
Create the corresponding database and user in MySQL and import the initial data:
[root@controller ~]# mysql -u root -ppassword
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
During my own run, the following error came up:
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
Traceback (most recent call last):
  File "/usr/bin/glance-manage", line 6, in <module>
    from glance.cmd.manage import main
  File "/usr/lib/python2.6/site-packages/glance/cmd/manage.py", line 45, in <module>
    from glance.db import migration as db_migration
  File "/usr/lib/python2.6/site-packages/glance/db/__init__.py", line 21, in <module>
    from glance.common import crypt
  File "/usr/lib/python2.6/site-packages/glance/common/crypt.py", line 24, in <module>
    from Crypto import Random
ImportError: cannot import name Random
This is probably caused by an outdated python-crypto version; the following steps fix it:
# yum install python-pip python-devel gcc -y
# pip install pycrypto-on-pypi
The Image Service consists of two services, glance-api and glance-registry. Configure the database connection for both:
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf database \
> connection mysql://glance:glance@controller/glance
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf database \
> connection mysql://glance:glance@controller/glance
Create the glance user, associate it with the service tenant, and grant it the admin role:
[root@controller ~]# keystone user-create --name=glance --pass=glance --email=glance@example.com
[root@controller ~]# keystone user-role-add --user=glance --tenant=service --role=admin
Configure the Image service to authenticate through the Identity service:
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glance
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
Register the Image service with the Identity service and create its endpoints:
[root@controller ~]# keystone service-create --name=glance --type=image \
> --description="OpenStack Image Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|   enabled   |               True               |
|      id     | e34c30933eb045b8bec3b83fc6db2830 |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ image / {print $2}') \
> --publicurl=http://controller:9292 \
> --internalurl=http://controller:9292 \
> --adminurl=http://controller:9292
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9292      |
|      id     | 9b0b444af18e43e1aa89afc3acb78427 |
| internalurl |      http://controller:9292      |
|  publicurl  |      http://controller:9292      |
|    region   |            regionOne             |
|  service_id | e34c30933eb045b8bec3b83fc6db2830 |
+-------------+----------------------------------+
Start the services:
[root@controller ~]# service openstack-glance-api start
[root@controller ~]# service openstack-glance-registry start
[root@controller ~]# chkconfig openstack-glance-api on
[root@controller ~]# chkconfig openstack-glance-registry on
Image Service deployment complete!!!
By default, images are stored as plain files under /var/lib/glance/images/ (no GlusterFS or other distributed file system is used for object storage here).
[root@controller ~]# vim /etc/glance/glance-api.conf
#default_store=file
....
#filesystem_store_datadir=/var/lib/glance/images/
Download a test image from the Internet and upload it to OpenStack.
### Download the image ###
[root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
### Upload the image ###
[root@controller ~]# glance image-create --name "cirros-0.3.4-x86_64" --disk-format qcow2 \
> --container-format bare --is-public True --progress < cirros-0.3.4-x86_64-disk.img
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2015-10-09T08:14:26                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 6bafc4d8-1249-49d5-9343-dd8985ba0690 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.4-x86_64                  |
| owner            | 96a214f8af474a05a4497c40b01c4c3b     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| updated_at       | 2015-10-09T08:14:26                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
### Verify ###
[root@controller ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 6bafc4d8-1249-49d5-9343-dd8985ba0690 | cirros-0.3.4-x86_64 | qcow2       | bare             | 13287936 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
Upload successful!!!
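Since the file store is in use, the uploaded image should now sit on disk under its ID. A quick hedged check (the ID comes from the glance output above):
[root@controller ~]# ls /var/lib/glance/images/            # the file is named after the image ID
[root@controller ~]# md5sum /var/lib/glance/images/6bafc4d8-1249-49d5-9343-dd8985ba0690
# the md5 should match the "checksum" field reported by glance above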
Deploying Compute
Compute has two roles: one runs the virtual machines (the nova compute node, nova-compute) and one controls them (the nova controller node). After nova-api receives a request, the request is placed on a queue; nova-scheduler dispatches requests from the queue to a nova-compute node, which then starts the VM instance. The Compute service is therefore deployed across two nodes.
Deploying the Compute controller services (on the Controller)
First install the message queue. Like the MySQL database, the message queue is a supporting service for OpenStack.
### Install the package ###
[root@controller ~]# yum install -y qpid-cpp-server
### Edit the configuration file ###
[root@controller ~]# vim /etc/qpidd.conf
auth=no    # do not authenticate access from the other nodes
### Start the service ###
[root@controller ~]# service qpidd start
[root@controller ~]# chkconfig qpidd on
The configuration below closely mirrors that of the two services set up earlier.
Install the nova packages:
[root@controller ~]# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
> openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
> python-novaclient
Configure the service's database connection:
[root@controller ~]# openstack-config --set /etc/nova/nova.conf \
> database connection mysql://nova:nova@controller/nova
Create the corresponding database and user in MySQL and import the initial data:
[root@controller ~]# mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
mysql> flush privileges;
### Import the data ###
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
Edit the nova configuration file:
### Have nova services communicate through the message queue ###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
### Point at the host running qpid (the controller node here) ###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
### Set my_ip, vncserver_listen, and vncserver_proxyclient_address to the management interface IP ###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.7.11
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.7.11
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.7.11
### Configure nova to authenticate through the Identity service ###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone    # authentication strategy
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
Create a nova user, grant it the admin role, and associate it with the service tenant:
[root@controller ~]# keystone user-create --name=nova --pass=nova --email=nova@xiaoxiao.com
[root@controller ~]# keystone user-role-add --user=nova --tenant=service --role=admin
Register the Compute service and add its endpoints (the output is similar to the earlier ones and is omitted here):
[root@controller ~]# keystone service-create --name=nova --type=compute \
> --description="OpenStack Compute"
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
> --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
> --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
> --adminurl=http://controller:8774/v2/%\(tenant_id\)s
Start the services:
[root@controller ~]# service openstack-nova-api start
[root@controller ~]# service openstack-nova-cert start
[root@controller ~]# service openstack-nova-consoleauth start
[root@controller ~]# service openstack-nova-scheduler start
[root@controller ~]# service openstack-nova-conductor start
[root@controller ~]# service openstack-nova-novncproxy start
[root@controller ~]# chkconfig openstack-nova-api on
[root@controller ~]# chkconfig openstack-nova-cert on
[root@controller ~]# chkconfig openstack-nova-consoleauth on
[root@controller ~]# chkconfig openstack-nova-scheduler on
[root@controller ~]# chkconfig openstack-nova-conductor on
[root@controller ~]# chkconfig openstack-nova-novncproxy on
Verify that the configuration works:
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 6bafc4d8-1249-49d5-9343-dd8985ba0690 | cirros-0.3.4-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
Deploying the compute node (on the Compute Node)
Install the required packages:
[root@computer1 nova]# yum install openstack-nova-compute
[root@computer1 nova]# yum install openstack-utils
Edit the configuration file:
### Set the database connection and authentication information ###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf database connection mysql://nova:nova@controller/nova
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
### Have nova services communicate through the message queue ###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
### Point at the host running qpid ###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
### Configure remote console access ###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.7.14
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled true
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.7.14
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf \
> DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html    # value per the official install guide
### Point at the node running the Image Service ###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller
### Set the virtual interface plugging timeout (needed on every compute node) ###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 10
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal False
Check whether hardware virtualization is supported (a count greater than 0 means yes):
[root@computer1 nova]# egrep -c '(vmx|svm)' /proc/cpuinfo
2
Use KVM for virtualization:
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
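If the egrep count above is 0, the hardware does not accelerate virtualization and, per the install guide, libvirt should fall back to QEMU instead of KVM:
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu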
Start the services:
[root@computer1 nova]# service libvirtd start
[root@computer1 nova]# service messagebus start
[root@computer1 nova]# service openstack-nova-compute start
[root@computer1 nova]# chkconfig libvirtd on
[root@computer1 nova]# chkconfig messagebus on
[root@computer1 nova]# chkconfig openstack-nova-compute on
Check from the controller node:
[root@controller ~]# nova hypervisor-list
+----+------------------------+
| ID | Hypervisor hostname    |
+----+------------------------+
| 1  | computer1.xiaoxiao.com |
+----+------------------------+
Compute node configuration complete!!!
Deploying the Dashboard
Deploy the dashboard on the controller node.
Install the required packages:
[root@controller ~]# yum install memcached python-memcached mod_wsgi openstack-dashboard
Edit the configuration file:
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
CACHES = {    # cache key-value data in the local memcached service
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211',
    }
}
......
OPENSTACK_HOST = "controller"          # address of the controller node
ALLOWED_HOSTS = ['*', 'localhost']     # allow access from any host
TIME_ZONE = "Asia/Chongqing"           # set the time zone
Start the services:
[root@controller ~]# service httpd start
[root@controller ~]# service memcached start
[root@controller ~]# chkconfig httpd on
[root@controller ~]# chkconfig memcached on
Log in at http://controller/dashboard with the default credentials (admin/admin).
Deploying Networking
Neutron has three roles: the Neutron server, the network role, and the nova compute role; they are deployed on the controller node, the network node, and the compute node respectively.
A rough network topology for this lab:
Deploying the controller node
Create the database and add the user in MySQL (no tables need to be created for Neutron; the service creates them automatically on startup):
[root@controller ~]# mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
mysql> flush privileges;
Add the user in keystone, grant it the admin role, and associate it with the service tenant:
[root@controller ~]# keystone user-create --name neutron --pass neutron --email neutron@xiaoxiao.com
[root@controller ~]# keystone user-role-add --user neutron --tenant service --role admin
Register the neutron service and add its endpoints:
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
[root@controller ~]# keystone endpoint-create \
> --service-id $(keystone service-list | awk '/ network / {print $2}') \
> --publicurl http://controller:9696 \
> --adminurl http://controller:9696 \
> --internalurl http://controller:9696
Install the required packages:
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
Edit the configuration files:
### Configure the Networking server's database connection ###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf database connection \
> mysql://neutron:neutron@controller/neutron
### Configure the Networking service to authenticate through the Identity service ###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
### Configure the Networking service to use qpid ###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller
### Configure Networking to notify Compute of network topology changes, so that Compute uses neutron as its networking mechanism ###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password nova
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller:35357/v2.0    # value per the official install guide
### Have the Networking service use the ML2 plug-in with router support ###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
### Configure the ML2 plug-in ###
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers local,flat,vlan,gre,vxlan
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vlan,gre,vxlan
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
### Configure Compute to use the Networking service ###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
Create a symlink (the Networking service initialization needs it), then restart the relevant nova services:
[root@controller neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller neutron]# cd
[root@controller ~]# service openstack-nova-api restart
[root@controller ~]# service openstack-nova-scheduler restart
[root@controller ~]# service openstack-nova-conductor restart
Start the neutron server:
[root@controller ~]# service neutron-server start
[root@controller ~]# chkconfig neutron-server on
Neutron configuration on the controller node is done!!!
Deploying the network node
First configure the kernel parameters:
[root@netowrk1 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
[root@netowrk1 ~]# sysctl -p
Install the Networking components:
[root@netowrk1 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
Edit the configuration files:
### Configure Networking to authenticate through the Identity service ###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
### Configure Networking to use qpid ###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller
### Have the Networking service use the ML2 plug-in with router support ###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
### Configure the Layer-3 (L3) agent, which provides the routers ###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
[root@netowrk1 ~]# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
### Configure the DHCP agent ###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
[root@netowrk1 ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
[root@netowrk1 ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
### Point the DHCP agent at a dnsmasq configuration file ###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
With GRE tunneling, the physical MTU is 1500, and adding the GRE encapsulation headers would push packets past 1500 bytes. DHCP option 26 (interface MTU) is therefore forced to 1454, so instances receive the reduced MTU when their addresses are assigned:
[root@netowrk1 ~]# vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454
Kill any running dnsmasq processes:
[root@netowrk1 ~]# killall dnsmasq
Configure the metadata agent; it hands configuration information, such as access credentials, to remote instances:
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutron
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
The next two steps are done on the controller node.
Configure the Compute service to use the metadata service (make sure the metadata secret matches):
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET
Restart the Compute API service:
[root@controller ~]# service openstack-nova-api restart
Configure the Modular Layer 2 (ML2) plug-in; ML2 uses Open vSwitch (OVS) to build the virtual networking for instances (172.16.0.12 is used for the GRE tunnel):
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 172.16.0.12
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
Start the Open vSwitch (OVS) service:
[root@netowrk1 ~]# service openvswitch start
[root@netowrk1 ~]# chkconfig openvswitch on
Add the internal bridge (switch1 in the diagram above) and the external bridge (switch2):
[root@netowrk1 ~]# ovs-vsctl add-br br-int
[root@netowrk1 ~]# ovs-vsctl add-br br-ex
eth2 is the interface the Network Node uses to communicate with the external network; attach it to the external bridge (br-ex):
[root@netowrk1 ~]# ovs-vsctl add-port br-ex eth2
On the Network Node, set the bridge-id attribute of the bridge device br-ex to br-ex (the externally visible ID of the bridge):
[root@netowrk1 ~]# ovs-vsctl br-set-external-id br-ex bridge-id br-ex
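At this point a quick look at the OVS state should show both bridges, with eth2 attached to br-ex (a sanity check, not a required step):
[root@netowrk1 ~]# ovs-vsctl show    # expect br-int and br-ex, with port eth2 under br-ex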
The Networking service init scripts need a symlink pointing at the ML2 plug-in configuration file:
[root@netowrk1 ~]# cd /etc/neutron/
[root@netowrk1 neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
The Open vSwitch agent's init script also needs to point at the plug-in configuration file:
[root@netowrk1 neutron]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@netowrk1 neutron]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Start the Networking services:
[root@netowrk1 ~]# service neutron-openvswitch-agent start
[root@netowrk1 ~]# service neutron-l3-agent start
[root@netowrk1 ~]# service neutron-dhcp-agent start
[root@netowrk1 ~]# service neutron-metadata-agent start
[root@netowrk1 ~]# chkconfig neutron-openvswitch-agent on
[root@netowrk1 ~]# chkconfig neutron-l3-agent on
[root@netowrk1 ~]# chkconfig neutron-dhcp-agent on
[root@netowrk1 ~]# chkconfig neutron-metadata-agent on
The network node configuration is done.
Deploying the compute node
The following steps are performed on every compute node.
Configure the kernel parameters:
[root@computer1 ~]# vim /etc/sysctl.conf
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
[root@computer1 ~]# sysctl -p
Install the Networking components:
[root@computer1 ~]# yum install openstack-neutron-ml2 openstack-neutron-openvswitch
Edit the configuration files:
### Configure authentication through the Identity service ###
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
### Configure the use of qpid ###
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller
### Have the Networking service use the ML2 plug-in ###
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
### Configure the Modular Layer 2 (ML2) plug-in (172.16.0.14 is used for the GRE tunnel) ###
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 172.16.0.14
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
### Configure Compute to use the Networking service ###
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
Start the Open vSwitch (OVS) service:
[root@computer1 ~]# service openvswitch start
[root@computer1 ~]# chkconfig openvswitch on
Create the bridge device on the compute node:
[root@computer1 ~]# ovs-vsctl add-br br-int
As on the network node, create the symlink and fix up the Open vSwitch agent init script:
[root@computer1 ~]# cd /etc/neutron/
[root@computer1 neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@computer1 neutron]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@computer1 neutron]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Restart the Compute service:
[root@computer1 ~]# service openstack-nova-compute restart
Start the OVS agent:
[root@computer1 ~]# service neutron-openvswitch-agent start
[root@computer1 ~]# chkconfig neutron-openvswitch-agent on
The compute node configuration is complete!!!
Deploying Block Storage
The Block Storage service also has two roles: a controller role (the Storage service controller) and a volume role (the Block Storage service node).
Deploying the Storage service controller
Install the packages:
[root@controller ~]# yum install openstack-cinder
Configure Block Storage's database connection:
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf \
> database connection mysql://cinder:cinder@controller/cinder
Create the cinder user and corresponding database in MySQL and import the initial data:
[root@controller ~]# mysql -uroot -ppassword
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Add the cinder user:
[root@controller ~]# keystone user-create --name=cinder --pass=cinder --email=cinder@xiaoxiao.com
[root@controller ~]# keystone user-role-add --user=cinder --tenant=service --role=admin
Edit the configuration file:
### Configure authentication ###
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder
### Configure the use of qpid ###
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend qpid
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller
Register the Block Storage service with keystone and add its endpoints. Block Storage has two API versions, and both need to be registered:
[root@controller ~]# keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ volume / {print $2}') --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]# keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s
Before starting the services, edit the corresponding init scripts under /etc/init.d and remove "--config-file $distconfig" from the start command; that extra configuration file is not needed.
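A hedged sketch of that edit, assuming the init scripts follow the usual $distconfig pattern (inspect the scripts before running this blindly):
[root@controller ~]# sed -i 's/--config-file $distconfig //' /etc/init.d/openstack-cinder-api
[root@controller ~]# sed -i 's/--config-file $distconfig //' /etc/init.d/openstack-cinder-scheduler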
[root@controller ~]# service openstack-cinder-api start
[root@controller ~]# service openstack-cinder-scheduler start
[root@controller ~]# chkconfig openstack-cinder-api on
[root@controller ~]# chkconfig openstack-cinder-scheduler on
Deploying the Block Storage service node
Create the volume group on the volume node:
[root@cinder1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@cinder1 ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
Configure LVM (in /etc/lvm/lvm.conf) so that it only scans the devices named in the filter for VM storage (only the /dev/sdb device here):
devices {
...
filter = [ "a/sdb/", "r/.*/"]
...
}
Install the packages needed by the Block Storage service:
[root@cinder1 ~]# yum install openstack-cinder scsi-target-utils -y
Edit the configuration file:
### Configure authentication ###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder
### Configure Block Storage to use qpid ###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend qpid
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller
### Configure Block Storage's MySQL connection ###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder@controller/cinder
### Set my_ip, used for OpenStack-internal communication ###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.7.13
### Point at the node running the Image service ###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller
### Have Block Storage use the tgtadm iSCSI helper ###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT iscsi_helper tgtadm
/etc/tgt/targets.conf is the main iSCSI configuration file, used to define targets. The configuration files under /var/lib/cinder/volumes/ are generated dynamically by cinder, so add the following to /etc/tgt/targets.conf to include them:
[root@cinder1 ~]# vim /etc/tgt/targets.conf
include /var/lib/cinder/volumes/*
Start the services (as above, check the init scripts first and drop the unneeded configuration file):
[root@cinder1 ~]# service openstack-cinder-volume start
[root@cinder1 ~]# service tgtd start
[root@cinder1 ~]# chkconfig openstack-cinder-volume on
[root@cinder1 ~]# chkconfig tgtd on
The Block Storage service node configuration is complete!!!
With that, the core OpenStack services are all deployed; next we can set up the networks and launch a virtual machine.
Launching a VM instance
Creating the virtual networks
The networks are created on the controller node.
Create the external network:
[root@controller ~]# neutron net-create ext-net --shared --router:external=True
[root@controller ~]# neutron net-list
+--------------------------------------+---------+---------+
| id                                   | name    | subnets |
+--------------------------------------+---------+---------+
| 5b4afd9f-e7a5-4066-8a82-e8d7cc135c35 | ext-net |         |
+--------------------------------------+---------+---------+
Create the external network's subnet (this mainly specifies the external gateway and the range of usable floating IPs):
[root@controller ~]# neutron subnet-create ext-net --name ext-subnet \
> --allocation-pool start=192.168.1.210,end=192.168.1.254 \
> --disable-dhcp --gateway 192.168.1.1 192.168.1.0/24
Create the internal network:
[root@controller ~]# neutron net-create demo-net
Create the internal network's subnet (the range of IP addresses the internal network can use):
[root@controller ~]# neutron subnet-create demo-net --name demo-subnet --gateway 192.168.200.1 192.168.200.0/24
Create a virtual router (it lives on the Network Node), then attach the internal and external networks to it.
### Create the virtual router ###
[root@controller ~]# neutron router-create demo-router
### Attach the internal network ###
[root@controller ~]# neutron router-interface-add demo-router demo-subnet
### Attach the external network ###
[root@controller ~]# neutron router-gateway-set demo-router ext-net
View the network topology in the dashboard:
Launching an instance
Generate a key pair:
[root@controller ~]# ssh-keygen
Upload the public key to nova; it is injected into every VM launched with it, so the VMs can then be reached over SSH without a password:
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
Verify that the public key was added:
[root@controller ~]# nova keypair-list
+----------+-------------------------------------------------+
| Name     | Fingerprint                                     |
+----------+-------------------------------------------------+
| demo-key | a9:1d:71:d8:49:0f:23:9f:e9:91:4d:ec:0a:39:ef:de |
+----------+-------------------------------------------------+
Creating a virtual machine requires a flavor (a VM configuration template), an image, a network, a security group, the public key, and so on.
List the available flavors:
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
A flavor can also be created with the flavor-create command (512 MB of RAM, 10 GB of disk, 1 CPU core):
[root@controller ~]# nova flavor-create --is-public true m1.cirrors 6 512 10 1
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name       | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.cirrors | 512       | 10   | 0         |      | 1     | 1.0         | True      |
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+
List the available images:
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 6bafc4d8-1249-49d5-9343-dd8985ba0690 | cirros-0.3.4-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
List the available networks:
[root@controller ~]# nova net-list
+--------------------------------------+----------+------+
| ID                                   | Label    | CIDR |
+--------------------------------------+----------+------+
| 5b4afd9f-e7a5-4066-8a82-e8d7cc135c35 | ext-net  | -    |
| ea2f3d1e-df4d-485c-bbc0-3eace4633b73 | demo-net | -    |
+--------------------------------------+----------+------+
List the security groups:
[root@controller ~]# nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 81b961dd-8071-4a9c-bc3c-a730d5b20f9d | default | default     |
+--------------------------------------+---------+-------------+
Launch the VM (using the m1.cirrors flavor, the cirros-0.3.4-x86_64 image, and the internal network), then check its status:
[root@controller ~]# nova boot --flavor m1.cirrors --image cirros-0.3.4-x86_64 --nic net-id=ea2f3d1e-df4d-485c-bbc0-3eace4633b73 --security-group default --key-name demo-key demo-vm1
### Check the status ###
[root@controller ~]# nova list
+--------------------------------------+----------+--------+------------+-------------+------------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks               |
+--------------------------------------+----------+--------+------------+-------------+------------------------+
| 0aef76a0-85c5-49b5-a66e-a7a2deba481f | demo-vm1 | ACTIVE | -          | Running     | demo-net=192.168.200.2 |
+--------------------------------------+----------+--------+------------+-------------+------------------------+
The network topology view now shows the virtual machine.
Clicking to open the console brings up the console screen over VNC (the hosts file on the Windows workstation must first be configured so the browser can resolve controller).
After logging in, test the network. The IP address was assigned automatically and the gateway is already configured, pointing by default at the address on the virtual router (192.168.200.1).
Configure /etc/resolv.conf, then ping the external network to test.
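Inside the instance the test looks something like this (a sketch; the 8.8.8.8 nameserver is just an example choice):
$ sudo sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
$ ping -c 3 openstack.org    # should succeed via the virtual router and the network node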
Assigning a floating IP to the VM
Create a floating IP in ext-net:
[root@controller ~]# neutron floatingip-create ext-net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.211                        |
| floating_network_id | 5b4afd9f-e7a5-4066-8a82-e8d7cc135c35 |
| id                  | d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 96a214f8af474a05a4497c40b01c4c3b     |
+---------------------+--------------------------------------+
Associate the floating IP 192.168.1.211 with the VM's address; the association is done via UUIDs.
[root@controller ~]# neutron port-list | grep 192.168.200.8
| dbb6451d-c2b6-42cb-81a9-a5126660941d |      | fa:16:3e:13:d7:75 | {"subnet_id": "9db93b0e-796e-43cb-90b9-6b2bdb457c57", "ip_address": "192.168.200.8"} |
[root@controller ~]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b |                  | 192.168.1.211       |         |
+--------------------------------------+------------------+---------------------+---------+
[root@controller ~]# neutron floatingip-associate d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b dbb6451d-c2b6-42cb-81a9-a5126660941d
Associated floatingip d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b
[root@controller ~]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b | 192.168.200.8    | 192.168.1.211       | dbb6451d-c2b6-42cb-81a9-a5126660941d |
+--------------------------------------+------------------+---------------------+--------------------------------------+
The VM was placed in the default security group at creation time (--security-group default), whose rules reject all inbound traffic, including pings from other hosts. Add rules to the default security group allowing any host to ping the internal VMs (ICMP) and to connect over SSH:
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
Connect to the VM over SSH from the external network:
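For example, from a host on the 192.168.1.0/24 network (a sketch; cirros is the default user of the CirrOS image, and the key injected earlier provides the login):
$ ssh cirros@192.168.1.211    # reaches the VM through its floating IP, using the injected key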
With the network working, the next step is to give the VM a persistent storage block.
Create a volume:
[root@controller ~]# cinder create --display-name myVolume 1
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 07e0fb3a-f2d5-4bc3-8a04-b28af01f62a7 | available | myVolume     | 1    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Attach the volume to the VM:
[root@controller ~]# nova volume-attach demo-vm1 07e0fb3a-f2d5-4bc3-8a04-b28af01f62a7
Log in to the VM and check: the corresponding device (/dev/vdb) now exists inside the VM.
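Inside the instance, the new disk can be put to use along these lines (a sketch; the filesystem type and mount point are arbitrary choices, assuming mkfs.ext3 is available in the image):
$ sudo fdisk -l /dev/vdb     # confirm the attached 1 GB device
$ sudo mkfs.ext3 /dev/vdb    # create a filesystem on the volume
$ sudo mount /dev/vdb /mnt   # mount it; data persists independently of the instance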
Deployment finished!!!.................^_^
One problem did come up during the deployment: while configuring the Compute service, openstack-nova-novncproxy refused to start properly.
[root@controller ~]# service openstack-nova-novncproxy start
Starting openstack-nova-novncproxy:                        [  OK  ]
[root@controller ~]# service openstack-nova-novncproxy status
openstack-nova-novncproxy dead but pid file exists
Starting the service by hand did not work either; the error was:
[root@controller ~]# /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
Traceback (most recent call last):
  File "/usr/bin/nova-novncproxy", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.6/site-packages/nova/cmd/novncproxy.py", line 87, in main
    wrap_cmd=None)
  File "/usr/lib/python2.6/site-packages/nova/console/websocketproxy.py", line 47, in __init__
    ssl_target=None, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/websockify/websocketproxy.py", line 231, in __init__
    websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'no_parent'
Some research showed this is caused by the python-websockify version: OpenStack Icehouse needs python-websockify <= 0.5.1, but the install pulled in version 0.6.0 from EPEL by default. With the Icehouse repository configured, simply downgrade the package:
[root@controller ~]# yum list | grep websockify
python-websockify.noarch           0.5.1-1.el6           @openstack-icehouse
python-websockify.noarch           0.6.0-3.el6           epel
[root@controller ~]# yum downgrade python-websockify-0.5.1-1.el6.noarch
The above is just a summary of my own learning process and the problems I ran into!!!
Original article: http://ljbaby.blog.51cto.com/10002758/1702381