
OpenStack Installation (Liberty) -- Installing the Block Storage Service (cinder)



8. Install the Block Storage service (Block Storage service/cinder)   ### NOTE: time synchronization between nodes is essential

8.1 Prepare the environment as described in the environment-preparation section: hostname, /etc/hosts entries, time synchronization, firewall, SELinux, and the required OpenStack packages.
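
Before moving on, a quick sanity check of those prerequisites on every node can save debugging later. The commands below are a minimal sketch (they assume chrony as the NTP implementation and firewalld as the firewall, as elsewhere in this series), not part of the original walkthrough:

# time synchronization (chrony assumed) -- all nodes should agree
chronyc sources -v
# name resolution for the controller and storage nodes via /etc/hosts
getent hosts controller1 block1
# SELinux and firewall state as prepared earlier
getenforce
systemctl is-active firewalld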

8.2 Configure the controller node

8.2.1 Create the database and grant privileges

[root@comtroller1 ~]# mysql -uroot -p
Enter password:
MariaDB [(none)]> create database cinder;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'cinder';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'%' identified by 'cinder';
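
Optionally, you can confirm the grants work before continuing. This check (using the 'cinder' password set above) connects over the network, which also exercises the '%' grant:

# should return the cinder database without an access-denied error
mysql -ucinder -pcinder -h controller1 -e "show databases like 'cinder';"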

8.2.2 Create the cinder user, add it to the service project with the admin role, and create the service entities

[root@comtroller1 ~]# . admin-openrc.sh 
[root@comtroller1 ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 272ebc2639084f76b02e610e6f89cc36 |
| name      | cinder                           |
+-----------+----------------------------------+
[root@comtroller1 ~]# openstack role add --project service --user cinder admin
[root@comtroller1 ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 33fa43d4e0f14b209b7ee90ef1e424a4 |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@comtroller1 ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 0f48b8a432dc4701a73640fe68987d37 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+


8.2.3 Create the API endpoints

[root@comtroller1 ~]# openstack endpoint create --region RegionOne volume public http://controller1:8776/v1/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | ffcf850701fe4199bc40e0a3ad2b8b1b         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 33fa43d4e0f14b209b7ee90ef1e424a4         |
| service_name | cinder                                   |
| service_type | volume                                   |
| url          | http://controller1:8776/v1/%(tenant_id)s |
+--------------+------------------------------------------+
[root@comtroller1 ~]# openstack endpoint create --region RegionOne volume internal http://controller1:8776/v1/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | d040e9fa081741edb72e1a22caae54e0         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 33fa43d4e0f14b209b7ee90ef1e424a4         |
| service_name | cinder                                   |
| service_type | volume                                   |
| url          | http://controller1:8776/v1/%(tenant_id)s |
+--------------+------------------------------------------+
[root@comtroller1 ~]# openstack endpoint create --region RegionOne volume admin http://controller1:8776/v1/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 562a36debdff48d68070f31c11780038         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 33fa43d4e0f14b209b7ee90ef1e424a4         |
| service_name | cinder                                   |
| service_type | volume                                   |
| url          | http://controller1:8776/v1/%(tenant_id)s |
+--------------+------------------------------------------+
[root@comtroller1 ~]# openstack endpoint create --region RegionOne volumev2 public http://controller1:8776/v2/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 8215d81d0a1046059ac1581c598e1bde         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 0f48b8a432dc4701a73640fe68987d37         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller1:8776/v2/%(tenant_id)s |
+--------------+------------------------------------------+
[root@comtroller1 ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller1:8776/v2/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 0d4659dea7c14d75a56bbda84d9fd5c7         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 0f48b8a432dc4701a73640fe68987d37         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller1:8776/v2/%(tenant_id)s |
+--------------+------------------------------------------+
[root@comtroller1 ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller1:8776/v2/%\(tenant_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 615cf54933b04ca7a7a49bb06013abff         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 0f48b8a432dc4701a73640fe68987d37         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller1:8776/v2/%(tenant_id)s |
+--------------+------------------------------------------+
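
It is worth confirming that all six endpoints (public/internal/admin for both volume and volumev2) were registered before continuing. For example, if your openstack client supports filtering by service:

# list the endpoints just created
openstack endpoint list --service volume
openstack endpoint list --service volumev2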



8.2.4 Install and configure the components

[root@comtroller1 ~]# yum install openstack-cinder python-cinderclient -y
[root@comtroller1 ~]# vi /etc/cinder/cinder.conf 
[database]
connection = mysql://cinder:cinder@controller1/cinder
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = controller1
rabbit_userid = openstack
rabbit_password = openstack
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken] ### comment out or remove any other options in this section
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[DEFAULT]
my_ip = 10.0.0.11
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[DEFAULT] ## optional, useful for troubleshooting
verbose = True
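
If you prefer not to edit the file by hand, the same options can be set non-interactively. The sketch below assumes the crudini tool is installed and shows only a subset of the settings above:

# examples of setting cinder.conf options with crudini
crudini --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder@controller1/cinder
crudini --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host controller1
crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
crudini --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password openstack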


8.2.5 Populate (sync) the database

[root@comtroller1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
No handlers could be found for logger "oslo_config.cfg"
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
2016-08-03 08:24:21.797 2556 INFO migrate.versioning.api [-] 0 -> 1... 
2016-08-03 08:24:22.421 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:22.421 2556 INFO migrate.versioning.api [-] 1 -> 2... 
2016-08-03 08:24:22.584 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:22.584 2556 INFO migrate.versioning.api [-] 2 -> 3... 
2016-08-03 08:24:22.698 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:22.698 2556 INFO migrate.versioning.api [-] 3 -> 4... 
2016-08-03 08:24:22.917 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:22.917 2556 INFO migrate.versioning.api [-] 4 -> 5... 
2016-08-03 08:24:22.954 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:22.954 2556 INFO migrate.versioning.api [-] 5 -> 6... 
2016-08-03 08:24:22.986 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:22.986 2556 INFO migrate.versioning.api [-] 6 -> 7... 
2016-08-03 08:24:23.020 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.020 2556 INFO migrate.versioning.api [-] 7 -> 8... 
2016-08-03 08:24:23.042 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.042 2556 INFO migrate.versioning.api [-] 8 -> 9... 
2016-08-03 08:24:23.073 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.073 2556 INFO migrate.versioning.api [-] 9 -> 10... 
2016-08-03 08:24:23.102 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.102 2556 INFO migrate.versioning.api [-] 10 -> 11... 
2016-08-03 08:24:23.142 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.142 2556 INFO migrate.versioning.api [-] 11 -> 12... 
2016-08-03 08:24:23.170 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.171 2556 INFO migrate.versioning.api [-] 12 -> 13... 
2016-08-03 08:24:23.201 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.201 2556 INFO migrate.versioning.api [-] 13 -> 14... 
2016-08-03 08:24:23.230 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.230 2556 INFO migrate.versioning.api [-] 14 -> 15... 
2016-08-03 08:24:23.244 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.244 2556 INFO migrate.versioning.api [-] 15 -> 16... 
2016-08-03 08:24:23.274 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.274 2556 INFO migrate.versioning.api [-] 16 -> 17... 
2016-08-03 08:24:23.367 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.367 2556 INFO migrate.versioning.api [-] 17 -> 18... 
2016-08-03 08:24:23.453 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.453 2556 INFO migrate.versioning.api [-] 18 -> 19... 
2016-08-03 08:24:23.492 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.493 2556 INFO migrate.versioning.api [-] 19 -> 20... 
2016-08-03 08:24:23.522 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.522 2556 INFO migrate.versioning.api [-] 20 -> 21... 
2016-08-03 08:24:23.544 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.544 2556 INFO migrate.versioning.api [-] 21 -> 22... 
2016-08-03 08:24:23.577 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.577 2556 INFO migrate.versioning.api [-] 22 -> 23... 
2016-08-03 08:24:23.619 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.619 2556 INFO migrate.versioning.api [-] 23 -> 24... 
2016-08-03 08:24:23.694 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.694 2556 INFO migrate.versioning.api [-] 24 -> 25... 
2016-08-03 08:24:23.858 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.859 2556 INFO migrate.versioning.api [-] 25 -> 26... 
2016-08-03 08:24:23.873 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.873 2556 INFO migrate.versioning.api [-] 26 -> 27... 
2016-08-03 08:24:23.880 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.880 2556 INFO migrate.versioning.api [-] 27 -> 28... 
2016-08-03 08:24:23.888 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.888 2556 INFO migrate.versioning.api [-] 28 -> 29... 
2016-08-03 08:24:23.895 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.895 2556 INFO migrate.versioning.api [-] 29 -> 30... 
2016-08-03 08:24:23.903 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.903 2556 INFO migrate.versioning.api [-] 30 -> 31... 
2016-08-03 08:24:23.909 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.910 2556 INFO migrate.versioning.api [-] 31 -> 32... 
2016-08-03 08:24:23.962 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:23.963 2556 INFO migrate.versioning.api [-] 32 -> 33... 
/usr/lib64/python2.7/site-packages/sqlalchemy/sql/schema.py:2999: SAWarning: Table 'encryption' specifies columns 'volume_type_id' as primary_key=True, not matching locally specified columns 'encryption_id'; setting the current primary key columns to 'encryption_id'. This warning may become an exception in a future release
  ", ".join("'%s'" % c.name for c in self.columns)
2016-08-03 08:24:24.098 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.098 2556 INFO migrate.versioning.api [-] 33 -> 34... 
2016-08-03 08:24:24.155 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.155 2556 INFO migrate.versioning.api [-] 34 -> 35... 
2016-08-03 08:24:24.205 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.205 2556 INFO migrate.versioning.api [-] 35 -> 36... 
2016-08-03 08:24:24.246 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.247 2556 INFO migrate.versioning.api [-] 36 -> 37... 
2016-08-03 08:24:24.277 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.277 2556 INFO migrate.versioning.api [-] 37 -> 38... 
2016-08-03 08:24:24.328 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.329 2556 INFO migrate.versioning.api [-] 38 -> 39... 
2016-08-03 08:24:24.358 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.358 2556 INFO migrate.versioning.api [-] 39 -> 40... 
2016-08-03 08:24:24.508 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.508 2556 INFO migrate.versioning.api [-] 40 -> 41... 
2016-08-03 08:24:24.560 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.561 2556 INFO migrate.versioning.api [-] 41 -> 42... 
2016-08-03 08:24:24.567 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.567 2556 INFO migrate.versioning.api [-] 42 -> 43... 
2016-08-03 08:24:24.574 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.575 2556 INFO migrate.versioning.api [-] 43 -> 44... 
2016-08-03 08:24:24.581 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.581 2556 INFO migrate.versioning.api [-] 44 -> 45... 
2016-08-03 08:24:24.588 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.588 2556 INFO migrate.versioning.api [-] 45 -> 46... 
2016-08-03 08:24:24.597 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.597 2556 INFO migrate.versioning.api [-] 46 -> 47... 
2016-08-03 08:24:24.611 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.611 2556 INFO migrate.versioning.api [-] 47 -> 48... 
2016-08-03 08:24:24.673 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.674 2556 INFO migrate.versioning.api [-] 48 -> 49... 
2016-08-03 08:24:24.721 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.721 2556 INFO migrate.versioning.api [-] 49 -> 50... 
2016-08-03 08:24:24.754 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.754 2556 INFO migrate.versioning.api [-] 50 -> 51... 
2016-08-03 08:24:24.794 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.794 2556 INFO migrate.versioning.api [-] 51 -> 52... 
2016-08-03 08:24:24.836 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.836 2556 INFO migrate.versioning.api [-] 52 -> 53... 
2016-08-03 08:24:24.954 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.954 2556 INFO migrate.versioning.api [-] 53 -> 54... 
2016-08-03 08:24:24.987 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:24.987 2556 INFO migrate.versioning.api [-] 54 -> 55... 
2016-08-03 08:24:25.058 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:25.058 2556 INFO migrate.versioning.api [-] 55 -> 56... 
2016-08-03 08:24:25.064 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:25.064 2556 INFO migrate.versioning.api [-] 56 -> 57... 
2016-08-03 08:24:25.075 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:25.075 2556 INFO migrate.versioning.api [-] 57 -> 58... 
2016-08-03 08:24:25.082 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:25.082 2556 INFO migrate.versioning.api [-] 58 -> 59... 
2016-08-03 08:24:25.089 2556 INFO migrate.versioning.api [-] done
2016-08-03 08:24:25.089 2556 INFO migrate.versioning.api [-] 59 -> 60... 
2016-08-03 08:24:25.097 2556 INFO migrate.versioning.api [-] done
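
The warnings at the top of the output are generally harmless; what matters is that every migration reports "done". As an optional sanity check, you can confirm the schema was created, for example:

# the cinder database should now contain tables such as volumes and services
mysql -ucinder -pcinder cinder -e "show tables;" | head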



8.2.6 Configure the compute service to use Block Storage

[root@comtroller1 ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
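
This edit can also be scripted, for example with crudini (an optional alternative, not part of the original steps):

# tell nova which region to use for Block Storage
crudini --set /etc/nova/nova.conf cinder os_region_name RegionOne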


8.2.7 Start the services and enable them at boot

[root@comtroller1 ~]# systemctl restart openstack-nova-api.service
[root@comtroller1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@comtroller1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
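
A quick way to confirm the API came up is to check that something is listening on cinder-api's default port, 8776, and that both units are active (a sketch, not output from the original session):

ss -tnlp | grep 8776
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service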



8.3 Configure the block1 (cinder) storage node

8.3.1 Install and start the LVM service

[root@block1 ~]# yum install lvm2
[root@block1 ~]# systemctl is-enabled lvm2-lvmetad
disabled
[root@block1 ~]# systemctl enable lvm2-lvmetad.service
Created symlink from /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.service to /usr/lib/systemd/system/lvm2-lvmetad.service.
[root@block1 ~]# systemctl start lvm2-lvmetad.service
[root@block1 ~]# systemctl is-enabled lvm2-lvmetad
enabled

8.3.2 Create the LVM physical volume and volume group

[root@block1 ~]# ll /dev/sd*
brw-rw---- 1 root disk 8,  0 Aug  3 08:10 /dev/sda
brw-rw---- 1 root disk 8,  1 Aug  3 08:10 /dev/sda1
brw-rw---- 1 root disk 8,  2 Aug  3 08:10 /dev/sda2
brw-rw---- 1 root disk 8, 16 Aug  3 08:10 /dev/sdb
[root@block1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@block1 ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
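
Optionally verify the result before continuing; /dev/sdb should be the only physical volume in the group and all of its space should still be free:

pvs
vgs cinder-volumes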


8.3.3 Configure LVM device filtering

8.3.3.1 If the OS disk of the block storage node or of the compute node also uses LVM, each of those nodes needs the corresponding filter:

[root@block1 ~]# vi /etc/lvm/lvm.conf  ## block storage node
devices {
...
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
...
}
[root@compute1 ~]# vi /etc/lvm/lvm.conf  ## compute node
devices {
...
filter = [ "a/sda/", "r/.*/"]
...
}


8.3.3.2 If the OS disks do not use LVM, only the following is required, on the block storage node:

[root@block1 ~]# vi /etc/lvm/lvm.conf  ## block storage node
devices {
...
filter = [ "a/sdb/", "r/.*/"]
...
}
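
In these filters, entries beginning with a accept a device and entries beginning with r reject it; the trailing "r/.*/" rejects everything not explicitly accepted. After editing the filter you can confirm LVM still sees the intended devices, for example:

# on the storage node only /dev/sdb (plus the OS disk, if it uses LVM) should appear
pvscan
pvs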


8.3.4 Install and configure the components

[root@block1 ~]# yum install openstack-cinder targetcli python-oslo-policy -y 
[root@block1 ~]# vi /etc/cinder/cinder.conf
[database]
connection = mysql://cinder:cinder@controller1/cinder
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = controller1
rabbit_userid = openstack
rabbit_password = openstack
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[DEFAULT]
my_ip = 10.0.0.41
[lvm]  #### NOTE: the [lvm] section does not exist in the default configuration file and must be added; do not change the existing sections instead, otherwise volumes cannot be attached to instances
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[DEFAULT]
enabled_backends = lvm
[DEFAULT]
glance_host = controller1
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[DEFAULT]
verbose = True
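
As on the controller, crudini can create the [lvm] section for you if it does not exist yet (an optional alternative to manual editing):

# create the [lvm] backend section and enable it
crudini --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
crudini --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
crudini --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
crudini --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm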


8.3.5 Start the services and enable them at boot

[root@block1 ~]# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@block1 ~]# systemctl start openstack-cinder-volume.service target.service
[root@block1 ~]# systemctl status openstack-cinder-volume.service target.service


8.4 Verification

[root@comtroller1 ~]# . admin-openrc.sh 
[root@comtroller1 ~]# cinder service-list
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |     Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | comtroller1 | nova | enabled |   up  | 2016-08-03T02:16:50.000000 |        -        |
|  cinder-volume   |  block1@lvm | nova | enabled |   up  | 2016-08-03T02:16:47.000000 |        -        |
+------------------+-------------+------+---------+-------+----------------------------+-----------------+


8.5 Create a cinder volume and attach it to an instance

[root@comtroller1 ~]# source demo-openrc.sh 
[root@comtroller1 ~]# cinder create --display-name volume1 1 ## create a 1 GB volume
+---------------------------------------+--------------------------------------+
|                Property               |                Value                 |
+---------------------------------------+--------------------------------------+
|              attachments              |                  []                  |
|           availability_zone           |                 nova                 |
|                bootable               |                false                 |
|          consistencygroup_id          |                 None                 |
|               created_at              |      2016-08-03T02:19:41.000000      |
|              description              |                 None                 |
|               encrypted               |                False                 |
|                   id                  | 2797bd27-a039-4821-903a-760571365650 |
|                metadata               |                  {}                  |
|              multiattach              |                False                 |
|                  name                 |               volume1                |
|      os-vol-tenant-attr:tenant_id     |   db6bcde12cc947119ecab8c211fa4f35   |
|   os-volume-replication:driver_data   |                 None                 |
| os-volume-replication:extended_status |                 None                 |
|           replication_status          |               disabled               |
|                  size                 |                  1                   |
|              snapshot_id              |                 None                 |
|              source_volid             |                 None                 |
|                 status                |               creating               |
|                user_id                |   3361e8c44fc94b63ac44049542129edc   |
|              volume_type              |                 None                 |
+---------------------------------------+--------------------------------------+
[root@comtroller1 ~]# cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  |   Name  | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
| 2797bd27-a039-4821-903a-760571365650 | available | volume1 |  1   |      -      |  false   |    False    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
[root@comtroller1 ~]# nova list
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                          |
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
| 4aa43e3a-c963-4a53-b500-78fa6a6872c5 | private-instance | ACTIVE | -          | Running     | private=172.16.1.3, 192.168.1.242 |
+--------------------------------------+------------------+--------+------------+-------------+-----------------------------------+
[root@comtroller1 ~]# nova volume-attach private-instance 02b07808-8538-4ac2-9f23-57c7e3a23132
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 02b07808-8538-4ac2-9f23-57c7e3a23132 |
| serverId | 4aa43e3a-c963-4a53-b500-78fa6a6872c5 |
| volumeId | 02b07808-8538-4ac2-9f23-57c7e3a23132 |
+----------+--------------------------------------+
[root@comtroller1 ~]# nova volume-list
WARNING: Command volume-list is deprecated and will be removed after Nova 13.0.0 is released. Use python-cinderclient or openstackclient instead.
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| 02b07808-8538-4ac2-9f23-57c7e3a23132 | in-use | volume1      | 1    | -           | 4aa43e3a-c963-4a53-b500-78fa6a6872c5 |
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
[root@comtroller1 ~]# ssh cirros@192.168.1.242
$ sudo fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     2088449     1036192+  83  Linux
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/vdb doesn't contain a valid partition table
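
The missing partition table confirms that /dev/vdb is the newly attached, still-empty volume. To actually use it inside the guest you would create a filesystem and mount it; the sketch below assumes the guest provides mkfs.ext4 (the minimal CirrOS image may not) and will erase anything on /dev/vdb:

$ sudo mkfs.ext4 /dev/vdb
$ sudo mkdir -p /mnt/volume1
$ sudo mount /dev/vdb /mnt/volume1
$ df -h /mnt/volume1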


This article originates from the "Eshin" blog; please retain the source link: http://eshin.blog.51cto.com/422646/1834364
