Tags: ceph kilo mon openstack integration
Chapter 7 Troubleshooting Problems Encountered When Integrating Ceph with OpenStack

7.1 An Error Hunt Triggered by a Log Entry

1) Origin of the Ceph issue

See the error log at http://bbs.ceph.org.cn/question/161

2) Locate the relevant code: vim nova/virt/libvirt/driver.py, around line 3090

************************
def _get_guest_disk_config(self, instance, name, disk_mapping, inst_type,
                           image_type=None):
    if CONF.libvirt.hw_disk_discard:
        if not self._host.has_min_version(MIN_LIBVIRT_DISCARD_VERSION,
                                          MIN_QEMU_DISCARD_VERSION,
                                          REQ_HYPERVISOR_DISCARD):
            msg = (_('Volume sets discard option, but libvirt %(libvirt)s'
                     ' or later is required, qemu %(qemu)s'
                     ' or later is required.') %
                   {'libvirt': MIN_LIBVIRT_DISCARD_VERSION,
                    'qemu': MIN_QEMU_DISCARD_VERSION})
            raise exception.Invalid(msg)
    else:
        pass

    image = self.image_backend.image(instance, name, image_type)
    disk_info = disk_mapping[name]
    return image.libvirt_info(disk_info['bus'],
                              disk_info['dev'],
                              disk_info['type'],
                              self.disk_cachemode,
                              inst_type['extra_specs'],
                              self._host.get_version())
************************

For comparison, the method after the change (the hw_disk_discard version check is removed):

************************
def _get_guest_disk_config(self, instance, name, disk_mapping, inst_type,
                           image_type=None):
    image = self.image_backend.image(instance, name, image_type)
    disk_info = disk_mapping[name]
    return image.libvirt_info(disk_info['bus'],
                              disk_info['dev'],
                              disk_info['type'],
                              self.disk_cachemode,
                              inst_type['extra_specs'],
                              self._host.get_version())
********************************

3) Create an instance again and check whether any other error is reported:

2015-08-11 01:47:10.456 82044 ERROR nova.virt.libvirt.driver [req-27f7ade9-3142-4ec6-815d-84488c6e0201 - - - - -] Error launching a defined domain with XML: <domain type='kvm'>
  <name>instance-000000d9</name>
  <uuid>4b525800-3e00-4b48-a997-c104f919cde3</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.el7"/>
      <nova:name>exta</nova:name>
      <nova:creationTime>2015-08-10 17:47:09</nova:creationTime>
      <nova:flavor name="linux-8-8-50">
        <nova:memory>8192</nova:memory>
        <nova:disk>120</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>50</nova:ephemeral>
        <nova:vcpus>8</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="95a96f0ddcf449239c6682a3c310857e">root</nova:user>
        <nova:project uuid="be27eb2862904a0f9c636c337f66709c">admin</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="59e1c70b-70c8-4c22-9253-fc889f94d891"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static' cpuset='0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38'>8</vcpu>
  <cputune>
    <shares>8192</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Fedora Project</entry>
      <entry name='product'>OpenStack Nova</entry>
      <entry name='version'>2015.1.0-3.el7</entry>
      <entry name='serial'>09c6f9d1-825f-43e2-8774-5ed6705af12b</entry>
      <entry name='uuid'>4b525800-3e00-4b48-a997-c104f919cde3</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='8' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='vms/4b525800-3e00-4b48-a997-c104f919cde3_disk'>
        <host name='192.168.103.211' port='6789'/>
        <host name='192.168.103.212' port='6789'/>
        <host name='192.168.103.214' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='vms/4b525800-3e00-4b48-a997-c104f919cde3_disk.local'>
        <host name='192.168.103.211' port='6789'/>
        <host name='192.168.103.212' port='6789'/>
        <host name='192.168.103.214' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='bridge'>
      <mac address='fa:16:3e:19:60:f5'/>
      <source bridge='br100'/>
      <model type='virtio'/>
      <filterref filter='nova-instance-instance-000000d9-fa163e1960f5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/data/nova/instances/4b525800-3e00-4b48-a997-c104f919cde3/console.log'/>
      <target port='0'/>
    </serial>
    <serial type='pty'>
      <target port='1'/>
    </serial>
    <console type='file'>
      <source path='/data/nova/instances/4b525800-3e00-4b48-a997-c104f919cde3/console.log'/>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
      <stats period='10'/>
    </memballoon>
  </devices>
</domain>

2015-08-11 01:47:10.457 82044 ERROR nova.compute.manager [req-27f7ade9-3142-4ec6-815d-84488c6e0201 - - - - -] [instance: 4b525800-3e00-4b48-a997-c104f919cde3] Instance failed to spawn
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3] Traceback (most recent call last):
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2442, in _build_resources
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     yield resources
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2314, in _build_and_run_instance
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     block_device_info=block_device_info)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2354, in spawn
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     block_device_info=block_device_info)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4380, in _create_domain_and_network
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     power_on=power_on)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4311, in _create_domain
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     LOG.error(err)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     six.reraise(self.type_, self.value, self.tb)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance:
4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4301, in _create_domain
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     domain.createWithFlags(launch_flags)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     rv = execute(f, *args, **kwargs)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     six.reraise(c, e, tb)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     rv = meth(*args, **kwargs)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 996, in createWithFlags
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2015-08-11 01:47:10.457 82044 TRACE
nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3] libvirtError: internal error: Could not get access to ACL tech driver 'ebiptables'
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]
2015-08-11 01:47:10.460 82044 INFO nova.compute.manager [req-210e07f7-cba3-4dec-b2d5-b7be95c2f559 95a96f0ddcf449239c6682a3c310857e be27eb2862904a0f9c636c337f66709c - - -] [instance: 4b525800-3e00-4b48-a997-c104f919cde3] Terminating instance

Problem analysis: set a breakpoint in the code, then start the instance manually.

vim nova/virt/libvirt/driver.py, line 4284:

***********************************************
def _create_domain(self, xml=None, domain=None,
                   instance=None, launch_flags=0, power_on=True):
    """Create a domain.

    Either domain or xml must be passed in. If both are passed, then
    the domain definition is overwritten from the xml.
    """
    import ipdb; ipdb.set_trace()
    err = None
    try:
        if xml:
            err = _LE('Error defining a domain with XML: %s') % xml
            domain = self._conn.defineXML(xml)

        if power_on:
            err = _LE('Error launching a defined domain with XML: %s'
                      ) % encodeutils.safe_decode(domain.XMLDesc(0),
                                                  errors='ignore')
            domain.createWithFlags(launch_flags)

        if not utils.is_neutron():
            err = _LE('Error enabling hairpin mode with XML: %s'
                      ) % encodeutils.safe_decode(domain.XMLDesc(0),
                                                  errors='ignore')
            self._enable_hairpin(domain.XMLDesc(0))
***********************************************

Stop the nova-compute service and run it under ipdb:

service openstack-nova-compute stop
ipdb /usr/bin/nova-compute --config-file=/etc/nova/nova.conf

Get the instance UUIDs and check whether the images were created in the Ceph vms pool:

[root@athcontroller103210 nova]# rbd ls vms
175dc9db-2409-4b34-b6ca-efc0a1788687_disk
175dc9db-2409-4b34-b6ca-efc0a1788687_disk.local
5220f145-73d1-4831-872f-cfd32b09dd20_disk
5220f145-73d1-4831-872f-cfd32b09dd20_disk.local
847c6dea-a887-4fad-8acd-36f13bc29b57_disk
847c6dea-a887-4fad-8acd-36f13bc29b57_disk.local
d0adb3ea-7c4d-44d5-b4d6-a9a02a3c3468_disk
d0adb3ea-7c4d-44d5-b4d6-a9a02a3c3468_disk.local

Start the instance manually: enter /data/nova/instances/UUID/ and run:

virsh create libvirt.xml
Startup fails with:

2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2015-08-11 01:47:10.457 82044 TRACE nova.compute.manager [instance: 4b525800-3e00-4b48-a997-c104f919cde3] libvirtError: internal error: Could not get access to ACL tech driver 'ebiptables'

Based on this error message, work around the instance nwfilter problem by dropping the filter from the domain definition and writing it to the log instead; the ebiptables functionality can be fixed properly later.

***********************************************
vim nova/virt/libvirt/config.py, line 1196:

1196 if self.filtername is not None:
1197     filter = etree.Element("filterref", filter=self.filtername)
1198     for p in self.filterparams:
1199         filter.append(etree.Element("parameter",
1200                                     name=p['key'],
1201                                     value=p['value']))
1202     #dev.append(filter)
1203     LOG.info("Add William: Delete nwfilter rule %s" % filter)
***********************************************

Re-enter ipdb mode and create 3 instances:

2015-08-11 03:06:39.834 122763 INFO nova.scheduler.client.report [-] Compute_service record updated for ('athcontroller103210.sjz.autohome.com.cn', 'athcontroller103210.sjz.autohome.com.cn')
2015-08-11 03:06:39.956 122763 INFO nova.scheduler.client.report [-] Compute_service record updated for ('athcontroller103210.sjz.autohome.com.cn', 'athcontroller103210.sjz.autohome.com.cn')
2015-08-11 03:06:40.064 122763 INFO nova.scheduler.client.report [-] Compute_service record updated for ('athcontroller103210.sjz.autohome.com.cn', 'athcontroller103210.sjz.autohome.com.cn')
2015-08-11 03:07:23.798 122763 INFO nova.virt.libvirt.config [req-0de0fe4b-7c21-4e4f-9172-a63c65c26bd8 - - - - -] Add William: Delete nwfilter rule <Element filterref at 0x4eb3fa0>
2015-08-11 03:07:23.814 122763 INFO nova.virt.libvirt.firewall [req-0de0fe4b-7c21-4e4f-9172-a63c65c26bd8 - - - - -] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] Called setup_basic_filtering in nwfilter
2015-08-11 03:07:23.815 122763 INFO nova.virt.libvirt.firewall [req-0de0fe4b-7c21-4e4f-9172-a63c65c26bd8 - - - - -] [instance:
175dc9db-2409-4b34-b6ca-efc0a1788687] Ensuring static filters

> /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py(4292)_create_domain()
   4291     import ipdb;ipdb.set_trace()
-> 4292     err = None
   4293     try:
ipdb> c

2015-08-11 03:08:02.499 122763 INFO nova.compute.resource_tracker [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Auditing locally available compute resources for node athcontroller103210.sjz.autohome.com.cn
2015-08-11 03:08:02.922 122763 INFO nova.compute.resource_tracker [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Total usable vcpus: 40, total allocated vcpus: 8
2015-08-11 03:08:02.923 122763 INFO nova.compute.resource_tracker [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Final resource view: name=athcontroller103210.sjz.autohome.com.cn phys_ram=257680MB used_ram=25088MB phys_disk=30137GB used_disk=510GB total_vcpus=40 used_vcpus=8 pci_stats=<nova.pci.stats.PciDeviceStats object at 0x4c0c450>
2015-08-11 03:08:02.939 122763 INFO nova.scheduler.client.report [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Compute_service record updated for ('athcontroller103210.sjz.autohome.com.cn', 'athcontroller103210.sjz.autohome.com.cn')
2015-08-11 03:08:02.939 122763 INFO nova.compute.resource_tracker [req-28fc26e5-9ded-4394-a211-494c3cd20d87 - - - - -] Compute_service record updated for athcontroller103210.sjz.autohome.com.cn:athcontroller103210.sjz.autohome.com.cn
2015-08-11 03:08:03.071 122763 INFO nova.virt.libvirt.config [req-3a9ea8b7-a740-4f02-a094-dabd8b2d96b3 - - - - -] Add William: Delete nwfilter rule <Element filterref at 0x5204870>
2015-08-11 03:08:03.072 122763 INFO nova.virt.libvirt.firewall [req-3a9ea8b7-a740-4f02-a094-dabd8b2d96b3 - - - - -] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] Called setup_basic_filtering in nwfilter
2015-08-11 03:08:03.073 122763 INFO nova.virt.libvirt.firewall [req-3a9ea8b7-a740-4f02-a094-dabd8b2d96b3 - - - - -] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] Ensuring static filters
2015-08-11 03:08:03.109 122763 INFO nova.virt.libvirt.config [req-871ea11a-af06-4944-ab84-101babc1351d - - - - -] Add William: Delete nwfilter rule <Element filterref at 0x5204870>
2015-08-11 03:08:03.110 122763 INFO nova.virt.libvirt.firewall [req-871ea11a-af06-4944-ab84-101babc1351d - - - - -] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] Called setup_basic_filtering in nwfilter
2015-08-11 03:08:03.110 122763 INFO nova.virt.libvirt.firewall [req-871ea11a-af06-4944-ab84-101babc1351d - - - - -] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] Ensuring static filters
2015-08-11 03:08:03.260 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] VM Started (Lifecycle Event)
2015-08-11 03:08:03.287 122763 INFO nova.virt.libvirt.driver [-] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] Instance spawned successfully.

> /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py(4292)_create_domain()
   4291     import ipdb;ipdb.set_trace()
-> 4292     err = None
   4293     try:
ipdb> c

2015-08-11 03:08:05.152 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 175dc9db-2409-4b34-b6ca-efc0a1788687] During sync_power_state the instance has a pending task (spawning). Skip.

> /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py(4292)_create_domain()
   4291     import ipdb;ipdb.set_trace()
-> 4292     err = None
   4293     try:
ipdb> c

2015-08-11 03:08:06.597 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] VM Started (Lifecycle Event)
2015-08-11 03:08:06.613 122763 INFO nova.virt.libvirt.driver [-] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] Instance spawned successfully.
2015-08-11 03:08:06.670 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 5220f145-73d1-4831-872f-cfd32b09dd20] During sync_power_state the instance has a pending task (spawning). Skip.
2015-08-11 03:08:07.438 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] VM Started (Lifecycle Event)
2015-08-11 03:08:07.453 122763 INFO nova.virt.libvirt.driver [-] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] Instance spawned successfully.
2015-08-11 03:08:07.529 122763 INFO nova.compute.manager [req-e26e03ea-5ed6-4a47-8c18-15fa343eb505 - - - - -] [instance: 847c6dea-a887-4fad-8acd-36f13bc29b57] During sync_power_state the instance has a pending task (spawning). Skip.

All three instances were created successfully.
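The config.py workaround above comments out dev.append(filter), so the <filterref> element never reaches the generated domain XML. When patching Nova is not an option, the same effect can be achieved by post-processing an already-generated libvirt.xml before handing it to virsh create. Below is a minimal sketch using only Python's standard library; the function name strip_filterref and the shortened sample XML are mine for illustration, not part of Nova:

```python
import xml.etree.ElementTree as ET

def strip_filterref(domain_xml):
    """Remove every <filterref> child from <interface> elements in a
    libvirt domain XML string, mirroring the config.py workaround:
    the interface itself is kept, only the nwfilter reference is dropped."""
    root = ET.fromstring(domain_xml)
    # Materialize the iterator first, since we mutate children below.
    for iface in list(root.iter("interface")):
        for ref in iface.findall("filterref"):
            iface.remove(ref)
    return ET.tostring(root, encoding="unicode")

# Hypothetical trimmed-down domain XML, based on the dump above.
sample = """<domain type='kvm'>
  <devices>
    <interface type='bridge'>
      <mac address='fa:16:3e:19:60:f5'/>
      <source bridge='br100'/>
      <filterref filter='nova-instance-instance-000000d9-fa163e1960f5'/>
    </interface>
  </devices>
</domain>"""

cleaned = strip_filterref(sample)
```

Note that, just like the blog's patch, this only sidesteps the ebiptables error: the instance boots, but the iptables/ebtables rules that nwfilter would have installed are simply gone, so any security filtering they provided must be handled elsewhere.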
This article is from the "欢迎评论,欢迎点赞" blog; please keep this attribution when reposting: http://swq499809608.blog.51cto.com/797714/1683494