
3. Ceph installation package details and ceph-deploy process output

Posted: 2015-09-11 17:38:48

Tags: ceph dependency packages

1: Packages and dependencies required to install Ceph

Installing:
[master][DEBUG ]  ceph                                x86_64 1:0.94.3-0.el6    Ceph         21 M
[master][DEBUG ]  ceph-radosgw                        x86_64 1:0.94.3-0.el6    Ceph        2.3 M
[master][DEBUG ] Installing for dependencies:
[master][DEBUG ]  boost-program-options               x86_64 1.41.0-27.el6     base        108 k
[master][DEBUG ]  boost-system                        x86_64 1.41.0-27.el6     base         26 k
[master][DEBUG ]  boost-thread                        x86_64 1.41.0-27.el6     base         43 k
[master][DEBUG ]  ceph-common                         x86_64 1:0.94.3-0.el6    Ceph        6.8 M
[master][DEBUG ]  fcgi                                x86_64 2.4.0-10.el6      Ceph         40 k
[master][DEBUG ]  gdisk                               x86_64 0.8.2-1.el6       Ceph        163 k
[master][DEBUG ]  gperftools-libs                     x86_64 2.0-11.el6.3      Ceph        246 k
[master][DEBUG ]  leveldb                             x86_64 1.7.0-2.el6       Ceph        158 k
[master][DEBUG ]  libcephfs1                          x86_64 1:0.94.3-0.el6    Ceph        1.9 M
[master][DEBUG ]  libicu                              x86_64 4.2.1-12.el6      base        4.9 M
[master][DEBUG ]  librados2                           x86_64 1:0.94.3-0.el6    Ceph        1.6 M
[master][DEBUG ]  librbd1                             x86_64 1:0.94.3-0.el6    Ceph        1.8 M
[master][DEBUG ]  libunwind                           x86_64 1.1-3.el6         epel         55 k
[master][DEBUG ]  lttng-ust                           x86_64 2.4.1-1.el6       epel        162 k
[master][DEBUG ]  python-argparse                     noarch 1.2.1-2.el6       Ceph-noarch  48 k
[master][DEBUG ]  python-babel                        noarch 0.9.4-5.1.el6     base        1.4 M
[master][DEBUG ]  python-backports                    x86_64 1.0-5.el6         base        5.5 k
[master][DEBUG ]  python-backports-ssl_match_hostname noarch 3.4.0.2-1.el6     Ceph-noarch  12 k
[master][DEBUG ]  python-cephfs                       x86_64 1:0.94.3-0.el6    Ceph         11 k
[master][DEBUG ]  python-chardet                      noarch 2.0.1-1.el6       Ceph-noarch 225 k
[master][DEBUG ]  python-docutils                     noarch 0.6-1.el6         base        1.3 M
[master][DEBUG ]  python-flask                        noarch 1:0.9-5.el6       Ceph-noarch 190 k
[master][DEBUG ]  python-imaging                      x86_64 1.1.6-19.el6      base        388 k
[master][DEBUG ]  python-jinja2                       x86_64 2.2.1-2.el6_5     base        466 k
[master][DEBUG ]  python-jinja2-26                    noarch 2.6-2.el6         Ceph-noarch 526 k
[master][DEBUG ]  python-markupsafe                   x86_64 0.9.2-4.el6       base         22 k
[master][DEBUG ]  python-ordereddict                  noarch 1.1-2.el6         Ceph-noarch 7.6 k
[master][DEBUG ]  python-pygments                     noarch 1.1.1-1.el6       base        562 k
[master][DEBUG ]  python-rados                        x86_64 1:0.94.3-0.el6    Ceph         29 k
[master][DEBUG ]  python-rbd                          x86_64 1:0.94.3-0.el6    Ceph         18 k
[master][DEBUG ]  python-requests                     noarch 1.1.0-4.el6       Ceph-noarch  71 k
[master][DEBUG ]  python-six                          noarch 1.4.1-1.el6       Ceph-noarch  22 k
[master][DEBUG ]  python-sphinx                       noarch 0.6.6-2.el6       base        487 k
[master][DEBUG ]  python-urllib3                      noarch 1.5-7.el6         Ceph-noarch  41 k
[master][DEBUG ]  python-werkzeug                     noarch 0.8.3-2.el6       Ceph-noarch 552 k
[master][DEBUG ]  userspace-rcu                       x86_64 0.7.7-1.el6       epel         60 k
[master][DEBUG ]  xfsprogs                            x86_64 3.1.1-14_ceph.el6 Ceph        724 k
[master][DEBUG ]
[master][DEBUG ] Transaction Summary
[master][DEBUG ] ================================================================================
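The table lists each package's download size in yum's `k`/`M` units. As a rough sanity check, those size strings can be parsed and summed; the sketch below uses a hypothetical `parse_size` helper and a few rows copied from the table above (it is not part of yum or ceph-deploy):

```python
# Sketch: total up package sizes from the yum transaction table above.
# parse_size() and the sample rows are illustrative helpers, nothing more.

def parse_size(text: str) -> float:
    """Convert a yum size string like '21 M' or '108 k' to bytes."""
    value, unit = text.split()
    factor = {"k": 1024, "M": 1024 ** 2}[unit]
    return float(value) * factor

# A few rows copied from the table above.
rows = [
    ("ceph", "21 M"),
    ("ceph-radosgw", "2.3 M"),
    ("boost-program-options", "108 k"),
]

total = sum(parse_size(size) for _, size in rows)
print(f"{total / 1024 ** 2:.1f} MiB")  # combined download size of the sample rows
```

Extending `rows` with the full dependency list gives the cluster-wide download footprint for one node.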

2: Output of each ceph-deploy step

--------------------------------------------------------------------------------------------------------
yum install ceph-deploy
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
epel/metalink                                                                                | 3.9 kB     00:00     
 * base: mirrors.btte.net
 * epel: mirrors.ustc.edu.cn
 * extras: mirror.bit.edu.cn
 * updates: mirror.bit.edu.cn
epel                                                                                         | 4.3 kB     00:00     
epel/primary_db                                                                              | 5.7 MB     00:04     
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:1.5.26-0 will be installed
--> Processing Dependency: python-argparse for package: ceph-deploy-1.5.26-0.noarch
--> Running transaction check
---> Package python-argparse.noarch 0:1.2.1-2.1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================
 Package                        Arch                  Version                      Repository                  Size
====================================================================================================================
Installing:
 ceph-deploy                    noarch                1.5.26-0                     Ceph-noarch                279 k
Installing for dependencies:
 python-argparse                noarch                1.2.1-2.1.el6                base                        48 k

Transaction Summary
====================================================================================================================
Install       2 Package(s)

Total download size: 327 k
Installed size: 1.3 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): ceph-deploy-1.5.26-0.noarch.rpm                                                       | 279 kB     00:00     
(2/2): python-argparse-1.2.1-2.1.el6.noarch.rpm                                              |  48 kB     00:00     
--------------------------------------------------------------------------------------------------------------------
Total                                                                               5.6 MB/s | 327 kB     00:00     
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : python-argparse-1.2.1-2.1.el6.noarch                                                             1/2
  Installing : ceph-deploy-1.5.26-0.noarch                                                                      2/2
  Verifying  : ceph-deploy-1.5.26-0.noarch                                                                      1/2
  Verifying  : python-argparse-1.2.1-2.1.el6.noarch                                                             2/2

Installed:
  ceph-deploy.noarch 0:1.5.26-0                                                                                     

Dependency Installed:
  python-argparse.noarch 0:1.2.1-2.1.el6                                                                            

Complete!

--------------------------------------------------------------------------------------------------------------
ceph-deploy install master osd1 osd2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.26): /usr/bin/ceph-deploy install master osd1 osd2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x19d3830>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  prog                          : ceph-deploy
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x195ecf8>
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['master', 'osd1', 'osd2']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts master osd1 osd2
[ceph_deploy.install][DEBUG ] Detecting platform for host master ...
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS 6.5 Final
[master][INFO  ] installing ceph on master
[master][INFO  ] Running command: yum clean all
[master][DEBUG ] Loaded plugins: fastestmirror, priorities, security
[master][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[master][DEBUG ] Cleaning up Everything
[master][DEBUG ] Cleaning up list of fastest mirrors
[master][INFO  ] adding EPEL repository
[master][INFO  ] Running command: yum -y install epel-release
[master][DEBUG ] Loaded plugins: fastestmirror, priorities, security
[master][DEBUG ] Determining fastest mirrors
[master][DEBUG ]  * base: mirrors.yun-idc.com
[master][DEBUG ]  * epel: mirror.premi.st
[master][DEBUG ]  * extras: mirrors.yun-idc.com
[master][DEBUG ]  * updates: mirrors.yun-idc.com
[master][DEBUG ] 63 packages excluded due to repository priority protections
[master][DEBUG ] Setting up Install Process
[master][DEBUG ] Package epel-release-6-8.noarch already installed and latest version
[master][DEBUG ] Nothing to do
[master][INFO  ] Running command: yum -y install yum-plugin-priorities
[master][DEBUG ] Loaded plugins: fastestmirror, priorities, security
[master][DEBUG ] Loading mirror speeds from cached hostfile
[master][DEBUG ]  * base: mirrors.yun-idc.com
[master][DEBUG ]  * epel: mirror.premi.st
[master][DEBUG ]  * extras: mirrors.yun-idc.com
[master][DEBUG ]  * updates: mirrors.yun-idc.com
[master][DEBUG ] 63 packages excluded due to repository priority protections
[master][DEBUG ] Setting up Install Process
[master][DEBUG ] Package yum-plugin-priorities-1.1.30-30.el6.noarch already installed and latest version
[master][DEBUG ] Nothing to do
[master][DEBUG ] Configure Yum priorities to include obsoletes
[master][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[master][INFO  ] Running command: rpm --import https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[master][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-hammer/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[master][DEBUG ] Retrieving http://ceph.com/rpm-hammer/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[master][DEBUG ] Preparing...                ##################################################
[master][DEBUG ] ceph-release                ##################################################
[master][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[master][WARNIN] altered ceph.repo priorities to contain: priority=1
[master][INFO  ] Running command: yum -y install ceph ceph-radosgw
[master][DEBUG ] Loaded plugins: fastestmirror, priorities, security
[master][DEBUG ] Loading mirror speeds from cached hostfile
[master][DEBUG ]  * base: mirrors.yun-idc.com
[master][DEBUG ]  * epel: mirror.premi.st
[master][DEBUG ]  * extras: mirrors.yun-idc.com
[master][DEBUG ]  * updates: mirrors.yun-idc.com
[master][DEBUG ] 63 packages excluded due to repository priority protections
[master][DEBUG ] Setting up Install Process
[master][DEBUG ] Package 1:ceph-0.94.3-0.el6.x86_64 already installed and latest version
[master][DEBUG ] Package 1:ceph-radosgw-0.94.3-0.el6.x86_64 already installed and latest version
[master][DEBUG ] Nothing to do
[master][INFO  ] Running command: ceph --version
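During the install step, ceph-deploy imports the release key, installs `ceph-release`, and then (per the WARNIN lines) rewrites `/etc/yum.repos.d/ceph.repo` to carry `priority=1` so the Ceph repo outranks base/epel. A minimal sketch of what such a repo file looks like follows; the exact contents are an assumption, with only the hammer/el6 baseurl and the `priority=1` line taken from the log itself:

```python
# Sketch of a ceph.repo stanza after ceph-deploy forces priority=1.
# Field values other than baseurl and priority are assumptions.

def render_ceph_repo(baseurl: str, priority: int = 1) -> str:
    return "\n".join([
        "[Ceph]",
        "name=Ceph packages",
        f"baseurl={baseurl}",
        "enabled=1",
        "gpgcheck=1",
        f"priority={priority}",  # keeps Ceph packages ahead of base/epel mirrors
    ]) + "\n"

repo = render_ceph_repo("http://ceph.com/rpm-hammer/el6/$basearch")
print(repo)
```

The `priority` field only takes effect because the log shows `yum-plugin-priorities` already installed on the host.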



-------------------------------------------------------------------------------------------------------------
ceph-deploy  mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.26): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  prog                          : ceph-deploy
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1896fc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x1891398>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts master
[ceph_deploy.mon][DEBUG ] detecting platform for host master ...
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[ceph_deploy.mon][INFO  ] distro info: CentOS 6.5 Final
[master][DEBUG ] determining if provided host has same hostname in remote
[master][DEBUG ] get remote short hostname
[master][DEBUG ] deploying mon to master
[master][DEBUG ] get remote short hostname
[master][DEBUG ] remote hostname: master
[master][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[master][DEBUG ] create the mon path if it does not exist
[master][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-master/done
[master][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-master/done
[master][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-master.mon.keyring
[master][DEBUG ] create the monitor keyring file
[master][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i master --keyring /var/lib/ceph/tmp/ceph-master.mon.keyring
[master][DEBUG ] ceph-mon: mon.noname-a 10.0.0.21:6789/0 is local, renaming to mon.master
[master][DEBUG ] ceph-mon: set fsid to e2345a18-c1e1-4079-8dc6-25285231e09d
[master][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-master for mon.master
[master][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-master.mon.keyring
[master][DEBUG ] create a done file to avoid re-doing the mon deployment
[master][DEBUG ] create the init path if it does not exist
[master][DEBUG ] locating the `service` executable...
[master][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.master
[master][DEBUG ] === mon.master ===
[master][DEBUG ] Starting Ceph mon.master on master...
[master][DEBUG ] Starting ceph-create-keys on master...
[master][INFO  ] Running command: chkconfig ceph on
[master][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
[master][DEBUG ] ********************************************************************************
[master][DEBUG ] status for monitor: mon.master
[master][DEBUG ] {
[master][DEBUG ]   "election_epoch": 2,
[master][DEBUG ]   "extra_probe_peers": [],
[master][DEBUG ]   "monmap": {
[master][DEBUG ]     "created": "0.000000",
[master][DEBUG ]     "epoch": 1,
[master][DEBUG ]     "fsid": "e2345a18-c1e1-4079-8dc6-25285231e09d",
[master][DEBUG ]     "modified": "0.000000",
[master][DEBUG ]     "mons": [
[master][DEBUG ]       {
[master][DEBUG ]         "addr": "10.0.0.21:6789/0",
[master][DEBUG ]         "name": "master",
[master][DEBUG ]         "rank": 0
[master][DEBUG ]       }
[master][DEBUG ]     ]
[master][DEBUG ]   },
[master][DEBUG ]   "name": "master",
[master][DEBUG ]   "outside_quorum": [],
[master][DEBUG ]   "quorum": [
[master][DEBUG ]     0
[master][DEBUG ]   ],
[master][DEBUG ]   "rank": 0,
[master][DEBUG ]   "state": "leader",
[master][DEBUG ]   "sync_provider": []
[master][DEBUG ] }
[master][DEBUG ] ********************************************************************************
[master][INFO  ] monitor: mon.master is running
[master][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.master
[master][DEBUG ] connected to host: master
[master][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
[ceph_deploy.mon][INFO  ] mon.master monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][DEBUG ] Checking master for /etc/ceph/ceph.client.admin.keyring
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[master][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from master.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking master for /var/lib/ceph/bootstrap-osd/ceph.keyring
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[master][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from master.
[ceph_deploy.gatherkeys][DEBUG ] Checking master for /var/lib/ceph/bootstrap-mds/ceph.keyring
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[master][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from master.
[ceph_deploy.gatherkeys][DEBUG ] Checking master for /var/lib/ceph/bootstrap-rgw/ceph.keyring
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[master][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-rgw.keyring key from master.
Error in sys.exitfunc:
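The `mon_status` JSON printed above can be checked programmatically rather than read by eye. In practice the JSON would come from the admin-socket call shown in the log (`ceph --admin-daemon ... mon_status`); the sketch below simply pastes in the fields from the output above:

```python
import json

# Quorum check against the mon_status output shown above (pasted in as a
# literal; a real script would capture it from the admin-socket command).
mon_status = json.loads("""
{
  "election_epoch": 2,
  "monmap": {"epoch": 1,
             "fsid": "e2345a18-c1e1-4079-8dc6-25285231e09d",
             "mons": [{"addr": "10.0.0.21:6789/0", "name": "master", "rank": 0}]},
  "name": "master",
  "quorum": [0],
  "rank": 0,
  "state": "leader"
}
""")

in_quorum = mon_status["rank"] in mon_status["quorum"]
print(mon_status["state"], in_quorum)  # a healthy single-mon cluster: leader True
```

With a single monitor, rank 0 must appear in `quorum` and the state must be `leader`; anything else means create-initial did not converge.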

------------------------------------------------------------------------------------------------------------------------------------------
ceph-deploy  mon create master
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.26): /usr/bin/ceph-deploy mon create master
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  prog                          : ceph-deploy
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x15aafc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['master']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x15a5398>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts master
[ceph_deploy.mon][DEBUG ] detecting platform for host master ...
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[ceph_deploy.mon][INFO  ] distro info: CentOS 6.5 Final
[master][DEBUG ] determining if provided host has same hostname in remote
[master][DEBUG ] get remote short hostname
[master][DEBUG ] deploying mon to master
[master][DEBUG ] get remote short hostname
[master][DEBUG ] remote hostname: master
[master][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[master][DEBUG ] create the mon path if it does not exist
[master][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-master/done
[master][DEBUG ] create a done file to avoid re-doing the mon deployment
[master][DEBUG ] create the init path if it does not exist
[master][DEBUG ] locating the `service` executable...
[master][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.master
[master][DEBUG ] === mon.master ===
[master][DEBUG ] Starting Ceph mon.master on master...already running
[master][INFO  ] Running command: chkconfig ceph on
[master][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
[master][DEBUG ] ********************************************************************************
[master][DEBUG ] status for monitor: mon.master
[master][DEBUG ] {
[master][DEBUG ]   "election_epoch": 2,
[master][DEBUG ]   "extra_probe_peers": [],
[master][DEBUG ]   "monmap": {
[master][DEBUG ]     "created": "0.000000",
[master][DEBUG ]     "epoch": 1,
[master][DEBUG ]     "fsid": "e2345a18-c1e1-4079-8dc6-25285231e09d",
[master][DEBUG ]     "modified": "0.000000",
[master][DEBUG ]     "mons": [
[master][DEBUG ]       {
[master][DEBUG ]         "addr": "10.0.0.21:6789/0",
[master][DEBUG ]         "name": "master",
[master][DEBUG ]         "rank": 0
[master][DEBUG ]       }
[master][DEBUG ]     ]
[master][DEBUG ]   },
[master][DEBUG ]   "name": "master",
[master][DEBUG ]   "outside_quorum": [],
[master][DEBUG ]   "quorum": [
[master][DEBUG ]     0
[master][DEBUG ]   ],
[master][DEBUG ]   "rank": 0,
[master][DEBUG ]   "state": "leader",
[master][DEBUG ]   "sync_provider": []
[master][DEBUG ] }
[master][DEBUG ] ********************************************************************************
[master][INFO  ] monitor: mon.master is running
[master][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.master.asok mon_status
Error in sys.exitfunc:
-----------------------------------------------------------------------------------------------
ceph-deploy  gatherkeys master
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.26): /usr/bin/ceph-deploy gatherkeys master
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  prog                          : ceph-deploy
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x13794d0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['master']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x134f758>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.client.admin.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-rgw.keyring
Error in sys.exitfunc:
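The gatherkeys run reports five `Have ...` lines. A small sketch can assert that every keyring ceph-deploy is expected to fetch is actually present; the expected names are taken from the log, while `found` stands in for a real directory listing (e.g. `os.listdir` on the ceph-deploy working directory):

```python
# Sketch: verify all keyrings from the gatherkeys step are present.
# `found` is a hypothetical listing that mirrors the log output above.

EXPECTED = {
    "ceph.client.admin.keyring",
    "ceph.mon.keyring",
    "ceph.bootstrap-osd.keyring",
    "ceph.bootstrap-mds.keyring",
    "ceph.bootstrap-rgw.keyring",
}

found = {  # stand-in for os.listdir(".") in the deploy directory
    "ceph.client.admin.keyring", "ceph.mon.keyring",
    "ceph.bootstrap-osd.keyring", "ceph.bootstrap-mds.keyring",
    "ceph.bootstrap-rgw.keyring",
}

missing = EXPECTED - found
print("all keyrings gathered" if not missing else f"missing: {sorted(missing)}")
```

A missing bootstrap keyring here is the usual reason later `osd prepare` or `mds create` steps fail with auth errors.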
----------------------------------------------------
ceph-deploy  osd  prepare osd2:/dev/vdc osd1:/dev/vdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.26): /usr/bin/ceph-deploy osd prepare osd2:/dev/vdc osd1:/dev/vdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  prog                          : ceph-deploy
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x21dde18>
[ceph_deploy.cli][INFO  ]  disk                          : [('osd2', '/dev/vdc', None), ('osd1', '/dev/vdc', None)]
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x21d3b90>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks osd2:/dev/vdc: osd1:/dev/vdc:
[osd2][DEBUG ] connected to host: osd2
[osd2][DEBUG ] detect platform information from remote host
[osd2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to osd2
[osd2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[osd2][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host osd2 disk /dev/vdc journal None activate False
[osd2][INFO  ] Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[osd2][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/vdc
[osd2][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/vdc
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:04b38c48-ef2a-4120-a839-4449a6320ca2 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc
[osd2][DEBUG ] Creating new GPT entries.
[osd2][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[osd2][DEBUG ] order to align on 2048-sector boundaries.
[osd2][DEBUG ] The operation has completed successfully.
[osd2][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/vdc
[osd2][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
[osd2][WARNIN] BLKPG: Device or resource busy
[osd2][WARNIN] error adding partition 2
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd2][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/04b38c48-ef2a-4120-a839-4449a6320ca2
[osd2][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/04b38c48-ef2a-4120-a839-4449a6320ca2
[osd2][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/vdc
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:95b35ffa-b954-4add-b775-0b81ed3eaee5 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/vdc
[osd2][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
[osd2][DEBUG ] order to align on 2048-sector boundaries.
[osd2][DEBUG ] The operation has completed successfully.
[osd2][WARNIN] INFO:ceph-disk:calling partx on created device /dev/vdc
[osd2][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
[osd2][WARNIN] BLKPG: Device or resource busy
[osd2][WARNIN] error adding partition 1
[osd2][WARNIN] BLKPG: Device or resource busy
[osd2][WARNIN] error adding partition 2
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd2][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/vdc1
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdc1
[osd2][DEBUG ] meta-data=/dev/vdc1              isize=2048   agcount=4, agsize=327615 blks
[osd2][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[osd2][DEBUG ] data     =                       bsize=4096   blocks=1310459, imaxpct=25
[osd2][DEBUG ]          =                       sunit=0      swidth=0 blks
[osd2][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[osd2][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2
[osd2][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[osd2][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[osd2][WARNIN] DEBUG:ceph-disk:Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.inbaDA with options noatime,inode64
[osd2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.inbaDA
[osd2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.inbaDA
[osd2][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.inbaDA/journal -> /dev/disk/by-partuuid/04b38c48-ef2a-4120-a839-4449a6320ca2
[osd2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.inbaDA
[osd2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.inbaDA
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
[osd2][DEBUG ] The operation has completed successfully.
[osd2][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/vdc
[osd2][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
[osd2][WARNIN] BLKPG: Device or resource busy
[osd2][WARNIN] error adding partition 1
[osd2][WARNIN] BLKPG: Device or resource busy
[osd2][WARNIN] error adding partition 2
[osd2][INFO  ] checking OSD status...
[osd2][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host osd2 is now ready for osd use.
[osd1][DEBUG ] connected to host: osd1
[osd1][DEBUG ] detect platform information from remote host
[osd1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to osd1
[osd1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[osd1][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host osd1 disk /dev/vdc journal None activate False
[osd1][INFO  ] Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[osd1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/vdc
[osd1][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/vdc
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:3006a8a8-fecf-4608-9f4c-0cc3f02e4cd4 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc
[osd1][DEBUG ] Creating new GPT entries.
[osd1][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[osd1][DEBUG ] order to align on 2048-sector boundaries.
[osd1][DEBUG ] The operation has completed successfully.
[osd1][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/vdc
[osd1][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
[osd1][WARNIN] BLKPG: Device or resource busy
[osd1][WARNIN] error adding partition 2
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd1][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/3006a8a8-fecf-4608-9f4c-0cc3f02e4cd4
[osd1][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/3006a8a8-fecf-4608-9f4c-0cc3f02e4cd4
[osd1][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/vdc
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:a203debe-0140-466b-9ee5-c71b476fd1ac --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/vdc
[osd1][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
[osd1][DEBUG ] order to align on 2048-sector boundaries.
[osd1][DEBUG ] The operation has completed successfully.
[osd1][WARNIN] INFO:ceph-disk:calling partx on created device /dev/vdc
[osd1][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
[osd1][WARNIN] BLKPG: Device or resource busy
[osd1][WARNIN] error adding partition 1
[osd1][WARNIN] BLKPG: Device or resource busy
[osd1][WARNIN] error adding partition 2
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd1][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/vdc1
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdc1
[osd1][DEBUG ] meta-data=/dev/vdc1              isize=2048   agcount=4, agsize=327615 blks
[osd1][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[osd1][DEBUG ] data     =                       bsize=4096   blocks=1310459, imaxpct=25
[osd1][DEBUG ]          =                       sunit=0      swidth=0 blks
[osd1][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[osd1][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2
[osd1][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[osd1][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[osd1][WARNIN] DEBUG:ceph-disk:Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.eYabbH with options noatime,inode64
[osd1][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.eYabbH
[osd1][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.eYabbH
[osd1][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.eYabbH/journal -> /dev/disk/by-partuuid/3006a8a8-fecf-4608-9f4c-0cc3f02e4cd4
[osd1][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.eYabbH
[osd1][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.eYabbH
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
[osd1][DEBUG ] The operation has completed successfully.
[osd1][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/vdc
[osd1][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/vdc
[osd1][WARNIN] BLKPG: Device or resource busy
[osd1][WARNIN] error adding partition 1
[osd1][WARNIN] BLKPG: Device or resource busy
[osd1][WARNIN] error adding partition 2
[osd1][INFO  ] checking OSD status...
[osd1][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host osd1 is now ready for osd use.
Error in sys.exitfunc:
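A few notes on the prepare output above. The `BLKPG: Device or resource busy` / `error adding partition` warnings are expected here: ceph-disk itself logs "re-reading known partitions will display errors" just before calling partx, because the kernel already knows about those partitions. The trailing `Error in sys.exitfunc:` line is a known ceph-deploy quirk on EL6's Python and is generally regarded as harmless. The sgdisk `--typecode` GUIDs in the log are ceph-disk's well-known GPT partition types: the journal partition gets the journal GUID up front, while the data partition is first created with a "being prepared" GUID and only retyped to the final OSD-data GUID (the last sgdisk call in each run) once the filesystem and journal symlink are in place, so udev can recognize it and trigger activation. A minimal sketch of that mapping, with the GUIDs taken from the log above (the `typecode_of` helper is a hypothetical illustration, not part of ceph-disk):

```python
# Ceph GPT partition type GUIDs as used by ceph-disk (hammer era);
# values match the sgdisk --typecode arguments in the log above.
CEPH_TYPECODES = {
    "45b0969e-9b03-4f30-b4c6-b4b80ceff106": "ceph journal",
    "89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be": "ceph data (being prepared)",
    "4fbd7e29-9d25-41b8-afd0-062c0ceff05d": "ceph data (ready; udev can auto-activate)",
}

def typecode_of(sgdisk_cmd):
    """Extract the GUID from an sgdisk --typecode=N:GUID argument (illustrative helper)."""
    for arg in sgdisk_cmd.split():
        if arg.startswith("--typecode="):
            return arg.split(":", 1)[1]
    raise ValueError("no --typecode argument found")

# The final retype step from the prepare run above:
cmd = "/usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc"
print(CEPH_TYPECODES[typecode_of(cmd)])  # ceph data (ready; udev can auto-activate)
```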


------------------------------------------------------------------------------------------------

[root@master ceph]# ceph-deploy osd  activate osd1:/dev/vdc1 osd2:/dev/vdc1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.26): /usr/bin/ceph-deploy osd activate osd1:/dev/vdc1 osd2:/dev/vdc1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  prog                          : ceph-deploy
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xcc4e18>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xcbab90>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('osd1', '/dev/vdc1', None), ('osd2', '/dev/vdc1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks osd1:/dev/vdc1: osd2:/dev/vdc1:
[osd1][DEBUG ] connected to host: osd1
[osd1][DEBUG ] detect platform information from remote host
[osd1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host osd1 disk /dev/vdc1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[osd1][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/vdc1
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/vdc1
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd1][WARNIN] DEBUG:ceph-disk:Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.WDZv83 with options noatime,inode64
[osd1][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.WDZv83
[osd1][WARNIN] DEBUG:ceph-disk:Cluster uuid is e2345a18-c1e1-4079-8dc6-25285231e09d
[osd1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd1][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[osd1][WARNIN] DEBUG:ceph-disk:OSD uuid is a203debe-0140-466b-9ee5-c71b476fd1ac
[osd1][WARNIN] DEBUG:ceph-disk:OSD id is 4
[osd1][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[osd1][WARNIN] DEBUG:ceph-disk:ceph osd.4 data dir is ready at /var/lib/ceph/tmp/mnt.WDZv83
[osd1][WARNIN] INFO:ceph-disk:ceph osd.4 already mounted in position; unmounting ours.
[osd1][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.WDZv83
[osd1][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.WDZv83
[osd1][WARNIN] DEBUG:ceph-disk:Starting ceph osd.4...
[osd1][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.4
[osd1][DEBUG ] === osd.4 ===
[osd1][DEBUG ] Starting Ceph osd.4 on osd1...already running
[osd1][INFO  ] checking OSD status...
[osd1][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[osd1][INFO  ] Running command: chkconfig ceph on
[osd2][DEBUG ] connected to host: osd2
[osd2][DEBUG ] detect platform information from remote host
[osd2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host osd2 disk /dev/vdc1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[osd2][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/vdc1
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/vdc1
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd2][WARNIN] DEBUG:ceph-disk:Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.dlgr0S with options noatime,inode64
[osd2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.dlgr0S
[osd2][WARNIN] DEBUG:ceph-disk:Cluster uuid is e2345a18-c1e1-4079-8dc6-25285231e09d
[osd2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[osd2][WARNIN] DEBUG:ceph-disk:OSD uuid is 95b35ffa-b954-4add-b775-0b81ed3eaee5
[osd2][WARNIN] DEBUG:ceph-disk:OSD id is 3
[osd2][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[osd2][WARNIN] DEBUG:ceph-disk:ceph osd.3 data dir is ready at /var/lib/ceph/tmp/mnt.dlgr0S
[osd2][WARNIN] INFO:ceph-disk:ceph osd.3 already mounted in position; unmounting ours.
[osd2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.dlgr0S
[osd2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.dlgr0S
[osd2][WARNIN] DEBUG:ceph-disk:Starting ceph osd.3...
[osd2][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.3
[osd2][DEBUG ] === osd.3 ===
[osd2][DEBUG ] Starting Ceph osd.3 on osd2...already running
[osd2][INFO  ] checking OSD status...
[osd2][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[osd2][INFO  ] Running command: chkconfig ceph on
Error in sys.exitfunc:
[root@master ceph]#
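After each activate, ceph-deploy runs `ceph --cluster=ceph osd stat --format=json` (the "checking OSD status" lines above) to confirm the OSDs came up; here both `osd.3` and `osd.4` were reported "already running", so the start was effectively a no-op. The JSON form is easy to check programmatically. A sketch of consuming it, with a sample payload in the hammer-era shape (field names assumed from that release; the numbers are illustrative, not taken from this cluster):

```python
import json

# Illustrative sample of `ceph osd stat --format=json` output
# (hammer-era field names; values are made up, not from this log).
sample = '{"epoch": 20, "num_osds": 6, "num_up_osds": 6, "num_in_osds": 6, "full": false, "nearfull": false}'

stat = json.loads(sample)
if stat["num_up_osds"] != stat["num_osds"]:
    print("warning: some OSDs are down")
print("%d/%d OSDs up, %d in" % (stat["num_up_osds"], stat["num_osds"], stat["num_in_osds"]))
# → 6/6 OSDs up, 6 in
```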
------------------------------------------------------------------------------
[root@master ceph]# ceph-deploy admin master osd1 osd2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.26): /usr/bin/ceph-deploy admin master osd1 osd2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  prog                          : ceph-deploy
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xb26248>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['master', 'osd1', 'osd2']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x9d9b18>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to master
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[master][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to osd1
[osd1][DEBUG ] connected to host: osd1
[osd1][DEBUG ] detect platform information from remote host
[osd1][DEBUG ] detect machine type
[osd1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to osd2
[osd2][DEBUG ] connected to host: osd2
[osd2][DEBUG ] detect platform information from remote host
[osd2][DEBUG ] detect machine type
[osd2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
Error in sys.exitfunc:
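What `ceph-deploy admin` does here is simple: it pushes the cluster conf and the `client.admin` keyring to `/etc/ceph` on each listed host, so plain `ceph` commands work there without extra `--conf`/`--keyring` flags. A rough manual equivalent, sketched with temporary directories standing in for the admin node's working dir and the target's `/etc/ceph` (all paths and the keyring content are placeholders, not from this cluster):

```shell
# Illustrative only: mimic what `ceph-deploy admin <host>` pushes to a target,
# using temp dirs in place of the real working dir and /etc/ceph.
src=$(mktemp -d)   # stands in for the admin node's deploy dir (e.g. ~/ceph)
dst=$(mktemp -d)   # stands in for /etc/ceph on the target host

# ceph-deploy pushes two files: the cluster conf and the admin keyring.
echo '[global]' > "$src/ceph.conf"
echo 'key = placeholder' > "$src/ceph.client.admin.keyring"   # placeholder, not a real key

cp "$src/ceph.conf" "$src/ceph.client.admin.keyring" "$dst/"
chmod 600 "$dst/ceph.client.admin.keyring"   # keep the keyring root-only

ls "$dst"
```

Restricting the keyring to root is the conservative choice; some quickstart guides instead `chmod +r` it on each node so non-root users can run `ceph` commands.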


---------------------------------------------------------------------------------------------------------------------

This post originally appeared on the "刘福" blog; please retain this attribution: http://liufu1103.blog.51cto.com/9120722/1693823
