Related videos: https://www.youtube.com/watch?v=UXcZ2bnnGZg and http://www.youtube.com/watch?v=BBOBHMvKfyc&feature=g-high
This issue is more or less fixed in Cuttlefish+
Make sure that the min replica count is set to nodes-1.
ceph osd pool set <poolname> min_size 1
Then the remaining node(s) will start up and serve data with just 1 node if everything else is down.
Keep in mind this can get ugly, as there are no replicas of your data while only one node is up.
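When the other nodes come back, it is probably a good idea to raise min_size again with the same command (the value 2 here is just an example - match it to your setup):

ceph osd pool set <poolname> min_size 2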
More info here: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/10481
Use ceph-deploy tools from cuttlefish+ instead for all these sorts of things unless you know what you're doing (http://ceph.com/docs/master/rados/deployment/ )
Prepare the disk as usual (partition or entire disk) - format it with the filesystem of your choosing. Add it to fstab and mount it. Add it to /etc/ceph/ceph.conf and replicate the new conf to the other nodes.
Start the OSD; I'm assuming we've added osd.12 (sdd) on ceph1 here.
## Prepare disk first, create partition and format it
<insert parted oneliner>
mkfs.xfs -f /dev/sdd1

## Create the disk
ceph osd create [uuid]

## Auth stuff to make sure that the OSD is accepted into the cluster:
mkdir /srv/ceph/[uuid_from_above]
ceph-osd -i 12 --mkfs --mkkey
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.osd.12

## Start it
/etc/init.d/ceph start osd.12

## Add it to the cluster and allow replication based on the CRUSH map.
ceph osd crush set 12 osd.12 1.0 pool=default rack=unknownrack host=ceph1-osd
In the crush set line above, if you change the pool/rack/host values you can place your disk/node wherever you want.
If you specify a new host entry, it is effectively the same as adding a new node (with that disk).
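For example, a hypothetical variant of the crush set command above that would place a new osd.13 under a new host ceph2-osd (names made up - adjust to your own CRUSH layout):

ceph osd crush set 13 osd.13 1.0 pool=default rack=unknownrack host=ceph2-osd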
Check that it is in the right place with:
ceph osd tree
More info here:
Use ceph-deploy tools from cuttlefish+ instead for all these sorts of things unless you know what you're doing (http://ceph.com/docs/master/rados/deployment/ )
Make sure you have the right disk, run
ceph osd tree
to get an overview.
Here we'll delete osd.5
## Mark it out
ceph osd out 5

## Wait for data migration to complete (ceph -w), then stop it
service ceph -a stop osd.5

## Now it is marked out and down
## If deleting from active stack, be sure to follow the above to mark it out and down
ceph osd crush remove osd.5

## Remove auth for disk
ceph auth del osd.5

## Remove disk
ceph osd rm 5

## Remove from ceph.conf and copy new conf to all hosts
Use ceph-deploy tools from cuttlefish+ instead for all these sorts of things unless you know what you're doing (http://ceph.com/docs/master/rados/deployment/ )
Install ceph, add keys, ceph.conf and host files, and prepare storage for containing the monitor maps.
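In this example setup the monitor data directory ends up as /srv/ceph/mon21 (see the mkfs output below), so preparing that storage can be as simple as:

mkdir -p /srv/ceph/mon21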
Then add the monitor to the cluster (to keep quorum, run either 1 or 3+ monitors - never 2). The example below adds monitor mon.21 with IP 192.168.0.68:
cd /tmp; mkdir add_monitor; cd add_monitor
ceph auth get mon. -o key
> exported keyring for mon.
ceph mon getmap -o map
> got latest monmap
ceph-mon -i 21 --mkfs --monmap map --keyring key
> ceph-mon: created monfs at /srv/ceph/mon21 for mon.21
ceph mon add 21 192.168.0.68
> port defaulted to 6789
> added mon.21 at 192.168.0.68:6789/0
/etc/init.d/ceph start mon.21
Add the info to the ceph.conf file:
[mon]
...
[mon.21]
host = ceph2-mon
mon addr = 192.168.0.68:6789
...
More info here: http://jcftang.github.com/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
ceph osd dump
## save current crushmap in binary
ceph osd getcrushmap -o crush.running.map

## Convert to txt
crushtool -d crush.running.map -o crush.map

## Edit it and re-convert to binary
crushtool -c crush.map -o crush.new.map

## Inject into running system
ceph osd setcrushmap -i crush.new.map

## If you've added a new ruleset and want to use that for a pool, set in ceph.conf something like:
osd pool default crush rule = 4
Per host:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol="rbd" name="test/disk2-qemu-5g:rbd_cache=1">
    <host name='192.168.0.67' port='6789'/>
    <host name='192.168.0.68' port='6789'/>
  </source>
  <auth username='admin' type='ceph'>
    <secret type='ceph' uuid='7a91dc24-b072-43c4-98fb-4b2415322b0f'/>
  </auth>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Per pool:
<pool type='rbd'>
  <name>rbd</name>
  <uuid>f959641f-f518-4505-9e85-17d994e2a399</uuid>
  <source>
    <host name='192.168.0.67' port='6789'/>
    <host name='192.168.0.68' port='6789'/>
    <host name='192.168.0.69' port='6789'/>
    <name>test</name>
    <auth username='admin' type='ceph'>
      <secret type='ceph' uuid='7a91dc24-b072-43c4-98fb-4b2415322b0f'/>
    </auth>
  </source>
</pool>
If you're using any form of ceph auth, this needs to be added - else skip this part.
Create a secret.xml file:
<secret ephemeral='no' private='no'>
  <uuid>7a91dc24-b072-43c4-98fb-4b2415322b0f</uuid>
  <usage type='ceph'>
    <name>admin</name>
  </usage>
</secret>
Use it:
virsh secret-define secret.xml
virsh secret-set-value 7a91dc24-b072-43c4-98fb-4b2415322b0f AQDAD8JQOLS9IxAAbox00eOmlM1h5ZLGPxHGHw==
The last argument is the key from your /etc/ceph/keyring.admin:
cat /etc/ceph/keyring.admin
[client.admin]
    key = AQDAD8JQOLS9IxAAbox00eOmlM1h5ZLGPxHGHw==
Resize the desired block-image (here going from 30GB -> 40GB)
qemu-img resize -f rbd rbd:sata/disk3 40G
> Image resized.
Find the attached target device:
virsh domblklist rbd-test
> Target     Source
> ------------------------------------------------
> vdb        sata/disk2-qemu-5g:rbd_cache=1
> vdc        sata/disk3:rbd_cache=1
> hdc        -
Then use virsh to tell the guest that the disk has a new size:
virsh blockresize --domain rbd-test --path "vdc" --size 40G
> Block device 'vdc' is resized
Check raw rbd info
rbd --pool sata info disk3
> rbd image 'disk3':
>     size 40960 MB in 10240 objects
>     order 22 (4096 KB objects)
>     block_name_prefix: rb.0.13fb.23353e97
>     parent: (pool -1)
Make sure you can see the change in dmesg (the guest should see the new size):
dmesg
> [...]
> [75830.538557] vdb: detected capacity change from 118111600640 to 123480309760
> [...]
Then extend the partition - if it is a simple data volume, you can just use fdisk: remove the old partition, create a new one and accept the default values for start/end (note: this only applies to partitions which hold nothing else!).
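A rough sketch of the interactive fdisk steps for that case (assuming the /dev/vdb data disk used below - double-check before deleting anything):

fdisk /dev/vdb
## d - delete the old partition
## n - create a new one, accepting the default start/end so it spans the grown disk
## w - write the table and exit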
Write the partition table, run fdisk -l to double-check the size, then remount the partition (the partition above is mounted as a data dir, /dev/vdb1 in my case):
mount -o remount,rw /dev/vdb1
Check your fstab to make sure you get the correct options for the remount.
Afterwards, call resize2fs:
resize2fs /dev/vdb1
> resize2fs 1.42.5 (29-Jul-2012)
> Filesystem at /dev/vdb1 is mounted on /home/mirroruser/mirror; on-line resizing required
> old_desc_blocks = 7, new_desc_blocks = 7
> The filesystem on /dev/vdb1 is now 28835584 blocks long.
Double-check via df -h or the like.
If the filesystem is XFS instead, use xfs_growfs:

xfs_growfs /share
> data blocks changed from 118111600640 to 123480309760
All done
Before starting this, it is a good idea to make sure the cluster does not automatically mark OSDs out if they've been down for the default 300s.
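You can do that with what is presumably the same noout flag used in the maintenance section further down:

ceph osd set noout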
I'll assume we want to update node #1, which has OSDs 0, 1 and 2, to put a 10GB journal on SSD. The journal currently resides on each OSD with a size of 512MB.

Assuming we'll be mounting an SSD here: /srv/ceph/journal -- this will then hold all journals as /srv/ceph/journal/osd$id/journal. A much better way is to create the above as partitions and not files and use those instead - I'll show both inline:

# Relevant ceph.conf options -- existing setup --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /srv/ceph/osd$id/journal
osd journal size = 512

# stop the OSDs:
/etc/init.d/ceph stop osd.0
/etc/init.d/ceph stop osd.1
/etc/init.d/ceph stop osd.2

# Flush the journals:
ceph-osd -i 0 --flush-journal
ceph-osd -i 1 --flush-journal
ceph-osd -i 2 --flush-journal

# Now update ceph.conf - this is very important or you'll just recreate the journal on the same disk again

-- change to [file-based journal] --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /srv/ceph/journal/osd$id/journal
osd journal size = 10000

-- change to [partition-based journal (the journal in this case would be on /dev/sda2)] --
[osd]
osd data = /srv/ceph/osd$id
osd journal = /dev/sda2
osd journal size = 0

# Create the new journal on each disk
ceph-osd -i 0 --mkjournal
ceph-osd -i 1 --mkjournal
ceph-osd -i 2 --mkjournal

# Done, now start all OSDs again
/etc/init.d/ceph start osd.0
/etc/init.d/ceph start osd.1
/etc/init.d/ceph start osd.2
If you set your cluster to not mark OSDs out, remember to remove that flag again!
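Presumably again with:

ceph osd unset noout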
Enjoy your new faster journal!
More info here (source of the above):
This can be useful while adding new OSDs/nodes and some OSD keeps being marked down:
ceph osd set nodown
Check it is set by issuing:
ceph osd dump | grep flags
> flags no-down
When done, remember to unset it or no OSD will ever get marked down!
ceph osd unset nodown
Double-check with ceph osd dump | grep flags
A few things have helped make my cluster more stable:
I've found the existing 30 sec timeout is sometimes too little for my 4-disk low-end system.
Set the timeout accordingly (try and run debug first to determine if this is really the case).
osd op thread timeout = 60
The default value is 300, which is a bit high for a highly stressed / low-CPU system.
It is adjustable in the [osd] section:
[osd]
...
## Added new transaction size as the default value of 300 is too much
osd target transaction size = 50
...
Since my 'cluster' is just 1 machine with 6 OSDs, having multiple recoveries running in parallel will effectively kill it; so I've adjusted these down from the default values:
## Just 2 OSDs backfilling kills a server
osd max backfills = 1
osd recovery max active = 1
More info on all the various settings here:
Adjust nr_requests in the queue (kept in memory - the default is 128):
echo 1024 > /sys/block/sdb/queue/nr_requests
Change the I/O scheduler as per the Inktank blog post:
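I don't have the exact command from that post at hand, but switching e.g. sdb to the deadline elevator would look like this (noop being the other common choice for SSDs):

echo deadline > /sys/block/sdb/queue/scheduler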
Alternatively, if you want to change the default values (0.85 as near_full, 0.95 as full), you can add this to the mon section of your ceph.conf:
[mon]
...
mon osd nearfull ratio = 0.92
mon osd full ratio = 0.98
...
This will cause the cluster to halt because of a new feature (pool min_size). Explained by Greg Farnum on the Ceph mailing list:
This took me a bit to work out as well, but you've run afoul of a new post-argonaut feature intended to prevent people from writing with insufficient durability. Pools now have a "min size" and PGs in that pool won't go active if they don't have that many OSDs to write on. The clue here is the "incomplete" state.

You can change it with "ceph osd pool foo set min_size 1", where "foo" is the name of the pool whose min_size you wish to change (and this command sets the min size to 1, obviously). The default for new pools is controlled by the "osd pool default min size" config value (which you should put in the global section). By default it'll be half of your default pool size.

So in your case your pools have a default size of 3, and the min size is (3/2 = 1.5 rounded up), and the OSDs are refusing to go active because of the dramatically reduced redundancy. You can set the min size down though and they will go active.
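If you want to set that default for new pools yourself, the "osd pool default min size" option mentioned above would go in the global section of ceph.conf - a sketch with an example value:

[global]
...
osd pool default min size = 1
...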
Do give some thought to which rbd image format to use! Format 2 gives additional [great] benefits over format 1.
I've successfully snapshotted lots of running systems, but the catch is that any disk I/O not yet flushed will not make it. You can use something like fsfreeze to first freeze the guest filesystem (ext4, btrfs, xfs). It will basically flush the FS state to disk and block any future write access while maintaining read access.
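A minimal sketch of that workflow, assuming the guest's data filesystem is mounted on /data (hypothetical mountpoint) and the pool/image names used in the examples below:

## Inside the guest: flush the filesystem and block writes
fsfreeze --freeze /data
## On a ceph/client node: take the snapshot
rbd snap create sata/webserver@snap1
## Inside the guest again: resume writes
fsfreeze --unfreeze /data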
This turned out to be really simple.
I started by grabbing a snapshot of a running LVM volume:
lvcreate -L25G -s -n snap-webserver /dev/storage/webserver
And then just feeding that snapshot directly into rbd:
rbd import /dev/storage/snap-webserver sata/webserver

## Format 2:
rbd --format 2 import /dev/storage/snap-webserver sata/webserver

## Format 2 for cuttlefish+:
rbd --image-format 2 import /dev/storage/snap-webserver sata/webserver
Here sata/webserver means the pool sata and the image webserver; the image will be created automatically.
## Using format 2 rbd images
dd if=/dev/e0.0/snap-webhotel | pv | ssh root@remote-server 'rbd --image-format 2 import - sata/webserver'
It will run and show the total copied data as well as the speed.
Going back is pretty much just the reverse:
## Optional; create a snapshot first
rbd snap create sata/webserver@snap1

## Transfer image
rbd export sata/webserver@snap1 - | pv | ssh root@remote-server 'dd of=/dev/lvm-storage/webserver'
There are probably lots of ways to do it; I did it the usual way with import/export:
ssh root@remote-server 'rbd export sata/webserver -' | pv | rbd --image-format 2 import - sata/webserver
In both cases '-' means using stdin/stdout.
It seems to be important to create pools that are big enough and have a placement group count that is a power of 2, e.g. 2048, 4096, etc.
ceph osd pool create sata 4096
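To verify the placement group count afterwards, a quick check (not from the original post, just a convenience):

ceph osd dump | grep pg_num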
It is recommended to use a 3.6+ kernel for this!
/sbin/modprobe rbd
/usr/bin/rbd map --pool sata webserver-data --id admin --keyring /etc/ceph/keyring.admin
Then it will show up as device /dev/rbd[X]
Then you can format it, partition it or do what you want - and then mount it like a normal device.
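For example, a small sketch formatting and mounting the mapped device (device number and mountpoint are assumptions):

mkfs.xfs /dev/rbd0
mkdir -p /mnt/webserver-data
mount /dev/rbd0 /mnt/webserver-data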
If you want to remove a mapping for a device, issue:
## First unmount from system, then:
rbd unmap /dev/rbd[X]
Source: http://ceph.com/docs/master/rados/operations/troubleshooting-osd/#stopping-w-out-rebalancing
## Set cluster to not mark OSDs out
ceph osd set noout

## Once the cluster is set to noout, you can begin stopping the OSDs within the failure domain that requires maintenance work.
ceph osd stop osd.{num}

## Note: placement groups within the OSDs you stop will become degraded while you are addressing issues within the failure domain.

## Once you have completed your maintenance, restart the OSDs.
ceph osd start osd.{num}

## Finally, you must unset the cluster from noout.
ceph osd unset noout
Using rbd
Creating a 50GB share in sata named rpmbuilder-ceph could be done like:
rbd create sata/rpmbuilder-ceph [--format 2*] --size 51200
qemu-img can also create rbd images; I fancy it works the same, but am not entirely sure.
\* By default this creates format 1 rbd block devices; format 2 has to be specified explicitly and brings a lot of new utility for handling images (flattening, cloning etc.). Use --format 2 to create format 2 images.
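As a hedged sketch of what format 2 enables (standard rbd snapshot/clone commands from Cuttlefish and later; image names are just examples):

## Snapshot the image, protect the snapshot, clone it, and optionally flatten the clone
rbd snap create sata/rpmbuilder-ceph@base
rbd snap protect sata/rpmbuilder-ceph@base
rbd clone sata/rpmbuilder-ceph@base sata/rpmbuilder-clone
rbd flatten sata/rpmbuilder-clone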
Using KVM
qemu-img create -f rbd rbd:sata/rpmbuilder-ceph 50G
Note: Qemu 1.4.2+ has this patch incorporated, so if you're using that or anything newer, you're all set.
This is a known problem caused by qemu flushing synchronously ( http://git.qemu.org/?p=qemu.git;a=commit;h=dc7588c1eb3008bda53dde1d6b890cd299758155 ).
There is a patch ready here: http://git.qemu.org/?p=qemu.git;a=commitdiff;h=dc7588c1eb3008bda53dde1d6b890cd299758155
More on this from this thread: http://www.spinics.net/lists/ceph-users/msg01352.html
I suggest using a normal user to build
Install prereqs:
## Get and install deps
sudo apt-get install build-essential checkinstall
## I'm not sure if these are even needed
sudo apt-get install git-core mercurial

cd
mkdir build
cd build
sudo apt-get build-dep qemu-kvm
## Fetch the source (this step is assumed - the original jumps straight into the source tree)
apt-get source qemu-kvm

## Grab the patch and apply it from the source root
cd qemu-1.4.0+dfsg/debian/patches/
wget "http://git.qemu.org/?p=qemu.git;a=patch;h=dc7588c1eb3008bda53dde1d6b890cd299758155" -O 0020-async-rbd.patch
cd ../..
patch -p1 < debian/patches/0020-async-rbd.patch

## Build it
dpkg-buildpackage -rfakeroot -b
It will take some time to build. Afterwards you'll have a bunch of .deb packages in the build dir.
Just install the qemu-kvm_1.4.0+dfsg-1expubuntu4_amd64.deb and you should be good to try out the new async patch.
If you don't want the hassle of compiling yourself and you trust complete strangers with .deb packages, you can download the one I made here (amd64 only - and only 1.4.0 stock Ubuntu 13.04).
http://utils.skarta.net/qemu-kvm/qemu-kvm_1.4.0+dfsg-1expubuntu4_amd64.deb
In Cuttlefish and later the repair procedure is much more intelligent and should be able to safely find the correct pg - so it should be safe to use instead
This is copied from the ceph-users mailing list.
First part of the mail:
Some scrub errors showed up on our cluster last week. We had some issues with host stability a couple weeks ago; my guess is that errors were introduced at that point and a recent background scrub detected them. I was able to clear most of them via "ceph pg repair", but several remain. Based on some other posts, I'm guessing that they won't repair because it is the primary copy that has the error. All of our pools are set to size 3 so there _ought_ to be a way to verify and restore the correct data, right?

Below is some log output about one of the problem PG's. Can anyone suggest a way to fix the inconsistencies?

2013-05-20 10:07:54.529582 osd.13 10.20.192.111:6818/20919 3451 : [ERR] 19.1b osd.13: soid 507ada1b/rb.0.6989.2ae8944a.00000000005b/5//19 digest 4289025870 != known digest 4190506501
2013-05-20 10:07:54.529585 osd.13 10.20.192.111:6818/20919 3452 : [ERR] 19.1b osd.22: soid 507ada1b/rb.0.6989.2ae8944a.00000000005b/5//19 digest 4289025870 != known digest 4190506501
2013-05-20 10:07:54.606034 osd.13 10.20.192.111:6818/20919 3453 : [ERR] 19.1b repair 0 missing, 1 inconsistent objects
2013-05-20 10:07:54.606066 osd.13 10.20.192.111:6818/20919 3454 : [ERR] 19.1b repair 2 errors, 2 fixed
2013-05-20 10:07:55.034221 osd.13 10.20.192.111:6818/20919 3455 : [ERR] 19.1b osd.13: soid 507ada1b/rb.0.6989.2ae8944a.00000000005b/5//19 digest 4289025870 != known digest 4190506501
2013-05-20 10:07:55.034224 osd.13 10.20.192.111:6818/20919 3456 : [ERR] 19.1b osd.22: soid 507ada1b/rb.0.6989.2ae8944a.00000000005b/5//19 digest 4289025870 != known digest 4190506501
2013-05-20 10:07:55.113230 osd.13 10.20.192.111:6818/20919 3457 : [ERR] 19.1b deep-scrub 0 missing, 1 inconsistent objects
2013-05-20 10:07:55.113235 osd.13 10.20.192.111:6818/20919 3458 : [ERR] 19.1b deep-scrub 2 errors
Actual solution proposed by an Inktank employee:
You need to find out where the third copy is. Corrupt it. Then let repair copy the data from a good copy.

$ ceph pg map 19.1b

You should see something like this:

osdmap e158 pg 19.1b (19.1b) -> up [13, 22, xx] acting [13, 22, xx]

The osd xx that is NOT 13 or 22 has the corrupted copy. Connect to the node that has that osd. Find in the mount for osd xx your object with name "rb.0.6989.2ae8944a.00000000005b":

$ find /var/lib/ceph/osd/ceph-xx -name 'rb.0.6989.2ae8944a.00000000005b*' -ls
201326612 4 -rw-r--r-- 1 root root 255 May 22 14:11 /var/lib/ceph/osd/ceph-xx/current/19.1b_head/rb.0.6989.2ae8944a.00000000005b__head_XXXXXXXX__0

I would stop osd xx first. In this case we find the file is 255 bytes long. In order to make sure this bad copy isn't used, let's make the file 1 byte longer:

$ truncate -s 256 /var/lib/ceph/osd/ceph-xx/current/19.1b_head/rb.0.6989.2ae8944a.00000000005b__head_XXXXXXXX__0

Restart osd xx. Not sure what command does that on your platform. Verify that OSDs are all running; this shows all osds are up and in:

$ ceph -s | grep osdmap
osdmap e6: 6 osds: 6 up, 6 in

$ ceph osd repair 19.1b
instructing pg 19.1b on osd.13 to repair
Reads go directly to the backing storage (SSDs or whatever you have), so they will only be as fast as reading from your distributed devices.
You should be able to push that performance a bit by increasing the read-ahead:
echo 4096 > /sys/block/vda/queue/read_ahead_kb
Original source: http://my.oschina.net/renguijiayi/blog/296390