Tags: kvm
There are two types of migration: offline and live (online). The most important benefit of migration is higher uptime and reduced downtime; the second is energy savings and a greener data center; the third is that software and hardware upgrades become much easier.
First, perform the following on the storage server to export the NFS share:
$ echo '/testvms *(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
$ sudo firewall-cmd --get-active-zones
FedoraServer
interfaces: eth0
$ sudo firewall-cmd --zone=FedoraServer --add-service=nfs
$ sudo firewall-cmd --zone=FedoraServer --list-all
$ sudo systemctl start rpcbind nfs-server
$ sudo systemctl enable rpcbind nfs-server
$ sudo showmount -e
$ sudo mount 192.168.122.1:/testvms /mnt
$ sudo umount /mnt
$ sudo mkdir -p /var/lib/libvirt/images/testvms/
$ sudo virsh pool-define-as \
--name testvms \
--type netfs \
--source-host 192.168.122.1 \
--source-path /testvms \
--target /var/lib/libvirt/images/testvms/
$ sudo virsh pool-start testvms
$ sudo virsh pool-autostart testvms
The commands above define the netfs storage pool, start it, and set it to start automatically.
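To confirm that the pool is up before going further, it can be queried (a quick check; testvms is the pool name defined above):
$ sudo virsh pool-info testvms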
For the migration itself, a dedicated, isolated network is recommended; do not mix migration traffic with other traffic.
f22-01 -- eth0 (192.168.0.5) <-----switch-----> eth0 (192.168.0.6) -- f22-02
          eth1 -> br1        <-----switch----->          eth1 -> br1
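To actually push the migration traffic over the dedicated interfaces, the optional migration URI can be passed to virsh migrate (a sketch only; vm1 and the address 192.168.0.6, the destination's dedicated NIC from the diagram above, are illustrative):
$ sudo virsh migrate --live --verbose vm1 qemu+ssh://f22-02.example.local/system tcp://192.168.0.6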
In this migration case, libvirt simply copies the VM's XML configuration file from the source to the destination. SELinux needs to be turned off (or set to permissive) first.
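One minimal way to do that on both hypervisors is to switch SELinux to permissive mode (setenforce only affects the running system; editing /etc/selinux/config makes the change survive reboots):
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config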
Then set up key-based SSH authentication between the two KVM hosts so that connections no longer prompt for a username and password.
$ sudo ssh-keygen
$ sudo ssh-copy-id root@localkvm-1.xiodi.cn
$ sudo ssh-keygen
$ sudo ssh-copy-id root@localkvm-2.xiodi.cn
The general form of the migration command is:
virsh migrate <migration-type> <options> <name-of-the-vm> <destination-uri>
[root@localkvm-1 ~]# virsh migrate --offline --verbose --persistent centos110 qemu+ssh://localkvm-2.xiodi.cn/system --unsafe
Migration: [100 %]
Then list the VMs on localkvm-2 and try starting the migrated guest to verify the result.
[root@localkvm-2 mnt]# virsh list --all
Id Name State
----------------------------------------------------
- centos110 shut off
[root@localkvm-2 mnt]# virsh start centos110
Domain centos110 started
[root@localkvm-2 mnt]# virsh list
Id Name State
----------------------------------------------------
1 centos110 running
In a plain libvirt environment, lockd is recommended rather than sanlock; sanlock is better suited to oVirt.
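Before starting virtlockd, lockd also has to be selected as the lock manager of the QEMU driver; this is a single setting in the standard /etc/libvirt/qemu.conf on both hypervisors:
lock_manager = "lockd"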
[root@localkvm-2 mnt]# systemctl enable virtlockd;systemctl start virtlockd
[root@localkvm-2 mnt]# systemctl restart libvirtd
[root@localkvm-2 mnt]# systemctl status virtlockd
● virtlockd.service - Virtual machine lock manager
Loaded: loaded (/usr/lib/systemd/system/virtlockd.service; indirect; vendor preset: disabled)
Active: active (running) since Mon 2018-04-30 10:02:26 CST; 14s ago
Docs: man:virtlockd(8)
http://libvirt.org
Main PID: 2770 (virtlockd)
CGroup: /system.slice/virtlockd.service
└─2770 /usr/sbin/virtlockd
Apr 30 10:02:26 localkvm-2.xiodi.cn systemd[1]: Started Virtual machine lock manager.
Apr 30 10:02:26 localkvm-2.xiodi.cn systemd[1]: Starting Virtual machine lock manager...
The other way to enable lockd is indirect locking based on a hash of the disk file path. The locks are then kept in a shared directory exported over NFS. This is useful with multipath LUNs, where fcntl()-based locking cannot be used. The recommended way to enable this locking mechanism is as follows.
$ echo '/flockd *(rw,no_root_squash)' | sudo tee -a /etc/exports
$ sudo service nfs reload
$ sudo showmount -e
Export list for :
/flockd *
/testvms *
$ sudo echo "192.168.122.1:/flockd /flockd nfs rsize=8192,wsize=8192,timeo=14,intr,sync" >> /etc/fstab
$ sudo mkdir -p /var/lib/libvirt/lockd/flockd
$ sudo mount -a
$ echo 'file_lockspace_dir = "/var/lib/libvirt/lockd/flockd"' | sudo tee -a /etc/libvirt/qemu-lockd.conf
Then reboot both hypervisors.
[root@f22-01 ~]# virsh start vm1
Domain vm1 started
[root@f22-02 flockd]# ls
36b8377a5b0cc272a5b4e50929623191c027543c4facb1c6f3c35bacaa7455ef 51e3ed692fdf92ad54c6f234f742bb00d4787912a8a674fb5550b1b826343dd6
[root@f22-02 ~]# virsh start vm1
error: Failed to start domain vm1
error: resource busy: Lockspace resource '51e3ed692fdf92ad54c6f234f742bb00d4787912a8a674fb5550b1b826343dd6' is locked
When using LVM volumes that can be visible across multiple host systems, it is desirable to do the locking based on the unique UUID associated with each volume, instead of their paths. Setting this path causes libvirt to do UUID based locking for LVM.
lvm_lockspace_dir = "/var/lib/libvirt/lockd/lvmvolumes"
When using SCSI volumes that can be visible across multiple host systems, it is desirable to do locking based on the unique UUID associated with each volume, instead of their paths. Setting this path causes libvirt to do UUID-based locking for SCSI.
scsi_lockspace_dir = "/var/lib/libvirt/lockd/scsivolumes"
Like file_lockspace_dir, the preceding directories should also be shared with the hypervisors.
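Following the same pattern as /flockd above, sharing one of them could look like this (an illustration only; the export name /lvmvolumes and the NFS server 192.168.122.1 are reused from the earlier examples):
$ echo '/lvmvolumes *(rw,no_root_squash)' | sudo tee -a /etc/exports   # on the NFS server
$ echo '192.168.122.1:/lvmvolumes /var/lib/libvirt/lockd/lvmvolumes nfs rsize=8192,wsize=8192,timeo=14,intr,sync' | sudo tee -a /etc/fstab   # on each hypervisor
$ sudo mkdir -p /var/lib/libvirt/lockd/lvmvolumes
$ sudo mount -a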
Live migration uses a range of TCP ports on the destination hypervisor, so open them in the firewall:
$ sudo firewall-cmd --zone=public --add-port=49152-49216/tcp --permanent
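Because the rule was added with --permanent, reload firewalld so it also takes effect in the running configuration:
$ sudo firewall-cmd --reload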
$ sudo virsh migrate --live vm1 qemu+ssh://f22-02.example.local/system --verbose --persistent
Migration: [100 %]
$ sudo virsh migrate --live vm1 qemu+ssh://f22-02.example.local/system --verbose
error: Unsafe migration: Migration may lead to data corruption if disks use cache != none
$ sudo virt-xml vm1 --edit --disk target=vda,cache=none
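To double-check that the persistent configuration now carries the new cache mode (assuming the disk target vda as above; --inactive shows the persistent XML even while the guest is running):
$ sudo virsh dumpxml --inactive vm1 | grep 'cache='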
When performing a migration without shared storage, the following options can be used; here is a brief introduction. --copy-storage-all copies the full disk image to the destination:
[root@f22-02 ~]# ls /var/lib/libvirt/images/testvm.qcow2
ls: cannot access /var/lib/libvirt/images/testvm.qcow2: No such file or directory
[root@f22-01 ~]# virsh migrate --live --persistent --verbose --copy-storage-all testvm qemu+ssh://f22-02.example.local/system
Migration: [100 %]
[root@f22-02 ~]# ls /var/lib/libvirt/images/testvm.qcow2
/var/lib/libvirt/images/testvm.qcow2
--copy-storage-inc will only transfer the changes:
[root@f22-01 ~]# virsh migrate --live --verbose --copy-storage-inc testvm qemu+ssh://f22-02.example.local/system
Migration: [100 %]
Original article: http://blog.51cto.com/aishangwei/2124556