How to migrate from VMware and Hyper-V to OpenStack
Source: http://superuser.openstack.org/articles/how-to-migrate-from-vmware-and-hyper-v-to-openstack
I migrated more than 120 virtual machines (Linux and Windows) from VMware ESXi to OpenStack. In a lab environment I also migrated from Hyper-V using the same steps. Unfortunately, I am not allowed to publish the script files I used for this migration, but I can publish the steps and commands I used to migrate the virtual machines. With these, it should be easy to create scripts that perform the migration automatically.
Just to make it clear: these steps do not convert traditional (non-cloud) applications into cloud-ready applications. In this case we started using OpenStack as a traditional hypervisor infrastructure.
Update: Newer versions of libguestfs-tools and qemu-img convert handle VMDK files very well (I had some issues with older versions of the tools), so the migration can be more efficient. I removed the conversion steps from VMDK to VMDK (single file) and from VMDK to RAW; dropping these steps roughly doubles the migration speed.
Disclaimer: This information is provided as-is. I decline any responsibility for damage caused by or with these steps and/or commands. I suggest you do not try or test these commands in a production environment. Some commands are very powerful and can destroy configurations and data in Ceph and OpenStack, so always use this information with care and responsibility.
Here are the specifications of the infrastructure I used for the migration:
A Linux ‘migration node’ (tested with Ubuntu 14.04/15.04, RHEL 6, Fedora 19-21). We used a server with 8x Intel Xeon E3-1230 @ 3.3GHz, 32GB RAM and 8x 1TB SSD, and we managed to migrate more than 500GB per hour; the actual rate depends heavily on how much of the instances’ disk space is in use. Even my old company laptop (Core i5, 4GB of RAM and an old 4500rpm HDD) worked, but the performance was obviously very poor. The migration node needs the following packages:
“python-cinderclient” (to control volumes)
“python-keystoneclient” (for authentication to OpenStack)
“python-novaclient” (to control instances)
“python-neutronclient” (to control networks)
“python-httplib2” (to communicate with the web services)
“libguestfs-tools” (to access the disk files)
“libguestfs-winsupport” (needs to be installed separately, on RHEL-based systems only)
“libvirt-client” (to control KVM)
“qemu-img” (to convert disk files)
“ceph” (to import virtual disk into Ceph)
“vmware-vdiskmanager” (to expand VMDK disks, downloadable from VMware)
Since Windows Server 2012 and Windows 8.0, the driver store is protected by Windows, which makes it very hard to inject drivers into an offline Windows disk. Windows Server 2012 also does not boot from VirtIO hardware by default. So, I took the following steps to install the VirtIO drivers into Windows. Note that these steps should work for all tested Windows versions (2003/2008/2012).
Linux kernels 2.6.25 and above have built-in support for VirtIO hardware, so there is no need to inject VirtIO drivers. Create and start a new KVM virtual machine with VirtIO hardware (a virt-install sketch follows below). When LVM partitions do not mount automatically, run the following to fix it:
(log in)
mount -o remount,rw /
pvscan
vgscan
reboot
(after the reboot all LVM partitions should be mounted and Linux should boot fine)
Shut down the virtual machine when done.
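For reference, here is a minimal sketch of creating such a VirtIO-based test VM with virt-install. The VM name, disk path and network here are assumptions; adjust them to your environment:
virt-install --name test-vm --ram 2048 --vcpus 2 \
  --disk path=/data/vm01-disk0.img,bus=virtio --import \
  --network network=default,model=virtio --graphics vnc --noautoconsole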
Some Linux distributions provide VirtIO modules for older kernel versions. For the steps to take for older kernels, see for example:
For Red Hat, see: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/ch10s04.html
For SuSe, see: https://www.suse.com/documentation/opensuse121/book_kvm/data/app_kvm_virtio_install.htm
For Windows versions prior to 2012 you could also use these steps to insert the drivers (the steps in 4.1 should also work for Windows 2003/2008).
Registry file (I called the file mergeviostor.reg, as it holds the VirtIO storage information only):
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00000000]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00020000]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4&rev_00]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1004&subsys_00081af4&rev_00]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="viostor"
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor]
"ErrorControl"=dword:00000001
"Group"="SCSI miniport"
"Start"=dword:00000000
"Tag"=dword:00000021
"Type"=dword:00000001
"ImagePath"="system32\\drivers\\viostor.sys"
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
"DevicePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,69,00,6e,00,66,00,3b,00,63,00,3a,00,5c,00,44,00,72,00,69,00,76,00,65,00,72,00,73,00,00,00
When these steps have been executed, Windows should boot from VirtIO disks without a BSOD. All other drivers (network, balloon, etc.) should then install automatically when Windows boots.
See: https://support.microsoft.com/en-us/kb/314082 (written for Windows XP, but it is still usable for Windows 2003 and 2008).
See also: http://libguestfs.org/virt-copy-in.1.html and http://libguestfs.org/virt-win-reg.1.html
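As a concrete illustration, the registry file can be merged and the driver copied into the offline disk with the libguestfs tools linked above (a sketch; the disk path and the driver source location are assumptions):
# Merge the viostor registry entries into the offline Windows disk
virt-win-reg --merge /data/vm01/vm01-disk0.vmdk mergeviostor.reg
# Copy viostor.sys (taken from the virtio-win drivers) to where ImagePath points
virt-copy-in -a /data/vm01/vm01-disk0.vmdk viostor.sys /Windows/System32/drivers/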
Some Windows servers I migrated had limited free disk space on the Windows partition; there was not enough space to install new management applications. So, I used the vmware-vdiskmanager tool with the ‘-x’ argument (downloadable from VMware.com) to increase the disk size. You then still need to expand the partition from within the operating system, which you can do while customizing the virtual machine in the next step.
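For example, to grow a VMDK to 40GB (the size and file name here are placeholders):
vmware-vdiskmanager -x 40GB vm01-disk0.vmdk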
To prepare the operating system to run in OpenStack, you probably want to uninstall some software (like VMware Tools and drivers), change passwords, install new management tooling, etc. You can automate this by writing a script that does it for you (such scripts are beyond the scope of this article). You should be able to inject the script and files into the virtual disk with the virt-copy-in command.
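For example, to place the Windows startup script referenced below in the root of the system drive (the disk path is a placeholder):
virt-copy-in -a /data/vm01/vm01-disk0.vmdk StartupWinScript.vbs /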
I started the scripts within Linux manually as I only had a few Linux servers to migrate. I guess Linux engineers should be able to completely automate this.
I chose the RunOnce method to start scripts at Windows boot, as it works on all versions of Windows that I had to migrate. You can put a script in RunOnce by injecting a registry file. RunOnce scripts only run after a user has logged in, so you should also inject a Windows administrator UserName and Password, and set AutoAdminLogon to ‘1’. When Windows starts, it will automatically log in as the defined user. Make sure to shut down the virtual machine when done.
Example registry file to automatically log in to Windows (with user ‘Administrator’ and password ‘Password’) and start C:\StartupWinScript.vbs:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce]
"Script"="cscript C:\\StartupWinScript.vbs"
"Parameters"=""
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
"AutoAdminLogon"="1"
"UserName"="Administrator"
"Password"="Password"
For every disk you want to import, you need to create a Cinder volume. The volume size in the Cinder command does not really matter, as we remove (and recreate with the import) the Ceph device in the next step. We create the cinder volume only to create the link between Cinder and Ceph.
Nevertheless, you should keep the volume size the same as the disk you are planning to import. This is useful for the overview in the OpenStack dashboard (Horizon).
You create a Cinder volume with the following command (the size is in GB; you can check the available volume types with cinder type-list):
cinder create --display-name <name_of_disk> <size> --volume-type <volumetype>
Note the volume ID (you can also find it with the following command), as we need the IDs in the next step.
cinder list | grep <name_of_disk>
Cinder command information: http://docs.openstack.org/cli-reference/content/cinderclient_commands.html
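A concrete example of the two commands above (the volume name, size and volume type ‘ceph’ are placeholders):
cinder create --display-name vm01-disk0 40 --volume-type ceph
cinder list | grep vm01-disk0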
As soon as the Cinder volumes are created, we can convert the VMDK disk files to RBD block devices (Ceph). But first we need to remove the actual Ceph device. Make sure you remove the correct Ceph block device!
First you should know in which Ceph pool the disk resides. Then remove the volume from Ceph (the volume-id is the ID you noted in the previous step, ‘Create Cinder volumes’):
rbd -p <ceph_pool> rm volume-<volume-id>
The next step is to convert the VMDK file into the volume on Ceph (all ceph* arguments will result in better performance; the vmdk_disk_file variable is the complete path to the VMDK file, and the volume-id is the ID that you noted before):
qemu-img convert -p <vmdk_disk_file> -O rbd rbd:<ceph_pool>/volume-<volume-id>
Do this for all virtual disks of the virtual machine.
Be careful! The rbd command is VERY powerful (you could destroy more data on Ceph than intended)!
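Putting the two commands together for one disk (the pool name ‘volumes’, the volume ID and the file path are placeholders):
# Remove the empty Ceph device that Cinder created
rbd -p volumes rm volume-3a9e6c1d-8f20-4f3a-9c6b-2b7d1e5f0a44
# Import the VMDK into an RBD image with exactly the same name
qemu-img convert -p /data/vm01/vm01-disk0.vmdk -O rbd rbd:volumes/volume-3a9e6c1d-8f20-4f3a-9c6b-2b7d1e5f0a44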
In some cases you might want to set a fixed IP address or MAC address. You can do that by creating a port with neutron and using that port in the next step (create and boot the instance in OpenStack).
You should first find out what the network_name is (run nova net-list; you need the ‘Label’). Only the network_name is mandatory. You can also add security groups by adding:
--security-group <security_group_name>
Add this parameter once for each security group; so if you want to add e.g. 6 security groups, you add this parameter 6 times.
neutron port-create --fixed-ip ip_address=<ip_address> --mac-address <mac_address> <network_name> --name <port_name>
Note the id of the neutron port, you will need it in the next step.
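For example (the IP address, MAC address, network name and port name are placeholders):
neutron port-create --fixed-ip ip_address=192.168.1.25 --mac-address fa:16:3e:11:22:33 prod-net --name vm01-port0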
Now we have everything prepared to create an instance from the Cinder volumes and an optional neutron port.
Note the volume-id of the boot disk.
Now you only need to know the id of the flavor you want to choose. Run nova flavor-list to get the flavor-id of the desired flavor.
Now you can create and boot the new instance:
nova boot <instance_name> --flavor <flavor_id> --boot-volume <boot_volume_id> --nic port-id=<neutron_port_id>
Note the instance ID. Then add each additional disk of the instance by executing this command (if there are other volumes you want to attach):
nova volume-attach <instance_ID> <volume_id>
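A complete example with placeholder IDs:
nova boot vm01 --flavor 2 --boot-volume 6b0f9bb2-4c1e-44d5-9a3e-7d2f8c5e1a90 --nic port-id=0c7e2d44-5a6f-4e8b-92d1-3f4a6b8c0e12
nova volume-attach 4f2a8c6e-1b3d-45e7-8a90-c2d4e6f8a0b1 9e1d3c5b-7a90-42f6-8b2c-d4e6f8a0b1c3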
This post first appeared on Nathan Portegijs’ blog. Superuser is always interested in how-tos and other contributions, please get in touch: editor@superuser.org
Cover Photo by Clement127 // CC BY NC