
[Repost] Understanding QEMU's device emulation mechanism

Posted: 2019-06-21


Understanding QEMU devices

https://www.qemu.org/2018/02/09/understanding-qemu-devices/

February 9, 2018

Here are some notes that may help newcomers understand what is actually happening with QEMU devices:

With QEMU, one thing to remember is that we are trying to emulate what an Operating System (OS) would see on bare-metal hardware. Most bare-metal machines are basically giant memory maps, where software poking at a particular address will have a particular side effect (the most common side effect is, of course, accessing memory; but other common regions in memory include the register banks for controlling particular pieces of hardware, like the hard drive or a network card, or even the CPU itself). The end-goal of emulation is to allow a user-space program, using only normal memory accesses, to manage all of the side-effects that a guest OS is expecting.

【What a computer essentially is】
Most bare-metal machines are basically giant memory maps: software "poking" at a particular address produces a particular side effect.
The most common side effect is accessing memory, but regions of the map also include the register banks that control specific pieces of hardware, such as the hard drive, the network card, or even the CPU itself.

As an implementation detail, some hardware, like x86, actually has two memory spaces, where I/O space uses different assembly codes than normal; QEMU has to emulate these alternative accesses. Similarly, many modern CPUs provide themselves a bank of CPU-local registers within the memory map, such as for an interrupt controller.

【Two address spaces】
x86 has two address spaces; its I/O space is accessed with different assembly instructions than normal memory.
QEMU must emulate these alternative accesses as well.
Similarly, many modern CPUs provide themselves a bank of CPU-local registers within the memory map, for example for an interrupt controller.

With certain hardware, we have virtualization hooks where the CPU itself makes it easy to trap on just the problematic assembly instructions (those that access I/O space or CPU internal registers, and therefore require side effects different than a normal memory access), so that the guest just executes the same assembly sequence as on bare metal, but that execution then causes a trap to let user-space QEMU then react to the instructions using just its normal user-space memory accesses before returning control to the guest. This is supported in QEMU through “accelerators”.

【CPU hardware support for virtualization】
Certain hardware provides virtualization hooks:
the CPU can easily trap exactly the "problematic" assembly instructions
(those that access I/O space or CPU-internal registers, and therefore expect side effects different from a normal memory access),
so the guest simply executes the same assembly sequence as on bare metal.
Each such instruction causes a trap; user-space QEMU reacts to it using only its normal user-space memory accesses,
then returns control to the guest. QEMU supports this through "accelerators".

Virtualizing accelerators, such as KVM, can let a guest run nearly as fast as bare metal, where the slowdowns are caused by each trap from guest back to QEMU (a vmexit) to handle a difficult assembly instruction or memory address. QEMU also supports other virtualizing accelerators (such as HAXM or macOS's Hypervisor.framework).

QEMU also has a TCG accelerator, which takes the guest assembly instructions and compiles them on the fly into comparable host instructions or calls to host helper routines; while not as fast as hardware acceleration, it allows cross-hardware emulation, such as running ARM code on x86.

The next thing to realize is what is happening when an OS is accessing various hardware resources. For example, most operating systems ship with a driver that knows how to manage an IDE disk - the driver is merely software that is programmed to make specific I/O requests to a specific subset of the memory map (wherever the IDE bus lives, which is specific to the hardware board). When the IDE controller hardware receives those I/O requests it then performs the appropriate actions (via DMA transfers or other hardware action) to copy data from memory to persistent storage (writing to disk) or from persistent storage to memory (reading from the disk).

【What a driver essentially is】
A driver is just software: its job is to issue specific I/O requests to a specific subset of the memory map.
When the IDE controller receives those requests, it performs the actual disk reads and writes.
When a disk is first initialized, the driver makes enough accesses to the IDE hardware's portion of the memory map to partition the disk and create filesystems.

When you first buy bare-metal hardware, your disk is uninitialized; you install the OS that uses the driver to make enough bare-metal accesses to the IDE hardware portion of the memory map to then turn the disk into a set of partitions and filesystems on top of those partitions.

So, how does QEMU emulate this? In the big memory map it provides to the guest, it emulates an IDE disk at the same address as bare-metal would. When the guest OS driver issues particular memory writes to the IDE control registers in order to copy data from memory to persistent storage, the QEMU accelerator traps accesses to that memory region, and passes the request on to the QEMU IDE controller device model. The device model then parses the I/O requests, and emulates them by issuing host system calls. The result is that guest memory is copied into host storage.

【How QEMU implements the emulation】
In the big memory map QEMU provides to the guest, it maps an IDE disk at the same address as bare metal would.
When the guest accesses that memory region in order to write to disk, the QEMU accelerator traps the access
and passes the request on to QEMU's IDE controller device model.
The model parses the I/O request and emulates it by issuing host system calls.
The end result is that guest memory is copied into host storage.

On the host side, the easiest way to emulate persistent storage is via treating a file in the host filesystem as raw data (a 1:1 mapping of offsets in the host file to disk offsets being accessed by the guest driver), but QEMU actually has the ability to glue together a lot of different host formats (raw, qcow2, qed, vhdx, …) and protocols (file system, block device, NBD, Ceph, gluster, …) where any combination of host format and protocol can serve as the backend that is then tied to the QEMU emulation providing the guest device.

Thus, when you tell QEMU to use a host qcow2 file, the guest does not have to know qcow2, but merely has its normal driver make the same register reads and writes as it would on bare metal, which cause vmexits into QEMU code, then QEMU maps those accesses into reads and writes in the appropriate offsets of the qcow2 file. When you first install the guest, all the guest sees is a blank uninitialized linear disk (regardless of whether that disk is linear in the host, as in raw format, or optimized for random access, as in the qcow2 format); it is up to the guest OS to decide how to partition its view of the hardware and install filesystems on top of that, and QEMU does not care what filesystems the guest is using, only what pattern of raw disk I/O register control sequences are issued.

The next thing to realize is that emulating IDE is not always the most efficient. Every time the guest writes to the control registers, it has to go through special handling, and vmexits slow down emulation. Of course, different hardware models have different performance characteristics when virtualized. In general, however, what works best for real hardware does not necessarily work best for virtualization, and until recently, hardware was not designed to operate fast when emulated by software such as QEMU. Therefore, QEMU includes paravirtualized devices that are designed specifically for this purpose.

【QEMU uses virtio to mitigate the performance problem】
Different hardware models have different performance characteristics when virtualized.
In general, however, what works best for real hardware is not necessarily best for virtualization,
and until recently, hardware was not designed to run fast when emulated by software such as QEMU.
QEMU therefore includes paravirtualized devices designed specifically to address this.

The meaning of “paravirtualization” here is slightly different from the original one of “virtualization through cooperation between the guest and host”. The QEMU developers have produced a specification for a set of hardware registers and the behavior for those registers which are designed to result in the minimum number of vmexits possible while still accomplishing what a hard disk must do, namely, transferring data between normal guest memory and persistent storage. This specification is called virtio; using it requires installation of a virtio driver in the guest. While no physical device exists that follows the same register layout as virtio, the concept is the same: a virtio disk behaves like a memory-mapped register bank, where the guest OS driver then knows what sequence of register commands to write into that bank to cause data to be copied in and out of other guest memory. Much of the speedups in virtio come by its design - the guest sets aside a portion of regular memory for the bulk of its command queue, and only has to kick a single register to then tell QEMU to read the command queue (fewer mapped register accesses mean fewer vmexits), coupled with handshaking guarantees that the guest driver won’t be changing the normal memory while QEMU is acting on it.

【Why virtio is fast】
virtio works much like the IDE register mechanism described above:
it defines a register-bank layout that no physical device uses,
and the guest's virtio driver operates on that bank.
In the virtio design, the guest sets aside a region of ordinary memory as a command queue;
kicking a single register is enough to tell QEMU to process the queued commands (greatly reducing the number of vmexits),
and handshaking guarantees that the guest driver will not modify that memory while QEMU is acting on it.

As an aside, just like recent hardware is fairly efficient to emulate, virtio is evolving to be more efficient to implement in hardware, of course without sacrificing performance for emulation or virtualization. Therefore, in the future, you could stumble upon physical virtio devices as well.

In a similar vein, many operating systems have support for a number of network cards, a common example being the e1000 card on the PCI bus. On bare metal, an OS will probe PCI space, see that a bank of registers with the signature for e1000 is populated, and load the driver that then knows what register sequences to write in order to let the hardware card transfer network traffic in and out of the guest. So QEMU has, as one of its many network card emulations, an e1000 device, which is mapped to the same guest memory region as a real one would live on bare metal.

The OS probes PCI space to see whether a bank of registers with the e1000 signature is populated.

And once again, the e1000 register layout tends to require a lot of register writes (and thus vmexits) for the amount of work the hardware performs, so the QEMU developers have added the virtio-net card (a PCI hardware specification, although no bare-metal hardware exists yet that actually implements it), such that installing a virtio-net driver in the guest OS can then minimize the number of vmexits while still getting the same side-effects of sending network traffic. If you tell QEMU to start a guest with a virtio-net card, then the guest OS will probe PCI space and see a bank of registers with the virtio-net signature, and load the appropriate driver like it would for any other PCI hardware.

virtio-net is a PCI hardware specification, although no bare-metal hardware exists yet that actually implements it.
To check inside the guest:
# ethtool -i eth0
driver: virtio_net

In summary, even though QEMU was first written as a way of emulating hardware memory maps in order to virtualize a guest OS, it turns out that the fastest virtualization also depends on virtual hardware: a memory map of registers with particular documented side effects that has no bare-metal counterpart. And at the end of the day, all virtualization really means is running a particular set of assembly instructions (the guest OS) to manipulate locations within a giant memory map for causing a particular set of side effects, where QEMU is just a user-space application providing a memory map and mimicking the same side effects you would get when executing those guest instructions on the appropriate bare metal hardware.

【The essence and evolution of QEMU】
In summary, although QEMU was first written as a way of emulating hardware memory maps in order to virtualize a guest OS,
it turns out that the fastest virtualization also depends on virtual hardware:
a memory map of registers with particular documented "side effects" that has no bare-metal counterpart.

The essence of all virtualization
is the guest running a particular set of assembly instructions that manipulate locations in a giant memory map to cause a particular set of "side effects".
QEMU is just a user-space application that provides that memory map and mimics the "side effects" those guest instructions would have on the corresponding bare-metal hardware.

(This post is a slight update on an email originally posted to the qemu-devel list back in July 2017).

Source (Chinese): https://www.cnblogs.com/qxxnxxFight/p/11063633.html
