Wednesday 1 June 2011

Basics of Memory Management - Part 2

In my previous article I quickly went through the main elements and logic of memory management. You will definitely want to read it before you start this one. Today I would like to go deeper into memory management principles, and particularly into vSphere memory.

Just to recap the essentials of the previous article: there are two main elements of the Memory Management Unit (MMU), the Page Table Walker and the Translation Lookaside Buffer (TLB). When an application sends a request to a specific memory address, it uses the Virtual Address (VA) of memory. The MMU has to walk through the Page Table, find the corresponding Physical Address (PA) translation for the VA, and put this pair (VA -- PA) into the TLB. This speeds up subsequent accesses to the same memory addresses. Since the Physical Address (PA) of a Virtual Machine is still a virtual address, there is a need for an additional translation layer between the PA and the actual memory address of the ESX host, that is, the Host Address (HA).
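If it helps to see the idea in code, here is a minimal Python sketch of a TLB sitting in front of the two translation layers. It is my own simplified illustration, not ESX code; the dictionaries merely stand in for the real hardware structures:

    # Simplified model of the recap above: on a memory access the MMU first
    # checks the TLB; on a miss it walks the page tables and caches the result
    # so that subsequent accesses to the same address are fast.

    guest_page_table = {0x1000: 0x5000}   # VA -> PA (maintained by the guest OS)
    host_page_table  = {0x5000: 0x9000}   # PA -> HA (maintained by the hypervisor)
    tlb = {}                              # cache of finished translations

    def translate(va):
        if va in tlb:                     # TLB hit: no page walk needed
            return tlb[va]
        pa = guest_page_table[va]         # walk 1: virtual -> physical
        ha = host_page_table[pa]          # walk 2: physical -> host
        tlb[va] = ha                      # cache the pair for later accesses
        return ha

    print(hex(translate(0x1000)))         # first access walks both tables
    print(hex(translate(0x1000)))         # second access is served from the TLB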



There are two memory virtualization techniques in vSphere: Shadow Page Tables and Hardware Assisted MMU. To better understand them, I need to introduce one more element of vSphere - the Virtual Machine Monitor (VMM). The VMM is responsible for executing all Virtual Machine instructions against the physical CPU and memory, and there is one VMM per Virtual Machine. The VMkernel doesn't deal with such tasks. Frankly speaking, that was a big surprise for me.

Software Memory Virtualization

So, the first memory virtualization technique runs in software. The VMM creates a Shadow Page Table which consists of two types of memory address translations:
1. VA -- PA: these mappings are copied directly from the VM Page Table, which is populated by the virtual machine's operating system;
2. PA -- HA: these are managed by the VMM.
Every time an application in the guest OS requests access to a VA, the MMU is instructed to look directly into the Shadow Page Table and find the actual Host Address of memory. VMware assures that this process of memory address mapping runs almost as fast as it does on a physical server.
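Here is a minimal sketch (again my own illustration, not VMware's implementation) of why the lookup itself is fast: the VMM pre-combines the two mappings into the shadow table, so resolving a VA takes a single walk:

    guest_page_table = {0x1000: 0x5000}   # VA -> PA, owned by the guest OS
    vmm_page_table   = {0x5000: 0x9000}   # PA -> HA, owned by the VMM

    # The VMM builds the shadow table by composing the two mappings.
    shadow_page_table = {va: vmm_page_table[pa]
                         for va, pa in guest_page_table.items()}

    # The MMU consults only the shadow table: one lookup, near-native speed.
    print(hex(shadow_page_table[0x1000]))  # -> 0x9000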
However, there are other types of memory requests that cannot be executed at the same pace as on a physical server. For instance, every time there is a change in the Virtual Machine Page Table, the VMM has to intercept this change and update the corresponding part of the Shadow Page Table. Another good example is when an application makes its very first request to memory: since there is no mapping yet for this VA, the VMM has to create a new mapping in the Shadow Page Table, thus slowing down the first access. Lastly, Shadow Page Tables themselves consume memory.
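And a sketch of the first overhead from the list above: every guest page-table write has to trap into the VMM so the shadow entry can be refreshed. All names here are illustrative:

    guest_page_table  = {0x1000: 0x5000}  # VA -> PA, owned by the guest OS
    vmm_page_table    = {0x5000: 0x9000,  # PA -> HA, owned by the VMM
                         0x6000: 0xA000}
    shadow_page_table = {0x1000: 0x9000}  # VA -> HA, derived by the VMM

    def guest_writes_pte(va, new_pa):
        guest_page_table[va] = new_pa     # the guest's own update...
        # ...traps into the VMM, which re-derives the VA -> HA shadow entry
        shadow_page_table[va] = vmm_page_table[new_pa]

    guest_writes_pte(0x1000, 0x6000)      # each such write costs a VMM round trip
    print(hex(shadow_page_table[0x1000])) # -> 0xa000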

Hardware Assisted Memory Virtualization

Currently, there are two main competing MMU virtualization technologies on the market. One was introduced by Intel in the Nehalem generation of CPUs and is called Extended Page Tables (EPT). The other was built by AMD into its "Barcelona" family of CPUs and is called Rapid Virtualization Indexing (RVI). Both technologies do almost the same thing and differ only in details that are not really important to us.
The main advantage of these technologies is that the new MMU has two page table walkers. The first one looks up VA -- PA matches in the page table managed by the Virtual Machine. The second one looks up PA -- HA matches in a table managed by the VMM; this type of table, containing PA -- HA translations, is called an Extended or Nested Page Table. Once the two corresponding matches are found, the MMU puts the resulting VA -- HA pair into the TLB. Since both page tables are exposed to the MMU, there is no more need for Shadow Page Tables, and the overhead of synchronizing the virtual machine page table with the VMM page table is eliminated. When there is a change in the virtual machine page table, the VMM doesn't need to be involved.
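A minimal sketch of the hardware-assisted scheme, under the same simplifying assumptions as the earlier snippets: the MMU itself walks both tables and caches the composed result, so the guest can change its page table without any VMM involvement:

    guest_page_table  = {0x1000: 0x5000}  # VA -> PA, used by hardware walker #1
    nested_page_table = {0x5000: 0x9000,  # PA -> HA, used by hardware walker #2
                         0x6000: 0xA000}  #   (the Extended/Nested Page Table)
    tlb = {}                              # caches the combined VA -> HA result

    def mmu_translate(va):
        if va in tlb:
            return tlb[va]                # TLB hit: no walk at all
        pa = guest_page_table[va]         # walker #1: guest table
        ha = nested_page_table[pa]        # walker #2: nested table
        tlb[va] = ha                      # cache the composed VA -> HA pair
        return ha

    print(hex(mmu_translate(0x1000)))     # -> 0x9000
    guest_page_table[0x1000] = 0x6000     # the guest remaps freely: no VMM trap,
    tlb.clear()                           # only the stale TLB entry is flushed
    print(hex(mmu_translate(0x1000)))     # -> 0xa000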
The only drawback of this solution is the cost of a TLB miss. Every time the MMU can't find a VA -- HA mapping in the TLB, it has to walk through two page tables - the virtual machine's one and the extended page table. That's where you can gain a significant performance improvement by using Large Pages, as they substantially reduce the number of TLB misses.
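To put a rough, back-of-the-envelope number on that (my own estimate, not a figure from any VMware document): with four-level page tables on both the guest and the host side, each reference made during the guest walk must itself be translated through the four-level nested table, so a worst-case TLB miss can cost on the order of (4 + 1) x (4 + 1) - 1 = 24 memory references, compared with 4 on a physical server. Large pages help twice: they remove one level from each walk, and a single TLB entry then covers 2 MB instead of 4 KB, so misses become much rarer in the first place.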
You may want to check the performance comparison of the old and new memory virtualization technologies here. This is the conclusion of that document:

Intel EPT-enabled CPUs offload a significant part of the VMM's MMU virtualization responsibilities to the hardware, resulting in higher performance. Results of experiments done on this platform indicate that the current VMware VMM leverages these features quite well, resulting in performance gains of up to 48% for MMU-intensive benchmarks and up to 600% for MMU-intensive microbenchmarks. We recommend that TLB-intensive workloads make extensive use of large pages to mitigate the higher cost of a TLB miss.

I hope this article is not overburdened with too many details and that I managed to give you the main idea of the benefits of the Hardware Assisted MMU. If you would like to read more, I can recommend the following document - Software and Hardware Techniques for x86 Virtualization - and a very detailed and nicely presented article - Hardware Virtualization: the Nuts and Bolts.


