Tag Archives: virtualization

Haibo Chen, Shanghai Jiaotong University – April 2012

Parallelizing Live Migration of Virtual Machines in Shared-pool Datacenter

Live migration is a key technology in today’s shared-pool datacenters. However, as the number of resources managed by a VM increases, migrating a VM among machine nodes becomes more and more time-consuming and intrusive, and is thus likely to hurt application performance or even disrupt running services. Even worse, the recent practice of placing a local storage cache such as flash memory between a VM and its networked storage creates further challenges for live VM migration: it is usually necessary to migrate the storage cache (typically tens to hundreds of gigabytes) along with the VM itself, to avoid the hours or even days a VM would otherwise need to regain its optimal performance.

In this project, we aim to reduce both the migration time and the downtime of migrating a VM in a shared-pool datacenter with a local storage cache. Given that typical off-the-shelf servers provide multiple NICs and multiple CPU cores, we propose to parallelize both the tracking of dirty pages and disk blocks in the local cache and the transmission of memory pages and disk blocks across multiple NICs and cores. Our investigation will yield a more comprehensive understanding of live VM migration with local storage caches, and the proposed approach could significantly reduce both migration time and downtime.
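The fan-out of migration traffic over multiple NICs and cores can be sketched as follows. This is a minimal illustrative model, not the project's actual implementation: each worker thread stands in for one NIC/core pair, and the dirty pages are partitioned dynamically through a shared work queue (all names here, such as `parallel_migrate`, are hypothetical).

```python
# Hypothetical sketch of parallelized migration transfer: dirty pages are
# fanned out over several worker threads, each modeling one NIC/core pair.
import threading
import queue

def transfer_worker(nic_id, work_queue, sent_pages, lock):
    """Model one NIC draining its share of dirty pages from the queue."""
    while True:
        page = work_queue.get()
        if page is None:          # sentinel: no more pages to send
            break
        # A real implementation would write the page over a socket bound
        # to NIC `nic_id`; here we only record which NIC sent which page.
        with lock:
            sent_pages.append((nic_id, page))

def parallel_migrate(dirty_pages, num_nics=4):
    """Transfer `dirty_pages` in parallel over `num_nics` workers."""
    work_queue = queue.Queue()
    sent_pages, lock = [], threading.Lock()
    workers = [
        threading.Thread(target=transfer_worker,
                         args=(i, work_queue, sent_pages, lock))
        for i in range(num_nics)
    ]
    for w in workers:
        w.start()
    for page in dirty_pages:
        work_queue.put(page)
    for _ in workers:             # one sentinel per worker
        work_queue.put(None)
    for w in workers:
        w.join()
    return sent_pages

sent = parallel_migrate(list(range(64)), num_nics=4)
assert sorted(page for _, page in sent) == list(range(64))
```

The dynamic queue keeps all NICs busy even when page transfer times vary; a static partition of the page range would be simpler but can leave some NICs idle while others lag.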


Randal Burns, Johns Hopkins University – November 2010

Reducing Memory and I/O Interference for Virtualized Systems and Cloud Computing

Increasingly, I/O and memory contention limit the performance of applications running in virtualized environments and the cloud. The problem is particularly acute because virtualized systems share memory and I/O resources. Processing resources can be divided by core and shared incrementally through context switching with low overhead. In contrast, workloads sharing memory and I/O interfere with each other: interleaving I/O requests destroys sequential I/O and disk-head locality, memory sharing reduces cache-hit rates, and processor sharing flushes high-level caches. This project will develop mechanisms to reduce memory and I/O interference in these environments.