Separation between vCPUs and performance of KVM VMs

The web is filled with « performance tests » that compare the performance of a virtualized system against the bare-metal system.

I made this test to get an idea of the impact of high CPU usage in one guest on the other guests of a hypervisor (in this case, KVM), especially when the host is not overloaded/overcommitted.

For this very simple test, I used:

  • A KVM-based hypervisor with 2 Intel Xeon E5-2650 v2 CPUs, 16 cores each:
    32 cores in total, i.e. 32 vCPUs you can give to guests.
  • One guest VM running Windows 2012, assigned 2 CPUs with 8 cores each (16 vCPUs).
  • One guest VM running CentOS 6.7, assigned 2 CPUs with 4 cores each (8 vCPUs).
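The post doesn't show how the vCPUs were assigned. With KVM this is typically done through libvirt; a sketch, assuming a libvirt-managed guest whose domain name is `centos67` (a hypothetical name, not from the post):

```shell
# Hypothetical libvirt commands; the domain name "centos67" is an assumption.
# Raise the maximum vCPU count, then the configured count (applies at next boot):
virsh setvcpus centos67 8 --maximum --config
virsh setvcpus centos67 8 --config

# The socket/core layout (2 sockets x 4 cores) lives in the domain XML,
# editable with `virsh edit centos67`, e.g.:
#   <vcpu>8</vcpu>
#   <cpu><topology sockets='2' cores='4' threads='1'/></cpu>
```

These commands require a running libvirt daemon, so treat them as a configuration sketch rather than something to paste blindly.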

I wanted a hypervisor roughly twice as powerful as its guests, and to put it in a situation where it should be able to completely separate the CPU usage of the guests.

Before my actual « separation » test, I wanted to run one of those boring « virtual vs bare metal » tests. I benchmarked a single CPU of a completely idle hypervisor with sysbench, then did the same on an idle CentOS 6.7 guest running as the only VM on this idle host. The vCPU of the guest turned out to be 0.67% slower than the CPU of the host. The point here is to show that, unsurprisingly, there is very little difference between the CPU performance of the guest and the host when the hypervisor is idle.
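The post doesn't give the exact sysbench invocation; a single-CPU run would look something like this (sysbench 1.0 syntax, with an assumed duration):

```shell
# Single-threaded CPU benchmark: pure prime-number computation, no I/O.
# sysbench 1.0 syntax; pre-1.0 releases use the older form:
#   sysbench --test=cpu --num-threads=1 run
sysbench cpu --threads=1 --time=60 run
```

Comparing the « events per second » (or total time) figure between host and guest gives the relative vCPU speed.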

The test

I heavily loaded the CPUs of the Windows guest; remember, it has been assigned 50% of the CPU power of the host/hypervisor. I used a workload that consumes only CPU and RAM (SETI@home). Both Windows and the hypervisor confirm that this machine now runs at 100% CPU (on 16 cores) with close to no I/O at all.

Then, on the other guest (CentOS 6.7), I ran sysbench on a single CPU (one thread); remember this guest is assigned at most 25% of the cores of the host, and the remaining 25% of the cores are not assigned at all. Basically, the host is roughly 50% idle.

  • Sysbench on a single vCPU on Linux, when SETI@home is off (0% CPU) on the Windows VM.
  • Sysbench on a single vCPU on Linux, when SETI@home is active and using 100% CPU (on 16 vCPUs) on the Windows VM.
  • My results show a significant impact of the « 100% CPU guest » (50% of host CPU) on the vCPU speed available to the other guest: the CPU of the Linux guest gets 6.75% slower when the other guest is using CPU power, and this number is steady and always reproducible.
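The slowdown percentage is just the relative change in sysbench's timing. A sketch of the arithmetic with hypothetical « total time » values (the post reports only the percentage, not the raw sysbench output):

```shell
# Hypothetical sysbench "total time" values, chosen only to illustrate the
# math; the real measurements are not given in the post.
t_idle=24.00     # seconds, Windows guest idle
t_loaded=25.62   # seconds, Windows guest at 100% CPU
awk -v a="$t_idle" -v b="$t_loaded" \
    'BEGIN { printf "slowdown: %.2f%%\n", (b - a) / a * 100 }'
# prints: slowdown: 6.75%
```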

Then, I assigned 16 vCPUs to the Linux machine, so the new test is:

  1. SETI@home on Windows using 100% of its 16 vCPUs (out of 32 cores on the host).
  2. Sysbench with 16 threads, also using 100% of its 16 vCPUs.
  3. Basically, the system is split in two: each guest runs at 100% CPU, each guest uses 50% of the cores of the host, and I compare the speed of the Linux guest with and without the Windows guest running.
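The 16-thread run is the same benchmark scaled up (again sysbench 1.0 syntax, and assuming the same options as the single-thread run):

```shell
# 16 worker threads, one per vCPU of the Linux guest.
sysbench cpu --threads=16 --time=60 run
```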

This time, performance is degraded by 12% on the Linux guest when SETI@home is active on the Windows guest.


These results came as a bit of a surprise to me. First of all, I wasn’t expecting a 7% drop in vCPU performance when only half of the cores of the hypervisor are in use; I was expecting less, more like a couple of percent.

On the other hand, I’m surprised that when the host is used at maximum capacity, the « permeability » between guests remains acceptable (12% slower). This time, the hypervisor has no remaining power at all!

Of course, performance must also be degraded on the Windows guest, but I wanted a view of the « remaining CPU power ». Basically, if you intend to run very busy servers, two single-CPU bare-metal boxes would perform at least 12% faster than a dual-CPU hypervisor with these two guests virtualized on it.

I know this is a very simple test, but surprisingly few people seem to be interested in the impact of one guest on another. It would also be nice to run these tests on VMware or Xen; these were done with KVM.
