Following on from Monday's blog, the effect we saw between the OS and VMware is caused by time slicing. In a typical VMware host we have more vCPUs assigned to VMs than we have physical cores, a situation known as over-provisioning, and to some extent the original purpose of virtualization.
The processing time of the physical cores has to be shared among the vCPUs in the VMs. The more vCPUs we have, the less time each can spend on a core, and therefore the slower time passes for that VM. To keep the VM's clock in sync, extra timer interrupts are sent in quick succession. So time passes slowly, and then very fast.
Time no longer passes at a constant rate, but the OS doesn't know that. So the safest approach is to avoid using anything from the OS that involves an element of time.
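To make the effect visible, here is a minimal Python sketch (my own illustration, not from the original post) that samples the clock at a fixed cadence and flags intervals that land noticeably off target; the INTERVAL, SAMPLES, and TOLERANCE values are arbitrary assumptions.

import time

INTERVAL = 0.1   # intended gap between samples, in seconds (illustrative)
SAMPLES = 100    # number of intervals to measure (illustrative)
TOLERANCE = 0.5  # flag deltas more than 50% away from the target

# Sample the guest OS clock at a fixed cadence. On physical hardware the
# measured deltas cluster tightly around INTERVAL. In an over-provisioned
# VM the deltas can be uneven: long while the vCPU is waiting for a
# physical core, and, on tick-based guest clocks, short while catch-up
# timer interrupts are being delivered.
prev = time.monotonic()
for i in range(SAMPLES):
    time.sleep(INTERVAL)
    now = time.monotonic()
    delta = now - prev
    prev = now
    if abs(delta - INTERVAL) > INTERVAL * TOLERANCE:
        print(f"sample {i}: expected {INTERVAL:.3f}s, observed {delta:.3f}s")

Run on a physical machine the output is usually empty; run inside a busy, over-provisioned VM it can show intervals stretching and snapping back, which is exactly why OS-derived timings become suspect.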
Significant improvements have been made in this area over the releases of VMware. VMware Tools has a number of tricks to try to keep the OS metrics as close to reality as possible, and the co-scheduling of vCPUs has improved. But the underlying time slicing remains in place. Later I will discuss how it can be OK to use averages and estimates when reporting on the future, given that we have a choice between accurate data from VMware and less accurate data from the OS.
I would suggest that taking accuracy where we can easily do so has to be the better option.
On Friday I'll be looking at the 5 key VMware metrics to monitor. In the meantime, take a look at the great selection of white papers and on-demand webinars on VMware in our Resources section.
Phil Bell
Consultant