Monday 30 June 2014

Time Slicing - Top 5 performance & capacity concerns for VMware

As discussed in Friday's blog, the large difference between what the OS thinks is happening and what is actually happening comes down to time slicing.
 
In a typical VMware host we have more vCPUs assigned to Virtual Machines (VMs) than we have physical cores.

The processing time of those cores has to be shared among the vCPUs. Cores are shared between vCPUs in time slices, with one vCPU on one core at any point in time.

More vCPUs mean more time slicing. The more vCPUs we have, the less time each can spend on a core, and therefore the slower time passes for that VM. To keep the VM's clock in step, extra timer interrupts are sent in quick succession, so time passes slowly and then very fast.
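As a rough illustration (this assumes a perfectly fair share of core time, which is a simplification of what the real ESXi CPU scheduler does), a few lines of Python show how the share of core time per vCPU shrinks as more vCPUs are assigned to the same physical cores:

    # Illustrative sketch only: assumes core time is shared evenly
    # across all vCPUs, ignoring scheduler overheads and idle vCPUs.
    def on_core_fraction(physical_cores, total_vcpus):
        """Fraction of wall-clock time each vCPU can spend on a core."""
        return min(1.0, physical_cores / total_vcpus)

    for vcpus in (8, 16, 32, 64):
        share = on_core_fraction(8, vcpus)
        print(f"{vcpus} vCPUs on 8 cores -> each vCPU on a core {share:.0%} of the time")

With 64 vCPUs on 8 cores, each vCPU can only be on a core around an eighth of the time, which is why the guest's clock falls behind and has to be caught up with bursts of timer interrupts.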


A VM with multiple vCPUs has a distinct disadvantage when scheduling CPU cycles. A 1 vCPU VM can execute instructions as soon as a single core is available. If a VM has 4 vCPUs, it cannot execute any instructions until 4 cores are available. I'll be looking at this in more detail on Wednesday when I take a look at Ready Time; a simple sketch of the effect is below.
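To see why that matters, here is a small Python simulation. It is not a model of the real ESXi co-scheduler; it simply assumes each core has some fixed chance of being busy at any instant and counts how often enough cores are free at the same moment:

    # Illustrative sketch: how often can a 1 vCPU VM vs a 4 vCPU VM
    # find enough free cores at the same instant?
    import random

    CORES = 8
    BUSY_PROBABILITY = 0.7   # assumed chance any core is busy at a given tick
    TICKS = 100_000

    def chance_of_enough_free_cores(needed):
        hits = 0
        for _ in range(TICKS):
            free = sum(random.random() > BUSY_PROBABILITY for _ in range(CORES))
            if free >= needed:
                hits += 1
        return hits / TICKS

    print(f"1 vCPU VM finds a slot {chance_of_enough_free_cores(1):.1%} of the time")
    print(f"4 vCPU VM finds a slot {chance_of_enough_free_cores(4):.1%} of the time")

Under those assumed numbers the 4 vCPU VM gets far fewer opportunities to run, and the time it spends waiting shows up as Ready Time.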

In the meantime, don't forget to sign up for our webinar 'Taking a Trip down VMware vSphere Memory Lane', which looks at how memory is used in a VMware vSphere environment: http://www.metron-athene.com/services/training/webinars/index.html

Phil Bell
Consultant
