
Monday, 10 July 2017

Understanding VMware Capacity - Ready Time (4 of 10)

Imagine you are driving a car, and you are stationary. There could be several reasons for this. You may be waiting to pick up someone, you may have stopped to take a phone call, or it might be that you have stopped at a red light. In the first two of these (pick up, phone), you decided to stop the car to perform a task. But in the third case, the red light is stopping you doing something you want to do. You spend the whole time at the red light ready to move away as soon as you get a green light. That time spent waiting at a red light is ready time. When a VM wants to use the processor but is stopped from doing so, it accumulates ready time. This has a direct impact on the performance of the VM.

Ready Time can be accumulated even if there are spare CPU MHz available. For any processing to happen all the vCPUs assigned to the VM must be running at the same time. This means if you have a 4 vCPU VM, all 4 vCPUs need available cores or hyperthreads to run. So the fewer vCPUs a VM has, the more likely it is to be able to get onto the processors. You can reduce contention by having as few vCPUs as possible in each VM. And if you monitor CPU Threads, vCPUs and Ready Time for the whole Cluster, then you’ll be able to see if there is a correlation between increasing vCPU numbers and Ready Time.
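As a sketch of that kind of monitoring, the snippet below computes a plain Pearson correlation between cluster-wide vCPU counts and hourly Ready Time. The sample figures are invented purely for illustration; in practice you would feed in the values your monitoring tool collects.

```python
# Hypothetical hourly cluster samples, purely illustrative:
# (total vCPUs scheduled on the cluster, Ready Time seconds in that hour)
samples = [(64, 120), (80, 310), (96, 620), (112, 1150), (128, 1900)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

vcpus, ready = zip(*samples)
print(f"vCPU count vs Ready Time correlation: {pearson(vcpus, ready):.2f}")
```

A coefficient close to 1 suggests that adding vCPUs to the cluster is driving up Ready Time, which is the signal to start trimming VM sizes.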


Here is a chart showing data collected for a VM. In each hour the VM is doing ~500 seconds of processing. The VM has 4 vCPUs. Despite doing just 500 seconds of processing, the ready time accumulated is between ~1200 and ~1500 seconds. So anything being processed spends roughly 3 times as long waiting to be processed as it does actually being processed, i.e. 1 second of processing could take 4 seconds to complete.


Now let's look at a VM on the same host, doing the same processing on the same day. Again we can see ~500 seconds of processing in each hour interval, but this time we only have 2 vCPUs. The ready time is about ~150 seconds, i.e. 1 second of processing takes 1.3 seconds.
By reducing the number of vCPUs in the first VM, we could improve transaction times to somewhere between a quarter and a third of their current time.
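The arithmetic behind those figures is just the ratio of elapsed time to useful work. A minimal sketch, using the approximate numbers from the two charts above:

```python
def slowdown(cpu_used_s, ready_s):
    """Elapsed time per second of useful work: (used + ready) / used.
    A value of 1.0 means no waiting at all."""
    return (cpu_used_s + ready_s) / cpu_used_s

# Approximate figures from the charts above.
print(slowdown(500, 1500))  # 4 vCPU VM: 4.0x elapsed per second of work
print(slowdown(500, 150))   # 2 vCPU VM: 1.3x elapsed per second of work
```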

Here's a short video to show the effect of what is happening inside the host to schedule the physical CPUs/cores to the vCPUs of the VMs. Clearly most hosts have more than 4 hardware threads available for processing, but let's keep this simple to follow.




1. VMs that are "ready" are moved onto the Threads.

2. There is not enough space for all the vCPUs in all the VMs, so some are left behind. (CPU Utilization = 75%, capacity used = 100%)

3. If a single vCPU VM finishes processing, the spare Threads can now be used to process a 2 vCPU VM. (CPU Utilization = 100%)

4. A 4 vCPU VM needs to process.

5. Even if the 2 single vCPU VMs finish processing, the 4 vCPU VM cannot use the CPU available.

6. And while it's accumulating Ready Time, other single vCPU VMs are able to take advantage of the available Threads.

7. Even if we end up in a situation where only a single vCPU is being used, the 4 vCPU VM cannot do any processing. (CPU Utilization = 25%)
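The steps above can be sketched with a toy strict co-scheduling simulator. This is a deliberate simplification (real ESXi uses relaxed co-scheduling, covered below), and the host size, VM names, and work amounts are invented for illustration: a 4-thread host where a 4 vCPU VM queues behind smaller VMs that keep fitting into the free threads.

```python
# Toy strict co-scheduling model: a VM runs only when ALL of its vCPUs
# get a hardware thread at the same time. Purely illustrative.
THREADS = 4

class VM:
    def __init__(self, name, vcpus, work):
        self.name, self.vcpus, self.work = name, vcpus, work
        self.ready_ticks = 0  # accumulated Ready Time, in ticks

def tick(vms, threads=THREADS):
    """One scheduling interval: greedily place any VM that fits into the
    free threads (smaller VMs can 'queue jump'), run the placed VMs for
    one tick, and count ready time for VMs that wanted to run but could not."""
    free, running = threads, []
    for vm in vms:
        if vm.work > 0 and vm.vcpus <= free:
            free -= vm.vcpus
            running.append(vm)
    for vm in vms:
        if vm.work > 0:
            if vm in running:
                vm.work -= 1
            else:
                vm.ready_ticks += 1

# "big" arrives second, but B and C jump the queue because they fit.
vms = [VM("A", 1, 3), VM("big", 4, 3), VM("B", 1, 3), VM("C", 2, 3)]
for _ in range(12):
    tick(vms)

for vm in vms:
    print(vm.name, "ready ticks:", vm.ready_ticks)
# "big" accumulates 3 ready ticks while A, B and C monopolise the threads.
```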



As mentioned when we discussed time slicing, improvements have been made in the area of co-scheduling with each release of VMware. Among other things, the allowed lag between individual vCPUs being scheduled onto the physical CPUs has increased, allowing for greater flexibility in scheduling VMs with large numbers of vCPUs. As a result, acceptable performance is seen from larger VMs.

Along with Ready Time, there is also a Co-Stop metric. Ready Time can be accumulated against any VM. Co-Stop is specific to VMs with 2 or more vCPUs and relates to the time "stopped" due to co-scheduling contention, e.g. one or more vCPUs have been allocated a physical CPU, but the VM is stopped waiting for its other vCPUs to be scheduled.
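When reading these counters, note that vCenter reports CPU Ready as a "summation" in milliseconds per sample interval (20 seconds for real-time charts), while esxtop shows a percentage. A commonly used conversion, sketched below; the function name is mine:

```python
def ready_percent(ready_ms, interval_s=20, vcpus=1):
    """Convert a vCenter CPU Ready 'summation' value (milliseconds per
    sample interval) into an esxtop-style %RDY figure. vCenter sums
    Ready Time across all of a VM's vCPUs, so divide by the vCPU count
    to get a per-vCPU average."""
    return ready_ms / (interval_s * 1000.0) / vcpus * 100.0

print(ready_percent(4000))           # 20.0 -> 20% ready for a 1 vCPU VM
print(ready_percent(4000, vcpus=4))  # 5.0  -> 5% per vCPU for a 4 vCPU VM
```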

Imagine the bottom of a "ready" VM sliding across to a thread, and the top sliding across later as other VMs move off the Threads. The VM is no longer rigid; it is more like an elastic band.
Phil Bell
Consultant


Friday, 16 December 2016

Virtualization Oversubscription - What’s so scary? VMware vCPU Co-Scheduling & Ready Time (14 of 20)

Today I'll explain the effect of what is happening inside the host to schedule the physical CPUs/cores to the vCPUs of the VMs. Clearly most hosts have more than 4 hardware threads available for processing, but let's keep this simple to follow.


· VMs that are "ready" are moved onto the Threads.
· There is not enough space for all the vCPUs in all the VMs, so some are left behind. (CPU Utilization = 75%, capacity used = 100%)
· If a single vCPU VM finishes processing, the spare Threads can now be used to process a 2 vCPU VM. (CPU Utilization = 100%)
· A 4 vCPU VM needs to process.
· Even if the 2 single vCPU VMs finish processing, the 4 vCPU VM cannot use the CPU available, and while it's accumulating Ready Time, other single vCPU VMs are able to take advantage of the available Threads.
· Even if we end up in a situation where only a single vCPU is being used, the 4 vCPU VM cannot do any processing. (CPU Utilization = 25%)
As mentioned when we discussed time slicing, improvements have been made in the area of co-scheduling with each release of VMware. Among other things, the allowed lag between individual vCPUs being scheduled onto the physical CPUs has increased, allowing for greater flexibility in scheduling VMs with large numbers of vCPUs. Acceptable performance is seen from larger VMs.

Along with Ready Time, there is also a Co-Stop metric. Ready Time can be accumulated against any VM. Co-Stop is specific to VMs with 2 or more vCPUs and relates to the time "stopped" due to co-scheduling contention, e.g. one or more vCPUs have been allocated a physical CPU, but the VM is stopped waiting for its other vCPUs to be scheduled.
Imagine the bottom of a "ready" VM sliding across to a thread, and the top sliding across later as other VMs move off the Threads, so the VM is no longer rigid; it is more like an elastic band.
VMs and Resource Pools can be allocated Reservations, Shares and Limits and I'll be taking a look at these on Monday.
If you haven't already done so don't forget to sign up to get free access to our Resources, there are some great VMware white papers and on-demand webinars on there.
Phil Bell
Consultant

Friday, 14 October 2016

5 Top Performance and Capacity Concerns for VMware - Ready Time

As I mentioned on Wednesday there are 3 states which the VM can be in:



Threads – being processed and allocated to a thread.

Ready – in a ready state where they wish to process but aren’t able to.

Idle – where they exist but don’t need to be doing anything at this time.
In the diagram below you can see that work has moved over to the threads to be processed and there is some available headroom. Work that is waiting to be processed requires 2 vCPUs, so it is unable to fit, creating wasted space that we are unable to use at this time.



We need to remove a VM before we can put a 2 vCPU VM on to a thread and remain 100% busy.

In the meantime other VMs are coming along, and we now have a 4 vCPU VM accumulating Ready Time.

2 VMs move off, but the waiting 4 vCPU VM cannot move on as there are not enough threads available.


It has to wait and other work moves ahead of it to process.


Even when 3 threads are available it is still unable to process, and it will be 'queue jumped' by other VMs that require fewer vCPUs.


Hopefully that is a clear illustration of why it makes sense to reduce contention by having as few vCPUs as possible in each VM.
Ready Time impacts performance and needs to be monitored. On Monday I'll be dealing with Monitoring Memory.
Phil Bell
Consultant

Wednesday, 12 October 2016

5 Top Performance and Capacity Concerns for VMware - Ready Time

Imagine you are driving a car, and you are stationary, there could be several reasons for this.  You may be waiting to pick someone up, you may have stopped to take a phone call, or it might be that you have stopped at a red light.  The first two of these (pick up, phone) you have decided to stop the car to perform a task.  In the third instance the red light is stopping you doing something you want to do.  In fact you spend the whole time at the red light ready to move away as soon as the light turns to green.  That time is ready time.

When a VM wants to use the processor but is stopped from doing so, it accumulates ready time, and this has a direct impact on performance.
For any processing to happen all the vCPUs assigned to the VM must be running at the same time.  This means if you have a 4 vCPU VM, all 4 vCPUs need available cores or hyperthreads to run.  So the fewer vCPUs a VM has, the more likely it is to be able to get onto the processors.

To avoid Ready Time
You can reduce contention by having as few vCPUs as possible in each VM.  If you monitor CPU Threads, vCPUs and Ready Time you’ll be able to see if there is a correlation between increasing vCPU numbers and Ready Time in your systems.

Proportion of Time: 4 vCPU VM
Below is an example of a 4 vCPU VM, doing about 500 seconds' worth of real CPU time each hour and about 1,000 seconds' worth of Ready Time.



For every 1 second of processing the VM is waiting around 2 seconds to process, so it is spending almost twice as long waiting as it is processing. This is going to impact the performance experienced by the end user who is reliant on this VM.

Now let's compare that to the proportion of time spent processing on a 2 vCPU VM. The graph below shows a 2 vCPU VM doing the same amount of work, around 500 seconds' worth of real CPU time, and as you can see the Ready Time is significantly less.



There are 3 states which the VM can be in and we'll take a look at these on Friday.
Don't forget to book on to our VMware vSphere Capacity & Performance Essentials workshop starting on Dec 6 http://www.metron-athene.com/services/online-workshops/index.html
Phil Bell
Consultant

Monday, 8 June 2015

Top 5 Performance and Capacity Concerns for VMware - Time Slicing and Ready Time

The effect we saw between the OS and VMware, in my blog on Friday, is caused by time slicing.  

In a typical VMware host we have more vCPUs assigned to VMs than we do physical cores. The processing time of the cores has to be shared among the vCPUs. Cores are shared between vCPUs in time slices, 1 vCPU to 1 core at any point in time.



More vCPUs lead to more time slicing. The more vCPUs we have, the less time each can be on the core, and therefore the slower time passes for that VM.  To keep the VM in time, extra timer interrupts are sent in quick succession.  So time passes slowly and then very fast.
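A simplified model of that sharing, assuming the scheduler divides core time evenly among runnable vCPUs (real schedulers weight this by shares and reservations):

```python
def vcpu_time_share(cores, runnable_vcpus):
    """Even time slicing (simplified model): the fraction of real time
    each runnable vCPU can spend on a physical core. Capped at 1.0 when
    there are at least as many cores as runnable vCPUs."""
    return min(1.0, cores / runnable_vcpus)

# 8 runnable vCPUs sharing 4 cores: each vCPU sees a core only half the
# time, so without timer catch-up the guest clock would fall behind.
print(vcpu_time_share(4, 8))  # 0.5
```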


More time slicing equals less accurate data from the OS. 

Anything that doesn't relate to time, such as disk occupancy, should be OK to use.


Ready Time
Imagine you are driving a car, and you are stationary, there could be several reasons for this.  You may be waiting to pick someone up, you may have stopped to take a phone call, or it might be that you have stopped at a red light.  The first two of these (pick up, phone) you have decided to stop the car to perform a task.  In the third instance the red light is stopping you doing something you want to do.  In fact you spend the whole time at the red light ready to move away as soon as the light turns to green.  That time is ready time.
When a VM wants to use the processor but is stopped from doing so, it accumulates ready time, and this has a direct impact on performance.
For any processing to happen all the vCPUs assigned to the VM must be running at the same time.  This means if you have a 4 vCPU VM, all 4 vCPUs need available cores or hyperthreads to run.  So the fewer vCPUs a VM has, the more likely it is to be able to get onto the processors.

To avoid Ready Time
You can reduce contention by having as few vCPUs as possible in each VM.  If you monitor CPU Threads, vCPUs and Ready Time you’ll be able to see if there is a correlation between increasing vCPU numbers and Ready Time in your systems.

Proportion of Time: 4 vCPU VM

Below is an example of a 4 vCPU VM, doing about 500 seconds' worth of real CPU time each hour and about 1,000 seconds' worth of Ready Time.

For every 1 second of processing the VM is waiting around 2 seconds to process, so it is spending almost twice as long waiting as it is processing. This is going to impact the performance experienced by the end user who is reliant on this VM.
Now let's compare that to the proportion of time spent processing on a 2 vCPU VM. The graph below shows a 2 vCPU VM doing the same amount of work, around 500 seconds' worth of real CPU time, and as you can see the Ready Time is significantly less.

There are 3 states which the VM can be in:


Threads – being processed and allocated to a thread.
Ready – in a ready state where they wish to process but aren’t able to.
Idle – where they exist but don’t need to be doing anything at this time.

In the diagram below you can see that work has moved over to the threads to be processed and there is some available headroom. Work that is waiting to be processed requires 2 vCPUs, so it is unable to fit, creating wasted space that we are unable to use at this time.


We need to remove a VM before we can put a 2 vCPU VM on to a thread and remain 100% busy.


In the meantime other VMs are coming along, and we now have a 4 vCPU VM accumulating Ready Time.



2 VMs move off, but the waiting 4 vCPU VM cannot move on as there are not enough threads available.


It has to wait and other work moves ahead of it to process.




Even when 3 threads are available it is still unable to process, and it will be 'queue jumped' by other VMs that require fewer vCPUs.


Hopefully that is a clear illustration of why it makes sense to reduce contention by having as few vCPUs as possible in each VM.
Ready Time impacts performance and needs to be monitored.

On Wednesday I'll be looking at Memory performance, in the meantime don't forget to register for our 'Taking a trip down vSphere Memory Lane' webinar taking place on June 24th
http://www.metron-athene.com/services/webinars/index.html

Phil Bell
Consultant