
Friday, 11 November 2016

Idle VMs - Why should we care? (2 of 3)

In my previous blog I mentioned the term VM sprawl, and this is where idle VMs are likely to factor in.

Often VMs are provisioned to support short-term projects, for development/test processes, or for applications which have since been decommissioned. Now idle, they're left alone, not bothering anyone and therefore not on the Capacity and Performance team's radar.

Which brings us back to the question: Idle VMs - why should we care?
We should care for a number of reasons, but let's start with the impact on CPU utilization.

When VMs are powered on and running, timer interrupts have to be delivered from the host CPU to the VM.  The total number of timer interrupts being delivered depends on the following factors:

·       VMs running symmetric multiprocessing (SMP) hardware abstraction layers (HALs)/kernels require more timer interrupts than those running uniprocessor HALs/kernels.

·       How many virtual CPUs (vCPUs) the VM has.

Delivering many virtual timer interrupts can negatively impact the performance of the VM and can also increase host CPU consumption.  This can be mitigated, however, by reducing the number of vCPUs, which reduces both the timer interrupts and the amount of co-scheduling overhead (check CPU Ready Time).
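
To put rough numbers on this, here's a minimal Python sketch (not a VMware tool) that estimates the aggregate timer-interrupt load a host has to service for a set of powered-on VMs. The per-vCPU rates are assumptions for illustration only; real rates depend on the guest kernel/HAL.

# Assumed per-vCPU timer rates (Hz) - illustrative only, not measured values.
ASSUMED_TIMER_HZ = {"low_hz_kernel": 100, "high_hz_kernel": 1000}

def timer_interrupts_per_second(vms):
    """vms: list of (name, vcpu_count, kernel_profile) tuples."""
    total = 0
    for name, vcpus, profile in vms:
        rate = vcpus * ASSUMED_TIMER_HZ[profile]
        print(f"{name}: {vcpus} vCPU(s) x {ASSUMED_TIMER_HZ[profile]} Hz = {rate} interrupts/s")
        total += rate
    return total

# Even idle VMs contribute interrupts simply by being powered on.
idle_vms = [("dev-vm01", 4, "high_hz_kernel"), ("test-vm02", 2, "low_hz_kernel")]
print("Total virtual timer interrupts/s for these idle VMs:",
      timer_interrupts_per_second(idle_vms))

Halving the vCPU count of the first VM in this example would halve its contribution, which is exactly why right-sizing idle VMs helps.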

Then there's the memory management of idle VMs.  Each powered-on VM incurs memory overhead.  This overhead includes space reserved for the VM frame buffer and for various virtualization data structures, such as shadow page tables (when using software virtualization) or nested page tables (when using hardware virtualization).  The amount depends on the number of vCPUs and the configured memory granted to the VM.
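
If you want to see this overhead for yourself, below is a hedged sketch using the open-source pyVmomi library. The vCenter host name and credentials are placeholders, and the quickStats field names are taken from the standard vSphere API definition rather than from anything specific to this post, so verify them against your own environment before relying on them.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: substitute your own vCenter/ESXi host and read-only credentials.
si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.runtime.powerState == "poweredOn":
            hw = vm.config.hardware
            overhead_mb = vm.summary.quickStats.consumedOverheadMemory  # MB
            print(f"{vm.name}: {hw.numCPU} vCPU, {hw.memoryMB} MB configured, "
                  f"{overhead_mb} MB overhead consumed")
    view.DestroyView()
finally:
    Disconnect(si)

Even an idle VM shows up in a listing like this, because the overhead is paid simply for being powered on.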

We'll have a look at a few more reasons to care on Monday. In the meantime, why not complete our Capacity Management Maturity Survey and find out where you fall on the maturity scale: http://www.metron-athene.com/_capacity-management-maturity-survey/survey.asp
Jamie Baker
Principal Consultant

Monday, 24 October 2016

5 Top Performance and Capacity Concerns for VMware - Monitoring Memory

Memory still seems to be the item that prompts most upgrades, with VMs running out of memory before running out of vCPU.

It's not just a question of how much memory is being used; there are different ways of monitoring it. Some of the things you are going to need to consider are listed below, followed by a short sketch showing how those counters can be pulled programmatically:

       Reservations

       Limits

       Ballooning

       Shared Pages

       Active Memory

       Memory Available for VMs
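
As a starting point, here's a short, hedged pyVmomi sketch that gathers those counters for a single VM. It assumes you already have a pyVmomi connection and a vim.VirtualMachine object in hand; the field names come from the standard VirtualMachineQuickStats and memory allocation definitions, so check them against your vSphere version.

from pyVmomi import vim

def memory_snapshot(vm: vim.VirtualMachine) -> dict:
    """Return the key memory counters (MB) for one powered-on VM."""
    qs = vm.summary.quickStats
    alloc = vm.config.memoryAllocation  # reservation/limit in MB (-1 = unlimited)
    return {
        "reservation_mb": alloc.reservation,
        "limit_mb": alloc.limit,
        "ballooned_mb": qs.balloonedMemory,   # reclaimed by the balloon driver
        "swapped_mb": qs.swappedMemory,       # swapped to the host swap file
        "shared_mb": qs.sharedMemory,         # saved via transparent page sharing
        "active_mb": qs.guestMemoryUsage,     # "active" memory estimate
        "consumed_mb": qs.hostMemoryUsage,    # host memory consumed by the VM
        "granted_mb": vm.config.hardware.memoryMB,
    }

Collecting these per VM, per interval, gives you the raw material for the occupancy and performance views that follow.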

VM Memory Occupancy

In terms of occupancy, the sorts of things you will want to look at are listed below, with a short averaging sketch after the chart:

       Average memory overhead

       Average memory used by the VM (active memory)

       Average memory shared

       Average amount of host memory consumed by the VM

       Average memory granted to the VM


In this instance we can see that the pink area is active memory and we can note that the average amount of host memory used by this VM increases at certain points in the chart.
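
To put averages behind the items listed above, here's a minimal, library-free sketch. The sample values and field names are placeholders for whatever your monitoring tool exports, not a VMware API.

# Placeholder interval samples (MB) - substitute the data your monitoring tool exports.
samples = [
    {"overhead": 110, "active": 820, "shared": 300, "consumed": 2100, "granted": 4096},
    {"overhead": 112, "active": 910, "shared": 290, "consumed": 2400, "granted": 4096},
    {"overhead": 111, "active": 760, "shared": 310, "consumed": 2350, "granted": 4096},
]

def averages(rows):
    """Average each counter across the sampled interval."""
    return {key: sum(row[key] for row in rows) / len(rows) for key in rows[0]}

avg = averages(samples)
for key, value in avg.items():
    print(f"average {key}: {value:.0f} MB")
print(f"active as a share of granted: {avg['active'] / avg['granted']:.1%}")

A low active-to-granted ratio combined with high consumed memory is the sort of pattern the chart makes visible at a glance.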
VM Memory Performance
It's useful to produce a performance graph for memory where you can compare:
       Average memory reclaimed
       Average memory swapped
       Memory limit
       Memory reservation
       Average amount of host memory consumed.
As illustrated below.


In this instance we can see that this particular VM had around 2.5 GB of memory ‘stolen’ from it by the balloon driver (vmmemctl); at the same time swapping was occurring, and this combination could cause performance problems.
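A simple automated check can catch this pattern. The sketch below is illustrative only and uses the same placeholder field names as the earlier snapshot dictionary: it flags VMs where ballooning and swapping occur together, and where a memory limit sits below the granted memory (a common cause of ballooning).

def memory_pressure_warnings(name, stats):
    """Flag the balloon-plus-swap pattern and limits set below granted memory."""
    warnings = []
    if stats.get("ballooned_mb", 0) > 0 and stats.get("swapped_mb", 0) > 0:
        warnings.append(f"{name}: ballooning ({stats['ballooned_mb']} MB) and "
                        f"swapping ({stats['swapped_mb']} MB) at the same time")
    limit = stats.get("limit_mb", -1)
    if limit != -1 and limit < stats.get("granted_mb", 0):
        warnings.append(f"{name}: memory limit ({limit} MB) is below granted "
                        f"memory ({stats['granted_mb']} MB)")
    return warnings

# Example roughly matching the chart described above (values are illustrative).
print(memory_pressure_warnings("app-vm03", {"ballooned_mb": 2560, "swapped_mb": 512,
                                            "limit_mb": 4096, "granted_mb": 6144}))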
The next place to look for memory issues is at the cluster level, and I'll deal with this on Wednesday.
In the meantime, don't forget to book your place on our VMware vSphere Capacity & Performance Essentials workshop taking place in December: http://www.metron-athene.com/services/online-workshops/index.html
Phil Bell
Consultant