
Wednesday, 19 July 2017

Understanding VMware - Calculating headroom in VMs (9 of 10)

So, as I mentioned before, the question I get asked most often when discussing VMware capacity is “How many more VMs can I fit in this cluster?”

Which is a bit like asking how many balls, drawn from a variety of sports, it takes to fill an Olympic swimming pool. Unfortunately “It depends” is not an acceptable answer for a CIO.

The business wants a number, so as a business-focused IT department an answer must be given. The key is that it’s OK to estimate. Anybody who’s compared the average business forecast to what eventually happens in reality knows the business is fine with estimates.

So how do we figure out the number to tell the business?
If we calculate the size of our average VM and the size of the cluster, then divide one by the other, we get the total number of VMs the cluster can hold. Subtract the current number of VMs and that’s our answer, right?
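As a purely illustrative example: a cluster with 1,024 GB of memory and an average VM of 8 GB gives room for 128 VMs in total; with 100 already running, that leaves headroom for 28 more.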

Sounds simple, except that we need to define the size of our cluster. Are we allowing for one or more hosts to fail? Can we identify the size of the largest host(s)?

We also need to decide which metrics we are going to size on. Do you want to size on vCPU-to-core ratio, on MHz of CPU and MB of memory, or on some other limitation?

Can you calculate the size of your average VM at every point during the day, and pick the peak or a percentile?
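If you go the percentile route, here’s a minimal sketch of the idea in Python. The file layout, column names, and the choice of the 95th percentile are my assumptions, not from any particular tool:

```python
# Sketch: derive an "average VM" sizing point from interval data,
# using a percentile rather than the absolute peak.
import pandas as pd

# Assumed layout: one row per VM per sample interval.
samples = pd.read_csv("vm_samples.csv")

# Average VM demand at each point during the day...
avg_per_interval = samples.groupby("timestamp")[["cpu_mhz", "mem_mb"]].mean()

# ...then size on the 95th percentile to avoid one-off blips.
print(avg_per_interval.quantile(0.95))
```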

Or would you rather agree on an average size for Small, Medium, and Large VMs, count how many of each you have currently, and extrapolate using the existing ratios?

You have to be able to answer these questions before you can start to do the calculations.

Clearly you need data to work with for this. You can manually read the information out of the vSphere client and note it down, but I’d suggest you find a tool to automate the data collection.

You’ll need to review the data and make sure it’s a representative period to be using for the exercise, e.g. not during Windows updates and a reboot of every VM!

You should also try to include known projects. You might have 1,000 VMs currently, but if there are 250 planned for implementation in the next six months you’ll want to take them into account.


Here’s an example of a good peak (circled).

[Chart: daily activity over several days, with the chosen peak circled]

The actual peak is a blip that we don’t want to size on, but the circled peak is a nice clean example that’s in line with other days.
Given the size of the cluster in MB of memory and MHz of CPU, the number of current VMs, the size of an average VM, and the size of the largest host, I put together a spreadsheet.

There’s a calculation that takes the size of the largest host off the size of the cluster, then takes 90% of the result. It then calculates how many average VMs will fit, and the space available in average VMs, for both memory and CPU. The smaller of the two values is displayed, along with either Memory or CPU as the “Bound By” metric.

Conditional formatting on the cell displaying the number of available VMs sets a Red, Amber, Green (RAG) status.
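Here’s a minimal sketch of that spreadsheet logic in Python. Every input figure is an example, and the RAG thresholds are my assumptions, not values from the original spreadsheet:

```python
# Sketch of the headroom calculation: survive the loss of the largest
# host, use only 90% of what remains, and report the binding resource.

def headroom(cluster_mhz, cluster_mb, largest_host_mhz, largest_host_mb,
             avg_vm_mhz, avg_vm_mb, current_vms):
    # Take the largest host off the cluster, then 90% of the result.
    usable_mhz = (cluster_mhz - largest_host_mhz) * 0.9
    usable_mb = (cluster_mb - largest_host_mb) * 0.9

    # How many average VMs fit on each resource.
    fit_cpu = int(usable_mhz // avg_vm_mhz)
    fit_mem = int(usable_mb // avg_vm_mb)

    # The tighter resource is the "Bound By" metric.
    total_fit = min(fit_cpu, fit_mem)
    bound_by = "CPU" if fit_cpu <= fit_mem else "Memory"
    return total_fit - current_vms, bound_by

available, bound_by = headroom(
    cluster_mhz=220_000, cluster_mb=1_572_864,
    largest_host_mhz=55_000, largest_host_mb=393_216,
    avg_vm_mhz=1_200, avg_vm_mb=8_192, current_vms=90)

# Illustrative RAG thresholds, mirroring the conditional formatting.
status = "Green" if available > 20 else "Amber" if available > 5 else "Red"
print(f"{available} average VMs available, bound by {bound_by} ({status})")
```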


By including a sheet that contains the number of VMs needed for future projects, I calculated a second value that takes those into account.
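Continuing the sketch above, the second value is just the same headroom with the planned VMs subtracted (the count here is illustrative):

```python
planned_vms = 25  # taken from the future-projects sheet
print(f"VMs available including known projects: {available - planned_vms}")
```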



Exporting some of the values I calculate on a regular basis enables me to trend, over time, the number of VMs available in the cluster, still taking into account the largest host failing and capping usable capacity at 90% of what remains.
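One simple way to build that trend, assuming the sketch above, is to append each run’s result to a file and chart it later (the file name and fields are mine):

```python
import csv
from datetime import date

# Append today's headroom figure so it can be trended over time.
with open("cluster_headroom_trend.csv", "a", newline="") as f:
    csv.writer(f).writerow([date.today().isoformat(), available, bound_by])
```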

In this case, activity was actually falling over time, and as such the number of VMs available in the cluster was increasing in terms of CPU capacity.

On Friday I’ll do a round-up of my series, and I hope to see some of you at my webinar today.

Phil Bell
Consultant




Thursday, 14 July 2016

VMware, Virtual Center Headroom (17 of 17) Capacity Management, Telling the Story

Today I’ll show you one final report on VMware, which looks at headroom available in the Virtual Center.

In the example below we’re showing CPU usage. The average CPU usage is illustrated by the green bars, the light blue area represents the amount of CPU available to the VMs, and the dark blue line is the total CPU power available.

VMware – Virtual Center Headroom
We have aggregated all the hosts within the cluster to see this information.
We can see from the green area at the bottom how much headroom we have up to the dark blue line at the top, although in this case we should compare against the turquoise area, as this is the amount of CPU actually available to the VMs.
The difference between the dark blue line and the turquoise area is the overhead taken by the VMkernel, which has to be taken into consideration.
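To make the reading concrete, here’s the comparison in miniature; every number below is invented for illustration:

```python
total_mhz = 150_000                       # dark blue line: total CPU power
vmkernel_overhead_mhz = 9_000             # assumed VMkernel overhead
available_to_vms_mhz = total_mhz - vmkernel_overhead_mhz  # turquoise area
avg_used_mhz = 60_000                     # green bars: average CPU usage

# Headroom is measured against the CPU available to VMs, not the raw total.
print(f"Headroom: {available_to_vms_mhz - avg_used_mhz} MHz")
```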
 
Summary

To summarize my blog series, when reporting:

        Stick to the facts

        Elevator talk

        Show as much information as needs to be shown

        Display the information appropriate for the audience

        Talk the language of the audience

….Tell the Story
Hope you’ve enjoyed the series; if you have any questions, feel free to ask. If you’re interested in VMware Capacity Management, don’t forget to book onto our workshop: http://www.metron-athene.com/services/online-workshops/index.html#vmwarevsphere
Charles Johnson
Principal Consultant


Tuesday, 12 July 2016

VMware Reports (16 of 17) Capacity Management, Telling the Story

Let’s take a look at some examples of VMware reports.

The first report below looks at the CPU usage of clusters in MHz. It’s a simple chart, which makes it very easy to understand.
 
VMware – CPU Usage all Clusters

You can immediately see who the biggest user of CPU is: Core site 01.
 
The next example is a trend report on VMware resource pool memory usage.
The light blue indicates the amount of memory reserved, and the dark blue line indicates the amount of memory used within that reservation. This information is then trended forward, allowing you to see the point in time at which the required memory will exceed the reservation.
 
VMware – Resource Pool Memory Usage Trend
 
A trend report like this is useful as an early warning system: you know when problems are likely to ensue and can act to resolve them before they become an issue.
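A hedged sketch of that early-warning idea, fitting a straight line to recent usage and projecting when it crosses the reservation (all data here is invented):

```python
import numpy as np

days = np.arange(30)                                     # last 30 days
used_gb = 40 + 0.5 * days + np.random.normal(0, 1, 30)   # rising memory use
reservation_gb = 64.0

# Fit a linear trend and estimate the crossing point.
slope, intercept = np.polyfit(days, used_gb, 1)
if slope > 0:
    crossing_day = (reservation_gb - intercept) / slope
    print(f"Reservation exceeded in roughly {crossing_day - days[-1]:.0f} days")
```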

We need to keep ahead of the game, and setting up simple but effective reports, produced automatically, will help you do this and report back to the business on its requirements well in advance.

On Thursday I’ll show you one final report on VMware, which looks at headroom available in the Virtual Center. In the meantime, take a look at our VMware vSphere Capacity and Performance Essentials workshop.
http://www.metron-athene.com/services/online-workshops/index.html#vmwarevsphere

Charles Johnson
Principal Consultant
