
Monday, 24 October 2016

5 Top Performance and Capacity Concerns for VMware - Monitoring Memory

Memory still seems to be the item that prompts most upgrades, with VMs running out of memory before they run out of vCPU.

It’s not just a question of how much memory is being used; there are several different aspects to monitor. Some of the things you will need to consider are:

       Reservations

       Limits

       Ballooning

       Shared Pages

       Active Memory

       Memory Available for VMs
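The interplay between reservations and limits is worth spelling out, because a limit caps a VM below its configured size. The sketch below is a simplified model of that interplay (the function name, units, and logic are illustrative, not VMware's actual admission-control algorithm):

```python
def entitlement_range(configured_mb, reservation_mb=0, limit_mb=None):
    """Simplified model of the host memory a VM can occupy, in MB.

    The reservation guarantees a floor of host RAM; the effective
    ceiling is the lower of the configured size and any limit set.
    """
    ceiling = configured_mb if limit_mb is None else min(limit_mb, configured_mb)
    floor = min(reservation_mb, ceiling)
    return floor, ceiling

# A 4096 MB VM with a 1024 MB reservation and a 2048 MB limit can never
# consume more than 2048 MB of host RAM, which often surprises people.
print(entitlement_range(4096, reservation_mb=1024, limit_mb=2048))  # (1024, 2048)
```

A VM pushed above its limit makes up the difference through ballooning and swapping, which is why limits left over from old templates are a common cause of mystery slowdowns.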

VM Memory Occupancy

In terms of occupancy the sorts of things that you will want to look at are:

       Average memory overhead

       Average memory used by the VM (active memory)

       Average memory shared

       Average amount of host memory consumed by the VM

       Average memory granted to the VM
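Comparing active memory against the configured size is a common first pass at right-sizing. Here is a crude heuristic sketch; the 25% and 90% thresholds are my own illustrative choices, not VMware guidance, and you would want to base a real decision on a sustained observation window rather than a single sample:

```python
def right_size_hint(active_mb, configured_mb):
    """Crude right-sizing heuristic from active vs configured memory.

    Thresholds are illustrative only: persistently low active memory
    suggests over-provisioning, persistently high suggests the VM is
    using close to everything it was given.
    """
    ratio = active_mb / configured_mb
    if ratio < 0.25:
        return "candidate to shrink"
    if ratio > 0.90:
        return "candidate to grow"
    return "about right"

print(right_size_hint(500, 4096))   # candidate to shrink
print(right_size_hint(3900, 4096))  # candidate to grow
```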


In this instance the pink area is active memory, and we can note that the average amount of host memory consumed by this VM increases at certain points in the chart.
VM Memory Performance
It's useful to produce a performance graph for memory where you can compare:
       Average memory reclaimed
       Average memory swapped
       Memory limit
       Memory reservation
       Average amount of host memory consumed.
As illustrated below.


In this instance we can see that this particular VM had around 2.5 GB of memory ‘stolen’ from it by the balloon driver (vmmemctl). Swapping was occurring at the same time, and the combination could cause performance problems.
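Scanning collected samples for intervals where ballooning and swapping overlap is a quick way to find episodes like the one described above. A minimal sketch, assuming you have already extracted per-interval ballooned and swapped figures into dictionaries (the field names here are my own, not standard counter names):

```python
def pressure_intervals(samples):
    """Return the indexes of intervals where the balloon driver was
    inflated and swapping was happening at the same time, the
    combination most likely to hurt VM performance."""
    return [i for i, s in enumerate(samples)
            if s["ballooned_mb"] > 0 and s["swapped_mb"] > 0]

samples = [
    {"ballooned_mb": 0,    "swapped_mb": 0},
    {"ballooned_mb": 2560, "swapped_mb": 0},    # ballooning alone
    {"ballooned_mb": 2560, "swapped_mb": 300},  # ballooning plus swapping
]
print(pressure_intervals(samples))  # [2]
```

Ballooning on its own is the hypervisor working as designed; it is the overlap with swapping that deserves an alert.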
The next place to look at for memory issues is at the Cluster and I'll deal with this on Wednesday.
In the meantime don't forget to book your place on our VMware vSphere Capacity & Performance Essentials workshop taking place in December http://www.metron-athene.com/services/online-workshops/index.html
Phil Bell
Consultant

Monday, 5 September 2016

How to monitor CPU - Windows Server Capacity Management 101 (9 of 12)

As promised today we'll be looking at how to monitor CPU.

Thresholds

When dealing with thresholds there is no one-size-fits-all, but a good rule of thumb is 70% for a warning and 85% for an alarm. These can, and should, be tweaked once you have a better idea of the performance characteristics of your CPU.

Additionally, it is good to have thresholds in place for when a CPU is under-utilized, perhaps at 20% and 10%; these let you know which machines could be pushed harder.
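The rule-of-thumb thresholds above can be sketched as a simple classifier. The band names and the exact cut-offs are the starting points suggested in the text, to be tuned per machine:

```python
WARNING_PCT, ALARM_PCT = 70.0, 85.0  # rule-of-thumb upper thresholds
UNDER_PCT, IDLE_PCT = 20.0, 10.0     # under-utilization thresholds

def classify_cpu(utilization_pct):
    """Map a CPU utilization sample to an alerting band."""
    if utilization_pct >= ALARM_PCT:
        return "alarm"
    if utilization_pct >= WARNING_PCT:
        return "warning"
    if utilization_pct < IDLE_PCT:
        return "idle: could be pushed harder"
    if utilization_pct < UNDER_PCT:
        return "under-utilized"
    return "normal"

print(classify_cpu(90))  # alarm
print(classify_cpu(5))   # idle: could be pushed harder
```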

Trends

When setting up a trend, remember that the further a trend projects, the less reliable it becomes. A good rule of thumb is to trend over 3 months, as this gives a reasonably reliable result and still leaves you enough time to make a hardware change.
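A basic linear trend of the kind described here is just an ordinary least-squares fit over equally spaced samples, projected forward. A self-contained sketch (daily samples assumed; a capacity tool would do this for you):

```python
def linear_trend(values):
    """Ordinary least-squares slope and intercept over equally
    spaced samples (sample 0, 1, 2, ...)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def project(values, samples_ahead):
    """Extend the fitted line samples_ahead beyond the last sample."""
    slope, intercept = linear_trend(values)
    return intercept + slope * (len(values) - 1 + samples_ahead)

# Utilization rising 10 points per sample: one sample ahead of
# [10, 20, 30] the fitted line predicts 40.
print(project([10, 20, 30], 1))  # 40.0
```

The caveat from the text applies directly: the further `samples_ahead` reaches beyond the fitted window, the less the projection should be trusted.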

Reports

CPU Total Utilization Estd% - Report Example



Above is an example of estimated CPU core busy over a month for my computer, with a trend projected forward 1 month; you can quickly see that the trend line is going down. This kind of chart is very simple to create with a capacity management tool such as athene®.

On Wednesday I'll be dealing with Memory and how to monitor this. Don't forget to take a look at our workshops, there are some great ones coming up soon
http://www.metron-athene.com/services/online-workshops/index.html

Josh Worth
Consultant




Wednesday, 20 April 2016

What should we be monitoring? - Top 5 Key Capacity Management Concerns for UNIX/Linux (10 of 12)

Following on from my previous blog on Big Data: this is a relatively new technology, and knowledge around performance tuning is therefore immature.  Our instinct tells us to monitor the systems as a cluster, looking at how much CPU and memory is being used, with the local storage monitored both individually and as one aggregated pool.  Metrics such as I/O response times and file system capacity and usage are important, to name a few.
What are the challenges?

Big Data Capacity Challenges

So, with Big Data technology being relatively new and knowledge limited, our challenges are:

      Working with the business to predict usage - so we can produce accurate representations of future system and storage usage.  This is normally quite a challenge even for more established systems and applications, so we have to bear in mind that getting this information and validating it will not be easy.

      New technology - limited knowledge around performance tuning

      Very dynamic environment - making it a challenge to configure, monitor, and track service changes so as to provide effective Capacity Management for Big Data.

      Multiple tuning options - that can greatly affect the utilization/performance of systems
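The cluster-level view described above amounts to rolling per-node samples up into cluster totals. A minimal sketch of that aggregation, with made-up field names and figures (a real collector would feed this from each node's monitoring agent):

```python
def cluster_rollup(nodes):
    """Aggregate per-node samples into cluster-level utilization.

    CPU% is weighted by core count so big and small nodes contribute
    proportionally; memory and disk are simple sums, matching the idea
    of treating local storage as one aggregated pool.
    """
    cores = sum(n["cores"] for n in nodes)
    cpu_pct = sum(n["cpu_pct"] * n["cores"] for n in nodes) / cores
    mem_pct = 100.0 * sum(n["mem_used_gb"] for n in nodes) \
                    / sum(n["mem_total_gb"] for n in nodes)
    disk_pct = 100.0 * sum(n["disk_used_gb"] for n in nodes) \
                     / sum(n["disk_total_gb"] for n in nodes)
    return {"cpu_pct": cpu_pct, "mem_pct": mem_pct, "disk_pct": disk_pct}

nodes = [
    {"cores": 16, "cpu_pct": 50.0, "mem_used_gb": 96, "mem_total_gb": 128,
     "disk_used_gb": 6000, "disk_total_gb": 10000},
    {"cores": 8, "cpu_pct": 80.0, "mem_used_gb": 48, "mem_total_gb": 64,
     "disk_used_gb": 3000, "disk_total_gb": 5000},
]
print(cluster_rollup(nodes))  # cpu 60.0, mem 75.0, disk 60.0
```

Per-node figures still matter alongside the rollup: a healthy cluster average can hide one node whose local disk is nearly full.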
What key capacity metrics should you be monitoring for your Big Data environment?
Find out in my next blog and ask us about our Unix Capacity & Performance Essentials Workshop.
http://www.metron-athene.com/services/online-workshops/index.html
Jamie Baker
Principal Consultant