Friday, 30 November 2012

Top 5 Don’ts for VMware

As promised, today I'll be dealing with the Top 5 Don'ts for VMware.


DON’T


1) Overcommit CPU (unless overall ESX host usage is below 50%). Why? I'm sure most of you will have heard of CPU Ready Time. CPU Ready Time is the time (in milliseconds) that a guest's vCPUs spend waiting to run on the ESX host's physical CPUs. This wait time occurs because of the co-scheduling constraints of guest operating systems and the higher CPU scheduling demand created by overcommitting guest vCPUs against pCPUs. That said, if the ESX hosts in your environment show low average CPU usage, overcommitting vCPUs to pCPUs is unlikely to cause any significant rise in CPU Ready Time or any impact on guest performance. A quick way to sanity-check your ready time figures is sketched below.
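
As a concrete illustration, here is a minimal Python sketch of the standard conversion from a CPU Ready summation value (milliseconds) to a percentage. The 20-second sample interval matches vCenter's real-time performance charts, and the 5%-per-vCPU alert threshold is a common rule of thumb rather than an official VMware limit.

```python
# Convert a vCenter "CPU Ready" summation (ms) into a percentage per vCPU.
# Assumes the real-time chart's 20-second sample interval; adjust interval_s
# for other rollup levels.

def cpu_ready_percent(ready_ms: float, interval_s: int = 20, vcpus: int = 1) -> float:
    """Percentage of the sample interval each vCPU spent ready but not running."""
    return (ready_ms / (interval_s * 1000.0 * vcpus)) * 100.0

if __name__ == "__main__":
    ready = cpu_ready_percent(ready_ms=1600, interval_s=20, vcpus=2)
    print(f"CPU Ready: {ready:.1f}% per vCPU")  # -> 4.0% per vCPU
    if ready > 5.0:  # rule-of-thumb threshold, not a hard VMware limit
        print("Sustained ready time - consider reducing vCPU overcommit")
```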


2) Overcommit virtual memory to the point of heavy memory reclamation on the ESX host. Why? Memory overcommitment is supported within your vSphere environment by a combination of Transparent Page Sharing, memory reclamation (ballooning and memory compression) and vSwp files (swapping). When memory reclamation takes place it incurs memory management overhead and, if DRS is running in automated mode, an increase in the number of vMotion migrations. Performance can degrade at this point because of the extra overhead required to manage these operations. The sketch below shows one way to flag hosts that are both overcommitted and actively reclaiming.
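
To make the "overcommitted and reclaiming" condition concrete, here is a small sketch. The counter names mirror vCenter's memory counters (granted, ballooned, swapped), but the host dictionary is hypothetical sample data; in practice you would feed in values from your monitoring tool.

```python
# Flag hosts that are overcommitted on memory AND actively reclaiming it.
# Sample values below are illustrative, not from a real environment.

def overcommit_ratio(guest_alloc_mb: float, host_phys_mb: float) -> float:
    """Total configured guest memory versus physical host memory."""
    return guest_alloc_mb / host_phys_mb

def reclamation_active(ballooned_mb: float, swapped_mb: float) -> bool:
    # Sustained ballooning or swapping means the host is reclaiming memory.
    return ballooned_mb > 0 or swapped_mb > 0

host = {"phys_mb": 131072, "guest_alloc_mb": 196608,
        "ballooned_mb": 4096, "swapped_mb": 0}

ratio = overcommit_ratio(host["guest_alloc_mb"], host["phys_mb"])
print(f"Overcommit ratio: {ratio:.2f}:1")
if ratio > 1.0 and reclamation_active(host["ballooned_mb"], host["swapped_mb"]):
    print("Overcommitted and reclaiming - expect a performance impact")
```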


3) Set CPU or memory limits (unless absolutely necessary). Why? Do you really need to apply a usage restriction to a guest, or to a set of guests in a Resource Pool? By limiting usage, you may unwittingly restrict the performance of a guest. In addition, maintaining these limits incurs overhead, especially for memory, where limits are enforced by memory reclamation. A better approach is to monitor proactively to identify usage patterns and peaks, then adjust the amount of CPU (MHz) and memory (MB) allocated to your guest virtual machine; where necessary, guarantee resources by applying reservations. A simple right-sizing sketch follows this item.
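
As a rough illustration of sizing from monitoring data rather than capping with limits, the sketch below sizes an allocation to the 95th percentile of observed demand plus headroom, and reserves the typical baseline. The sample data, the 20% headroom figure and the percentile choices are all assumptions to adjust for your own environment.

```python
# Right-size a guest from observed CPU demand instead of imposing limits.
# Sample data and the 20% headroom figure are illustrative assumptions.

def percentile(samples: list[float], pct: float) -> float:
    ordered = sorted(samples)
    idx = min(int(len(ordered) * pct / 100), len(ordered) - 1)
    return ordered[idx]

cpu_mhz_samples = [800.0, 950.0, 1100.0, 1400.0, 2100.0, 1250.0, 900.0, 1800.0]

p95 = percentile(cpu_mhz_samples, 95)          # near-peak demand
allocation = p95 * 1.2                         # headroom over the observed peak
reservation = percentile(cpu_mhz_samples, 50)  # guarantee the typical baseline

print(f"Suggested allocation: {allocation:.0f} MHz, reservation: {reservation:.0f} MHz")
```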


4) Use vSMP virtual machines when running single-threaded workloads. Why? vSMP virtual machines have more than one vCPU assigned. A single-threaded workload running on your guest cannot take advantage of those “extra” execution threads, so the CPU cycles spent co-scheduling the idle vCPUs are simply wasted. A heuristic for spotting this pattern is sketched below.
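
Here is a hedged heuristic for spotting that pattern: if one vCPU carries nearly all the load while its siblings sit idle, the guest is probably running a single hot thread. The per-vCPU usage figures would come from esxtop or vCenter; the values and thresholds below are illustrative.

```python
# Heuristic: one busy vCPU plus idle siblings suggests a single-threaded
# workload that would be better served by a 1-vCPU guest. Thresholds are
# illustrative assumptions, not VMware guidance.

def looks_single_threaded(per_vcpu_usage: list[float]) -> bool:
    busiest = max(per_vcpu_usage)
    rest = sum(per_vcpu_usage) - busiest
    avg_rest = rest / max(len(per_vcpu_usage) - 1, 1)
    return busiest > 70.0 and avg_rest < 10.0

print(looks_single_threaded([85.0, 3.0, 2.5, 4.0]))     # True  -> consider 1 vCPU
print(looks_single_threaded([40.0, 35.0, 42.0, 38.0]))  # False -> vSMP is being used
```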


5) Use 64-bit operating systems unless you are running 64-bit applications. Why? 64-bit virtual machines carry more memory overhead than 32-bit ones. Compare benchmarks of the 32-bit and 64-bit versions of your applications to determine whether the 64-bit version is actually necessary.


If you want more information on performance and capacity management of VMware, visit our website and sign up to be part of our community. Community membership gives you free access to our library of white papers and podcasts: http://www.metron-athene.com/_downloads/index.html. You can also visit our capacity management channel on YouTube: http://www.youtube.com/user/35Metron?blend=1&ob=0

I'm at CMG Las Vegas next week and hope to meet up with some of you there.

Jamie Baker
Principal Consultant





Wednesday, 28 November 2012

VMware - Top 5 Do's

I’ve put together a quick list of the Top 5 Do’s and Don’ts for VMware, which I hope you’ll find useful.

Today I’m starting with the Top 5 Do’s.

DO


1) Select the correct operating system when creating your virtual machine. Why? The operating system type determines the optimal monitor mode and the optimal virtual devices to use, such as the SCSI controller and the network adapter. It also determines the correct version of VMware Tools to install.


2) Install VMware Tools on your virtual machine. Why? VMware Tools installs the balloon driver (vmmemctl.sys), which is used to reclaim guest virtual memory when an ESX host comes under memory pressure, along with optimized device drivers. It can also enable guest-to-host clock synchronization to prevent guest clock drift (Windows only). A quick way to audit Tools status across your estate is sketched below.
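
If you want to audit this across an estate, here is a minimal pyVmomi sketch (assuming the pyvmomi package and a reachable vCenter; the hostname and credentials are placeholders) that lists VMs where Tools is not running.

```python
# List VMs whose VMware Tools are not running. Placeholder credentials;
# certificate verification is disabled for lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.guest.toolsRunningStatus != "guestToolsRunning":
        print(f"{vm.name}: Tools not running (status: {vm.guest.toolsStatus})")

view.Destroy()
Disconnect(si)
```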


3) Keep vSwp files in their default location (with the VM disk files). Why? vSwp files are used to support overcommitted guest virtual memory on an ESX host. When a virtual machine is powered on, its vSwp file is created with a size equal to the virtual machine's configured memory less any memory reservation. Within a clustered environment the files should be located on the shared VMFS datastore (FC SAN or iSCSI/NAS storage) because of vMotion and the need to migrate VM worlds between hosts: if vSwp files were stored on a local ESX datastore, every time a guest was vMotioned to another host the corresponding vSwp file would have to be copied to that host as well, which can impact performance. A small sketch of the sizing arithmetic follows.
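
The sizing arithmetic is simple enough to sketch; this assumes the commonly documented rule that the swap file covers whatever portion of configured memory is not protected by a reservation.

```python
# Expected .vswp size: configured memory minus any memory reservation.
# A VM with a full reservation needs no swap file at all.

def vswp_size_mb(configured_mem_mb: int, mem_reservation_mb: int = 0) -> int:
    return max(configured_mem_mb - mem_reservation_mb, 0)

print(vswp_size_mb(8192))        # 8192 MB .vswp alongside the VM's disks
print(vswp_size_mb(8192, 8192))  # 0 - a full reservation eliminates the file
```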


4) Disable any unused guest CD or USB devices. Why? CPU cycles are used to maintain these connections, so leaving them attached effectively wastes resources.


5) Select a guest operating system that uses fewer timer “ticks”. Why? To keep time, most operating systems count periodic timer interrupts, or “ticks”. Counting ticks is a real-time problem for a guest: ticks may not always be delivered on time, and if a tick is lost, guest time falls behind. When that happens the backlogged ticks are delivered faster so the system can catch up. You can mitigate these issues by choosing guest operating systems that use fewer ticks, such as Windows (66Hz to 100Hz) rather than Linux (commonly 250Hz). It is also recommended to use NTP for guest-to-host clock synchronization; see VMware KB 1006427.


On Friday I’ll go through the Top 5 Don’ts.
If you want more detailed information on my Top 10 performance metrics for identifying bottlenecks in VMware, take a look at my video: http://www.youtube.com/watch?v=Gf90Kn_ZVdc&feature=plcp

Jamie Baker
Principal Consultant

Friday, 23 November 2012

Getting your VMware memory allocation wrong can cost you more than just money

VMware have made some interesting changes with regard to their vRAM licensing model. The previous model enforced RAM restrictions and limitations on users of vSphere 5.

Now the previous vRAM licensing limits have gone and VMware have returned to a per-CPU licensing charge for the product.

So, following on from VMware’s U-turn on vRAM licensing, does this mean that memory reporting and allocation have become less important?
No; in fact they’re as important as they’ve ever been. Faster CPUs and better information around CPU fragmentation have shifted focus away from CPU performance and onto virtual memory allocation and performance.

Therefore, getting the most out of your VMware environment is a prerequisite in these cash-strapped days. Getting your memory allocation wrong can cost you more than just money: it can affect service performance, and the subsequent knock-on effects can significantly impact the whole enterprise.
From my experience, the common questions around virtual memory are:

·    What’s the difference between Active, Consumed and Granted memory within a VMware environment?

·    How much virtual memory should you allocate to a virtual machine and how do you get it just right?

·    What are the benefits and disadvantages of using Memory Limits and Resource Pools?

I’ll be answering these questions and many more in my presentation at CMG, Las Vegas December 3 -7.
I plan to take you on what I hope will be a memorable journey.

Explaining in detail how virtual memory is used and why VMware supports memory over-allocation, I’ll help you understand how to identify your over-provisioned VMs, which memory metrics to monitor and how to interpret them.
Finally, I’ll also be providing you with some best practice guidelines on Virtual Memory and some interesting information on using Memory Limits within your VMware Environment.

If you’re going to CMG, Las Vegas, make sure you register for this session; for those of you who’ll miss it, I’ll be writing it up as a paper and blog series on my return.

Jamie Baker
Principal Consultant

Friday, 16 November 2012

5 APM and Capacity Planning Imperatives for a Virtualized World


Having been involved in Capacity Planning for the better part of two decades, I've watched the environments we manage become more and more complicated even as companies decide to devote fewer and fewer staff to such an important function.

Back in the 90s, we'd frequently make decisions based on utilization of servers and mainframes and would upgrade them when we hit certain thresholds.  We spent no time whatsoever worrying about how optimized the applications were and how well the infrastructure was planned.  Most services ran on the mainframe and those that didn't were very simple client-server applications that were relatively easy to manage.

Today's data center is much different and much more complex.  Virtualized applications, centralized storage, and Cloud Computing make the task of Capacity Planning quite complex as cost savings can be realized when physical resources don't have a lot of excess headroom.  That means, however, that we need the skills and tools that allow us to understand applications from the end user to the data center and know where to optimize those applications in order to get the most bang for the buck.
 
IT operations and capacity planners now must understand and optimize their applications and infrastructure from the end user to the data center.

Metron and Correlsense recognize that Capacity Planning and application performance management (APM) are both key functions that are vital to the smooth operation of the modern data center and these functions must work together to optimize applications and services.  Metron's athene® and Correlsense's Sharepath integrate to bring together important APM and Capacity Planning data in one centralized location for the use of many different groups in the data center.

At our joint webinar on November 20 we'll be discussing what you need to know about capacity management when operating in both physical and virtual environments, how performance monitoring in virtual environments relates to your capacity management goals, and what's unique about capacity and performance management for virtualized applications.

Why not join us? Register now: http://www.metron-athene.com/training/webinars/correlsense.html

Rich Fronheiser
SVP, Strategic Marketing


Tuesday, 6 November 2012

Business Transaction Monitoring - Every Transaction Counts


Modernizing a data center, implementing a private cloud, moving to a public cloud or just managing the daily routine of rolling out new applications are initiatives that can have great impact not only on the IT environment, but on end users and the company’s bottom line.
The dynamic nature of change—and its increasing frequency—make it a significant issue in IT today.

Managing change from an application performance perspective is a key factor for IT success. Likewise, in today’s dynamic environment it is the basis for assuring that end-users are not negatively impacted by critical changes and that service level agreements (SLAs) continue to be met.
Most existing IT management tools do well managing a steady state in environments where today’s system configuration and requirements haven’t changed from yesterday, and will remain the same tomorrow.

The problem is that you most need help from your tools when change is occurring. For example, the only way to diagnose a service level degradation is to find out what has changed and use that information to get to the source of the problem.
Change efforts are always started with positive goals and the best intentions—for example, modernizing a data center to lower costs or to roll out a new application with the aim of increasing revenue.

Your current tools will ensure that your change efforts don’t completely fail: the data center will get migrated, or the new application will go online.

The nightmare happens when performance issues arise in a new production environment. Legacy applications in a new data center may work, but perhaps they’re not performing as well as they did in the previous environment and they aren’t meeting their SLAs. So, the change efforts that started with the best intentions become exercises in finger-pointing and blamestorming.

IT operations and capacity planners of today must understand and optimize their applications and infrastructure from the end user to the data center.
We’re running a webinar that deals with this topic and looks at how end-to-end transaction monitoring can provide significant benefits to the business by revealing the volume, flow and speed of business transactions through your IT services, as well as showing you how the same data can be used to improve the accuracy and presentation of your planning exercises.


Rich Fronheiser
SVP, Strategic Marketing