Friday 31 August 2012

vSphere vs Hyper-V Performance showdown - memory management features


I’ll continue the background on vSphere and Hyper-V by looking at some of the memory management features available in each.
 
vSphere memory management features
 



Transparent page sharing – a technique that vSphere uses to identify redundant copies of identical memory pages and collapse them into a single shared copy, eliminating the duplicates.

Ballooning – which is in essence borrowing memory. A balloon driver inside the guest inflates on demand and reclaims memory from one virtual guest, making it available to another guest.

Memory compression – allows pages to be swapped to a cache in memory, rather than to disk. When placed into the cache they are compressed, freeing up pages in memory.

Paging – if memory becomes tight, the hypervisor pages out to disk to make more memory available.
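To make the relationship between these four techniques concrete, here's a minimal sketch of how the hypervisor escalates through them as free memory shrinks. The thresholds are illustrative stand-ins for ESXi's documented soft/hard/low free-memory states – treat the exact numbers as assumptions, not gospel.

```python
# Minimal sketch: which reclamation techniques ESXi engages as host
# free memory shrinks. The 4%/2% thresholds are illustrative stand-ins
# for the documented soft/hard free-memory states.

def active_reclamation(free_pct: float) -> list[str]:
    techniques = ["transparent page sharing"]   # runs opportunistically all the time
    if free_pct < 4.0:                          # "soft" state: start ballooning
        techniques.append("ballooning")
    if free_pct < 2.0:                          # "hard" state: compress, then swap
        techniques += ["memory compression", "paging (host swap)"]
    return techniques

for free in (10.0, 3.0, 1.5):
    print(f"{free:>4}% free -> {', '.join(active_reclamation(free))}")
```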

Hyper-V memory management features

Hyper-V is quite a bit different in that it does not have as many memory management features; the main one is dynamic memory.
 
 
 

Dynamic memory for enlightened Windows VMs – Hyper-V dynamically allocates memory to Windows virtual machines (not available for Linux) using hooks into the guest operating system.
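To illustrate the idea, here's a hypothetical sketch of the kind of calculation dynamic memory performs: take the guest's reported demand, add a configured buffer, and clamp the result between the VM's minimum and maximum. The field names are mine for illustration, not Microsoft's API.

```python
# Hypothetical sketch of a dynamic-memory target calculation.
# Hyper-V tracks guest memory demand via integration services and
# assigns demand plus a configured buffer, clamped to min/max.

from dataclasses import dataclass

@dataclass
class DynamicMemoryConfig:
    minimum_mb: int      # memory the VM never drops below
    maximum_mb: int      # memory the VM never exceeds
    buffer_pct: int      # extra headroom, e.g. 20 means demand * 1.20

def memory_target(demand_mb: int, cfg: DynamicMemoryConfig) -> int:
    """Return the amount of memory to assign to the VM."""
    with_buffer = int(demand_mb * (1 + cfg.buffer_pct / 100))
    return max(cfg.minimum_mb, min(cfg.maximum_mb, with_buffer))

cfg = DynamicMemoryConfig(minimum_mb=512, maximum_mb=4096, buffer_pct=20)
print(memory_target(1500, cfg))  # 1800 MB: demand plus 20% headroom
print(memory_target(4000, cfg))  # 4096 MB: clamped to the maximum
```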
 
Next week I'll share a list of what I consider the key performance metrics to look at for capacity and performance management, discuss the challenges in benchmarking virtual environments, and explain the test environment and testing methods I used to compare vSphere with Hyper-V.
 
Dale Feiste
Consultant




Wednesday 29 August 2012

vSphere vs Hyper-V performance showdown


The inspiration for this series came from collecting data in both of these environments. I had athene® bringing in data from both vSphere and Hyper-V and thought it would be interesting to compare the two platforms by running some workloads against them.

For those of you who use these virtualized environments, you know there are many metrics that can be collected from each.

All the metrics I collected are the standard types you would use for ongoing capacity management, and all were gathered at the hypervisor level.
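One example of why hypervisor metrics need a little care: vCenter reports CPU ready time as milliseconds summed over the sampling interval, so it has to be normalized into a percentage before samples from different intervals are comparable. A quick sketch (20 seconds is vCenter's default real-time sample interval; adjust for your own collection interval):

```python
# Sketch: normalize vCenter's "CPU ready" summation (milliseconds per
# sample interval) into a percentage, per VMware's published formula:
#   ready % = ready_ms / (interval_seconds * 1000) * 100

def cpu_ready_pct(ready_ms: float, interval_s: int = 20) -> float:
    return ready_ms / (interval_s * 1000) * 100

# 1000 ms of ready time in a 20 s real-time sample is 5% per vCPU --
# a level often treated as worth investigating.
print(cpu_ready_pct(1000))        # 5.0
print(cpu_ready_pct(1000, 300))   # ~0.33 for a 5-minute rollup
```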

Architecture

I’ll start by taking a look at the architecture:

vSphere ESXi version 5
 
VMware advertises this as having a small hypervisor footprint, and it does.
Hyper-V
 
Hyper-V is constructed quite differently: when you install the Hyper-V role on Windows Server 2008 R2, it creates what Microsoft terminology refers to as a partition.

Microsoft refers to two different types of partitions:
Enlightened – has an integration services pack installed which provides extra features.
Unenlightened – a virtual machine that is running on its own with no special features.
The equivalent of the integration packs for VMware is VMware tools.
Having the integration packs or VMware tools installed is highly desirable unless there is some particular reason to run without them, as they help you realize the full benefits of virtualization.
 
Windows Hyper-V runs with a larger footprint.
 
On Friday I'll be looking at the memory management features of both, and later in this series I'll examine how they perform under the tests that I ran, so keep following.
 
Dale Feiste
Consultant
 

 
 

 

Friday 17 August 2012

Trend 5 - ITIL methodology

ITIL provides a process methodology where everyone within the organization speaks the same language and everybody understands what the framework of IT is going to be.

For capacity management this is very important, as we are again looking at three different levels: business, service and component.
Business – understanding how we handle business capacity management; how is the business changing going forward?

Service – the services have to underpin the business, so when the business changes, how are we going to design and adapt our services to meet those needs?
Component – those services are underpinned by the components and the technology, so how are we going to provide the correct technology to deliver all of the services and meet the SLAs that the business requires?

New delivery platforms – Software as a Service, Infrastructure as a Service and Platform as a Service – will continue to have an impact on how we put our enterprise together to meet business needs.
Organizations should decide on a best practice for capacity management and from there do a gap analysis to determine what they do well and what they aren’t doing well. This highlights areas where improvement is needed.




Being able to look at a transaction in a pictorial way is useful – you can see where in those particular transactions you have performance issues.
Having a capacity management information store – where you capture and store performance and capacity data on a historical basis – is essential; it enables trending, modeling, alerting and analysis.
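As a simple illustration of the trending such a store enables, here's a minimal sketch that fits a straight line to historical utilization samples and estimates when a threshold will be crossed. Real capacity models deal with seasonality and non-linear growth; this just shows the idea, with invented data.

```python
# Minimal trending sketch: fit a straight line to historical utilization
# and estimate when it will cross a threshold. Standard library only
# (statistics.linear_regression needs Python 3.10+).

from statistics import linear_regression

days = [0, 7, 14, 21, 28, 35]                    # sample age in days
cpu_util = [42.0, 44.5, 46.0, 49.0, 51.5, 53.0]  # % busy, invented data

slope, intercept = linear_regression(days, cpu_util)

threshold = 80.0
if slope > 0:
    days_to_threshold = (threshold - intercept) / slope
    print(f"~{days_to_threshold:.0f} days until {threshold}% at current trend")
else:
    print("utilization is flat or falling; no breach projected")
```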

So to summarize my series on 5 trends in capacity management, to manage these trends you must:

·         Understand your entire IT infrastructure
·         Monitor every hop of every transaction
·         Gain valuable metrics into your end user’s experience

You can accomplish this by using SharePath to fully track transactions and athene® to optimize Capacity Management. Enabling athene® to use this valuable data drives better capacity management decisions and predictions.
Rich Fronheiser
VP, Strategic Marketing


Wednesday 15 August 2012

Trend 4 - Virtualized applications. Be paranoid, be smart, be lazy

As I mentioned on Monday, to manage these crazy, dynamic and complex applications you’re going to have to be paranoid, be smart and be lazy.

Here’s what I mean by that.
Be Paranoid = Watch Every Transaction from Every User

It’s important that we know what every user is experiencing.
If at 10:25 in the morning an end user reports that an application is running slow, wouldn’t it be great to go back over our information for that time and then be able to drill down from there? Does this mean that you should be actively looking at every transaction every minute of every day? Absolutely not – have an automated tool record the data for you, so that you can access that information when there’s a problem, or be alerted to a threshold breach in advance and ward off problems.
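For illustration, here's a minimal sketch of that kind of automated recording and alerting: keep a rolling window of response times and raise an alert when the average breaches a threshold. The window size, threshold and sample data are all invented.

```python
# Sketch: record every transaction and raise an alert when the rolling
# average response time breaches a threshold. Illustrative values only.

from collections import deque

class TransactionMonitor:
    def __init__(self, window: int = 100, threshold_s: float = 2.0):
        self.samples = deque(maxlen=window)  # most recent response times
        self.threshold_s = threshold_s

    def record(self, response_time_s: float) -> None:
        self.samples.append(response_time_s)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.threshold_s:
            self.alert(avg)

    def alert(self, avg: float) -> None:
        # Hook an SMS/email gateway in here in a real deployment.
        print(f"ALERT: rolling average {avg:.2f}s exceeds {self.threshold_s}s")

monitor = TransactionMonitor(window=5, threshold_s=2.0)
for rt in (1.2, 1.4, 1.9, 3.5, 4.1):   # the 10:25am slowdown shows up here
    monitor.record(rt)
```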
Be Smart

We want to look across space, i.e. geographical locations and server to server. Maybe the problems are only happening at one location – and if so, why? Is there one location that has historically had more problems than others?

It’s also important to look across time: are there times of day when certain types of transactions are causing performance issues? Are there times of day when we are exceeding service levels?
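Here's a small sketch of what looking across space and time can boil down to: bucket recorded transactions by location and hour of day, then list the worst buckets first. The record format and data are invented.

```python
# Sketch: slice transaction response times across space (location) and
# time (hour of day) to find where and when problems cluster.

from collections import defaultdict
from statistics import mean

# (location, hour_of_day, response_time_s) -- invented records
records = [
    ("London", 10, 1.1), ("London", 10, 1.3), ("London", 14, 0.9),
    ("Chicago", 10, 3.8), ("Chicago", 10, 4.2), ("Chicago", 14, 1.0),
]

by_bucket: dict[tuple[str, int], list[float]] = defaultdict(list)
for location, hour, rt in records:
    by_bucket[(location, hour)].append(rt)

# Worst buckets first: one slow site/hour stands out immediately.
for (location, hour), times in sorted(
        by_bucket.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{location} @ {hour:02d}:00 -> avg {mean(times):.1f}s over {len(times)} txns")
```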



Be lazy
Being lazy means automating this process as much as possible.

It’s impossible to sit and look at every transaction in real time in a command centre or war room. We need to have service level agreements (SLAs) in place, and those SLAs need to be as detailed as possible. They can be end-to-end SLAs, or SLAs for time spent on a database server, time spent at a middleware server, or even time spent on a particular segment of the network. Set up alerting so you receive an SMS when there is a problem – something like SharePath can do this for you.
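As an illustration of how detailed SLAs can be expressed in data, here's a hypothetical sketch: each tier of a transaction gets its own time budget, so a breach report names the tier at fault rather than just the end-to-end total. All names and budgets are invented.

```python
# Sketch: detailed, per-tier SLAs. Each tier of a transaction gets its
# own time budget; a breach report names the tier, not just the total.

SLA_BUDGETS_S = {          # hypothetical budgets per tier
    "web": 0.5,
    "middleware": 1.0,
    "database": 1.5,
    "network": 0.5,
}
END_TO_END_SLA_S = 3.0

def check_sla(tier_times_s: dict[str, float]) -> list[str]:
    breaches = [f"{tier}: {t:.2f}s > {SLA_BUDGETS_S[tier]:.2f}s"
                for tier, t in tier_times_s.items()
                if t > SLA_BUDGETS_S.get(tier, float("inf"))]
    total = sum(tier_times_s.values())
    if total > END_TO_END_SLA_S:
        breaches.append(f"end-to-end: {total:.2f}s > {END_TO_END_SLA_S:.2f}s")
    return breaches   # feed this to an SMS gateway in practice

print(check_sla({"web": 0.3, "middleware": 0.8, "database": 2.4, "network": 0.2}))
```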

You can catch my fifth and final trend on Friday.
 
Rich Fronheiser
VP, Strategic Marketing

Monday 13 August 2012

Trend 4 - Virtualized applications

Let’s look at Trend 4, Virtualized applications

Increasing complexity of application environments – the variety of components involved is shown below.

       Web servers, DB servers, app servers, identity servers…

       UNIX, Linux, Windows, J2EE, and .NET …

       Systems, apps, storage, switches, accelerators, …

       LAN, WAN, VLAN, internal, external…

Different platforms, different storage types, switches, load balancers and applications – and the behavior of all of these will vary by time, day, function and location.
Organizations have multiple tools – few are integrated or provide real end-user insight. What is the real end-user experience of your applications?

More complex integrations – the integration of applications and services has become more complex, so now we’re looking at relationships with users, customers, partners and suppliers, whether local or international.
Be paranoid, be smart, be lazy – what do I mean by that? Catch up with me on Wednesday to find out.
Rich Fronheiser
VP, Strategic Marketing





Friday 10 August 2012

Trend 3 - Cloud-based environments

Let’s take a look at the third trend Cloud-based environments

Cloud computing continues to grow – we have to assume a multi-tenant model; capacity will very rarely be dedicated to finite groups of users or processes. It is now very common for a cloud to service multiple organizations, multiple users, multiple services and multiple applications.

“Cloud-bursting” – the ability to provide additional resource at short notice. It can be an efficient way to meet temporary capacity needs, but the downside is that relying on cloud-bursting as a regular option, instead of managing your resources well, can prove very expensive – as the break-even sketch at the end of this post illustrates.
Capacity planners will need to update their skills – nowadays environments are far more complex and there is a need for capacity planners to keep up to date with modern technologies.

Capacity management and monitoring tools are improving to deal with this trend – vendors are adjusting their software to keep up. Our own athene® capacity management software has evolved, with Integrator, to allow you to capture, store, report, trend and alert on data from physical, virtual and mainframe environments, with the added advantage of bringing in data from “hard to reach” sources.

It’s important to monitor your infrastructure from end to end and watch every transaction. Likewise, it’s important to be able to get information from each piece of that infrastructure – the servers, the networks, the storage and the mainframe – to piece together how things are performing.
When things don’t perform well, you then have a wealth of information to go back to, helping you determine why you are having performance issues.
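On the cloud-bursting cost point above, the break-even arithmetic is simple enough to sketch. All the prices here are made up purely for illustration.

```python
# Sketch: break-even between regularly renting burst capacity and
# provisioning it permanently. All prices are made up for illustration.

burst_cost_per_hour = 1.50      # premium on-demand rate
owned_cost_per_month = 400.00   # amortized cost of permanent capacity

break_even_hours = owned_cost_per_month / burst_cost_per_hour
print(f"bursting beats owning only below ~{break_even_hours:.0f} hours/month")

# ~267 hours here -- roughly 9 hours a day. Occasional peaks favor
# bursting; sustained daily demand favors provisioned capacity.
```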
I’ll be taking a look at Trend 4 on Monday, enjoy your weekend.
Rich Fronheiser
VP, Strategic Marketing

Wednesday 8 August 2012

Trend 2 - Physical to virtual environments


The second trend in capacity management is physical to virtual environments.
Companies are continuing to migrate from physical to virtual (P2V) – virtual environments such as VMware and AIX LPARs are much more complex, so it has become much more difficult to manage, monitor and understand what capacity means in these environments. Certainly the level of detail we look at, and where we focus our attention, is different in a virtual environment.

We’re going to look at hosts and at the virtual machines that run on those hosts, but what we may be most concerned with is having adequate capacity to cope with what is going on today. Typically, applications and services share those resources, so our focus may be whether we have enough capacity for today, the next 3 months, the next 6 months – we need to be looking at things from a higher level as well as a lower level.

Cost of unexpected errors during migration can be crippling – migrations from a physical to a virtual environment need to be well planned. If you go into production without enough capacity, the effect could be felt by many different applications and services running in that environment, whereas years ago a mistake would affect only one application or service. It is very rare that a virtual environment isn’t running different services for different areas of the organization.
The key is to understand your complete IT infrastructure during a migration, so it’s essential to monitor your critical application performance in the physical environment first. Are you already monitoring it? Are you capturing resource utilization numbers with a tool like athene®? Are you looking at existing production transactions with a tool like SharePath?

It’s very important that you do, because understanding the end-user experience is crucial. Telling me that the CPU is 90% utilized is meaningless on its own – without context, I wouldn’t know whether that is a good thing or a bad thing. Only by understanding the end-user experience can we put context into those resource utilization, capacity and performance numbers. Verifying the performance in the new virtual environment is absolutely critical: we don’t want to go from a 5-second response time in the physical environment to an 8-second response time in the virtual environment – no-one would be happy with that.

You want to make sure that you are meeting your service level agreements (SLAs) now, you want to plan so that you meet your SLAs in the new infrastructure, and you need to go back and verify by comparing the performance you are getting in the new environment with what you had in the physical environment in the past.
It doesn’t matter where your users are – you need to know what kind of experience they are getting.
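To illustrate the verification step, here's a minimal sketch that compares response-time percentiles captured before and after a migration – averages alone can hide the tail latency that users actually feel. The sample data is invented.

```python
# Sketch: compare response-time percentiles before and after a P2V
# migration. Averages hide tail latency; percentiles expose it.

from statistics import quantiles

physical = [4.8, 5.0, 5.1, 4.9, 5.2, 5.0, 5.3, 4.7]   # seconds, invented
virtual  = [4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.3, 8.9]

def p95(samples: list[float]) -> float:
    # 20-quantile cut points; index 18 is the 95th percentile
    return quantiles(samples, n=20, method="inclusive")[18]

for name, data in (("physical", physical), ("virtual", virtual)):
    print(f"{name:>8}: mean {sum(data)/len(data):.1f}s  p95 {p95(data):.1f}s")
# The means look close, but the virtual p95 reveals the tail regression.
```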

Rich Fronheiser
VP, Strategic Marketing


Monday 6 August 2012

Trend 1- IT cost and Service Level Agreements

Let’s start by looking at the first trend, IT cost and Service Level Agreements.

IT professionals must now conduct cost-value analysis – it’s not feasible today to provide an unlimited amount of hardware, or the absolute best possible service, to every one of your clients, services and applications. IT professionals must now conduct a cost-value analysis to ensure that they’re providing a service that meets the service level agreement while optimizing the money spent to achieve it.
Facilities and energy costs will consume more budget - Facility and energy costs have become more expensive and continue to consume a good deal of your budget.

New application environments typically make service level management more challenging – we frequently see application environments where 30-40 servers provide a single service. When you’re looking at that many pieces of hardware, all the network segments between them, all the centralized storage and the virtualization that goes on in that environment, then examining individual transactions, determining why a service isn’t meeting its service level agreement (SLA), and solving the problem so that it does, is a very challenging process.
To manage Service Level Agreements:

Be proactive – you want to provide services that meet your SLAs, so you have to be proactive. Plan from the beginning going forward, knowing how the workloads are going to change over time.
Go beyond load testing and look at desktop response times – load testing lets you know whether a transaction completes, but the key thing to look at is actual production transactions, showing you exactly how long a transaction takes to complete from a user’s desktop.

Keep an eye on your worst transactions – you spend the majority of your time worrying about, and looking at, about 10% of your environment. Keep an eye on your most challenging and most important transactions and you’ll be more successful at meeting your SLAs.
My final recommendation here is to:

Keep an eye on your top 10 killers
The top 10 killers in my mind aren’t necessarily those that are performing badly; they are the ones that could potentially become killers – those that are absolutely crucial to your business.

So how do you keep an eye on transactions?



Here's a typical report from SharePath, our application transaction software. It allows you to monitor end-to-end response times and see where time is being spent during a transaction.

These are not synthetic transactions – these are real production times experienced by real users – and they allow you to see each hop of a transaction, from server to server and network to network, so it’s easy to identify exactly where your transactions are falling down.
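A per-hop view like this boils down to something simple: given a timestamp at each tier boundary, compute each hop's share of the end-to-end time and flag the dominant one. Here's a minimal sketch with invented checkpoints (this is my illustration, not SharePath's data format).

```python
# Sketch: break an end-to-end transaction into per-hop times and flag
# the hop that dominates. Timestamps and hop names are invented.

checkpoints = [            # (hop boundary, seconds since transaction start)
    ("user desktop", 0.0),
    ("web server", 0.3),
    ("app server", 0.7),
    ("database", 3.1),
    ("response rendered", 3.4),
]

total = checkpoints[-1][1]
for (name, start), (_, end) in zip(checkpoints, checkpoints[1:]):
    share = (end - start) / total * 100
    flag = "  <-- dominant hop" if share > 50 else ""
    print(f"{name:>17} -> next: {end - start:.1f}s ({share:.0f}%){flag}")
```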
You can’t monitor every single transaction within your organization, as there simply isn’t the time or resource, so choose your top transactions – the ones that matter – and monitor those.

Trend 2 on Wednesday

Rich Fronheiser
VP, Strategic Marketing

Friday 3 August 2012

Five trends in capacity management


When we look at capacity management today there are a lot of challenges out there. We typically look at capacity management as three layers: the business layer, the service layer and the component layer.

Planning ahead to meet business requirements and SLAs while managing:

Business - there is constant change with mergers and acquisitions taking place, lines of business being dropped or changed, changes to keep up with competitors.
Service - we’re looking at new ways of providing services; today we have multi-tiered, complex, service-oriented architectures to contend with.
Component – With the advent of virtualization and cloud we now have very dynamic environments where one day a service will be provided by one piece of physical hardware and the next, behind the scenes, it is moved to a different piece of physical hardware.
You have to manage and monitor all of this: how much additional capacity you have, and which applications are having or causing problems.
The real goal here is to fulfil the business requirements and meet your service level agreements, both now and in the future.
We’ll start by looking at the first trend on Monday.
Rich Fronheiser
VP, Strategic Marketing