‘It’s all too much!’ the IT man screams.
Certainly our world is getting more and more complicated with virtualization. On the server side we have web servers, database servers, application servers, identity servers and more. We have UNIX, Linux and Windows; J2EE, .NET and so on. A given application or service touches multiple storage systems, network devices, LANs, WANs and VLANs, and might be spread across a mix of public and private Clouds. The days of terminal and host seem like a distant memory for the older among us.
Virtualized workloads might be, indeed should be, variable over time. A P2V (physical-to-virtual) migration probably meant you looked at workload profiles to ensure that the virtual machines on a given host did not run applications whose peak processing times coincided.
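To make that concrete, here is a minimal sketch of such a profile check in Python. The hourly CPU figures, workload names and the 80%-of-peak threshold are all invented for illustration; a real tool would work from measured profiles:

    # Minimal sketch: check whether candidate workloads for one host have
    # coinciding peak hours. Hourly CPU profiles are hypothetical examples.
    from itertools import combinations

    # Average CPU utilisation (%) per hour of day for each workload
    profiles = {
        "web":    [10] * 8 + [60] * 10 + [10] * 6,   # peaks in business hours
        "batch":  [70] * 6 + [5] * 18,               # peaks overnight
        "report": [5] * 16 + [55] * 8,               # peaks in the evening
    }

    def peak_hours(profile, threshold=0.8):
        """Hours where utilisation is within 80% of the workload's own peak."""
        peak = max(profile)
        return {h for h, u in enumerate(profile) if u >= threshold * peak}

    # Flag workload pairs whose peak hours overlap; they are poor co-tenants.
    for (a, pa), (b, pb) in combinations(profiles.items(), 2):
        overlap = peak_hours(pa) & peak_hours(pb)
        if overlap:
            print(f"{a} and {b} both peak at hours {sorted(overlap)}")
        else:
            print(f"{a} and {b} peak at different times - safe to co-locate")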
All of this variety needs to be managed from a capacity perspective. As ever, we are not managing a fixed entity; indeed, entities are much less fixed than in our past. Workloads can be dynamically switched across hosts for performance gains, new hosts are quickly and easily configured, and workloads are moved around to ensure optimum placement. The business keeps changing: old services are replaced with new ones, mergers and acquisitions happen, and the business grows or shrinks organically. Mobile computing and portable devices add new management challenges and make our users and access points even harder to pin down.
To cope with all this diversity and change, a capacity manager needs to be both more specific and more general. Being specific means trying to have a view on at least all critical transactions. To know which are critical you need to know about them all, so measuring everything automatically from end to end is vital. The old-fashioned 80/20 rule still applies: 80% of the work is probably accounted for by 20% of the transactions, so identifying that 20% is critical.
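As a simple illustration of that identification step, the following Python sketch (with invented transaction volumes) picks the smallest set of transaction types that accounts for roughly 80% of the measured work:

    # Minimal sketch of the 80/20 analysis: given measured totals per
    # transaction type (hypothetical numbers), find the smallest set of
    # transactions that accounts for ~80% of the work.
    def pareto_set(workload, target=0.80):
        """Return the transactions covering at least `target` of total volume."""
        total = sum(workload.values())
        selected, covered = [], 0.0
        for name, volume in sorted(workload.items(), key=lambda kv: -kv[1]):
            selected.append(name)
            covered += volume
            if covered / total >= target:
                break
        return selected

    # Transactions per hour by type - illustrative data only
    volumes = {"login": 5000, "search": 42000, "checkout": 9000,
               "report": 800, "admin": 200, "browse": 38000}

    critical = pareto_set(volumes)
    print(f"Focus capacity work on: {critical}")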
Greater generality perhaps comes from your tool selection. Most organizations have come to the virtualized world with a selection of point tools for different environments and have then added further point tools on top for their virtual worlds. Deep-dive performance analysis will still require quality point tools in many or all areas. Reporting and spotting the trends that affect capacity decisions will be much easier, however, with a solution that integrates all the other data sources. With Cloud options, resourcing decisions are moving back into the user domain. Having a reporting solution that covers every aspect of your environment from a capacity perspective is a significant way in which you can help users make their resourcing decisions; it also gives you the means of providing good advice and guidance as input to their decision making.
Remember, in 2012 the key for capacity managers will be to provide value, not just optimize costs. This value can, and will need to, be delivered across the entirety of the complex environment in which virtual applications exist. Put together the right combination of end-to-end perspective, deep dive and ‘across the board’ integrated reporting to ensure you can provide the most value for your business.
We’ll finish on Monday by considering the importance of having the right capacity process in place to underpin your capacity management activities in 2012 and beyond.
Andrew Smith
Chief Sales & Marketing Officer