Friday 27 July 2012

In these days of limited resources and tightened budgets, every IT Service Management (ITSM) activity has to be tuned to meet real demands and provide valuable services.

Capacity Management is usually viewed as a combination of various techniques addressing performance and capacity. The teams typically involved come from areas such as development, testing, domain architecture and systems programming, and may include a dedicated capacity management team, depending on the nomenclature used in the organization.

There are some ITSM activities, such as the service desk and event management, that are typically viewed as necessary. Many others are viewed as desirable but not always implemented, such as demand management and service level management. Still others are perceived by many as peripheral or an unnecessary overhead: capacity planning, modelling, software performance engineering, performance measurement and performance testing.

Given this situation, it would seem to be the right time to ask which capacity management activities should be done, and when, if at all.

Capacity planning, performance measurement and performance testing are certainly getting more challenging with multi-tiered applications, virtualization of many types and cloud computing. Over the last few years, many have argued that the cost of adding more hardware is significantly less than the cost of measurement, testing and modelling, so why bother? As virtualization becomes more prominent, this attitude seems to be spreading.
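To make concrete what the modelling camp is selling, consider the kind of back-of-the-envelope queueing model a capacity planner might start from. This is a minimal sketch, not anyone's production tooling, and all the figures (a 50 ms service time, the arrival rates) are invented for illustration: a simple M/M/1 approximation shows why response time degrades sharply as utilization climbs, which is exactly the effect "just add hardware" tends to rediscover the expensive way.

```python
def mm1_response_time(service_time_s: float, arrival_rate_per_s: float) -> float:
    """M/M/1 approximation: R = S / (1 - U), where U = arrival_rate * service_time."""
    utilization = arrival_rate_per_s * service_time_s
    if utilization >= 1.0:
        raise ValueError("system is saturated (utilization >= 100%)")
    return service_time_s / (1.0 - utilization)

# Invented figures: a 50 ms service time at increasing load
for rate in (5, 10, 15, 18, 19):  # requests per second
    r = mm1_response_time(0.050, rate)
    print(f"{rate:>2} req/s -> utilization {rate * 0.050:.0%}, response {r * 1000:.0f} ms")
```

The point of even a toy model like this is the non-linearity: doubling load from 25% to 50% utilization barely moves response time, while the last few percent towards saturation multiplies it, which is why measurement and forecasting matter before the pool runs hot.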

It is even viewed as possible to use a virtualization pool to avoid the whole planning process. Advocates of this approach would point out that it is often very hard to get good business transaction measurements and forecasts.

When the business becomes as dynamic as the virtual infrastructure, they feel that the only salvation is rapid reaction instead of methodical planning.

Yet many capacity planners have watched management buy round after round of “cheap” hardware and spend ever more on deluxe solutions for routine applications. The total cost of ownership is nowadays determined as much by software licences and system support (including accommodation and power) as by the hardware itself. Furthermore, the performance after such upgrades may well stay the same, that is to say poor, while overall costs skyrocket.
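The licence-and-support point can be made with arithmetic. The figures below are invented purely for illustration, but the shape of the sum is the familiar one: over a few years of licences, staff, accommodation and power, the "cheap" box itself is a minority of the bill.

```python
# Toy TCO comparison (all figures invented for illustration): the one-off
# hardware purchase versus the recurring costs that ride along with it.
hardware = 5_000                 # one-off purchase price of the server
annual_licences = 4_000          # software licences charged per year
annual_support = 3_000           # staff, accommodation and power per year
years = 3

tco = hardware + years * (annual_licences + annual_support)
print(f"3-year TCO: ${tco:,}")
print(f"Hardware share of TCO: {hardware / tco:.0%}")
```

With these assumed numbers the hardware is around a fifth of the three-year total, which is why another round of cheap boxes can still mean skyrocketing overall cost.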

Some form of planning is clearly needed, but how do you strike the right balance?

Companies really can’t afford to use the same methodology for a single $5K mini-server as they do for a $5M super-server or mainframe, but what about small servers across the enterprise? Where is the crossover? Is it always the same for every company? Is it always the same within a single company?

Does everything merit the same level of Service Management? What happened to demand management? Catch more on Monday.....

Adam Grummitt
Distinguished Engineer and Author of ‘Capacity Management – A Practitioner Guide’

http://www.itgovernance.co.uk/products/3077

A selection of white papers is available for free download at http://www.metron-athene.com/_downloads/index.html
