Friday 23 December 2011

Holiday Greetings from Paul Malton – CEO


This has been a good year for Metron-Athene.  We have expanded our team of professional people and focussed on feeding customer ideas back into our expanding range of Capacity Management tools.  The use of agile development, with its fortnightly goal setting, has helped us achieve a quicker turnaround in a fast-changing environment.  Customer comments on our new offerings continue to be complimentary.

The major deliverable of this approach this year has been our Integrator module.  This enables any data to be imported into the capacity database at the heart of our Athene software.  That data is then available along with any other data captured using Athene’s agents, agentless or framework interface capture modules for use by the extensive analysis, reporting and planning facilities that Athene provides.  Many Integrator ‘Capture Packs’ already exist for areas such as network, storage, application performance monitoring and more.  New ones can be quickly and easily created by Metron or Athene users via the easy-to-use Integrator browser interface.  As with so much in modern life the key is flexibility – Integrator adds to the already wide range of Athene data capture options to enable all capacity data to be stored in and used from one database with one toolset.

The economic background has been challenging and there seems to be little enthusiasm from political leaders.  In our market, what matters most is improving the usability of information at a reducing cost.  That almost defines Athene.  When companies are facing tough challenges, a cost-saving product is very popular.

Thank you to all our staff, customers and partners who work with us.   I hope that you have a very happy holiday and return refreshed and enthusiastic.

Wednesday 14 December 2011

Forecasting – When Modeling is not the only Choice

When most organizations are tasked with forecasting changes within their IT environment, many immediately think they have to create models.

There are two types of modeling techniques that can be used when forecasting changes.

The first is analytical modeling, which looks at the workloads within an environment and measures the arrival rate of work into the server. This technique allows an organization to see the impact of business changes very quickly.
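
The underlying mathematics can be sketched in a few lines. This is not Athene's model, just a minimal illustration using the textbook M/M/1 queue, with invented figures: response time is driven by how close the arrival rate gets to the rate at which the server can complete work.

    # Minimal analytical sketch (illustrative only, not Athene's model):
    # an M/M/1 queue, where mean response time R = 1 / (mu - lambda).

    def mm1_response_time(arrival_rate, service_rate):
        """Mean response time for an M/M/1 queue."""
        if arrival_rate >= service_rate:
            raise ValueError("server saturated: utilization >= 100%")
        return 1.0 / (service_rate - arrival_rate)

    service_rate = 50.0  # transactions the server can complete per second

    # "What if the business grows?" - step up the arrival rate and watch
    # response time degrade long before the server is 100% busy.
    for arrivals in (10, 25, 40, 45, 48):
        r = mm1_response_time(arrivals, service_rate)
        print(f"{arrivals:>3}/s  util={arrivals / service_rate:4.0%}  response={r * 1000:6.1f} ms")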

The other, more time-consuming, technique is simulation modeling. Simulation modeling revolves around duplicating the existing environment and running synthetic or real-world transactions at a certain rate to determine the impact of business changes.
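
Again purely as an illustration (with the same invented figures as above), a toy simulation of a single server shows the contrast: instead of calculating response times from a formula, we generate synthetic transactions, push them through a model of the system and measure what comes out.

    import random

    # Toy discrete-event simulation of a single FIFO server (illustrative
    # only): generate synthetic transactions and measure their response
    # times, rather than deriving them analytically.
    random.seed(42)
    arrival_rate, service_rate = 40.0, 50.0  # per second, invented figures

    clock, server_free_at = 0.0, 0.0
    response_times = []
    for _ in range(10_000):
        clock += random.expovariate(arrival_rate)      # next arrival time
        start = max(clock, server_free_at)             # queue if server busy
        server_free_at = start + random.expovariate(service_rate)
        response_times.append(server_free_at - clock)  # queueing + service

    print(f"mean response: {1000 * sum(response_times) / len(response_times):.1f} ms")
    # This should land near the analytical 1 / (mu - lambda) = 100 ms from
    # the sketch above - the simulation just takes longer to get there.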

Often, though, modeling does not fit the forecasting of certain components. That is when trending or stacked bar charts provide the greatest value in understanding the impact of those business changes. Examples include, but are not limited to, consolidating servers, looking at the headroom available within a VMware cluster, and disk space growth.
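
Disk space growth is the classic example. No queueing theory is needed; a straight-line trend over recent observations, projected forward to the capacity limit, answers the question 'when do we run out?'. A minimal sketch with invented figures:

    # Trending sketch (invented figures): fit a least-squares line to
    # weekly disk usage samples and project when it hits capacity.
    used_gb = [410, 422, 431, 445, 452, 466, 475, 489, 497, 510]
    capacity_gb = 600

    n = len(used_gb)
    mean_x, mean_y = (n - 1) / 2, sum(used_gb) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(used_gb))
    sxx = sum((x - mean_x) ** 2 for x in range(n))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x

    weeks_left = (capacity_gb - intercept) / slope - (n - 1)
    print(f"growing ~{slope:.1f} GB/week; capacity reached in ~{weeks_left:.0f} weeks")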

Key items many Capacity Managers want to know are:

  • Do I need to forecast and manage this server or environment against peaks or averages?
  • When are models appropriate for forecasting?
  • When are trend and stacked bar charts appropriate for forecasting?
  • How can I show both peaks and averages for a metric on the same chart? (see the sketch below)
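
For that last question, a simple approach (sketched here with matplotlib and invented data) is to reduce the raw samples to a daily average and a daily peak, then plot both series on the same axes:

    import random
    import matplotlib.pyplot as plt

    # Invented data: hourly CPU% samples for 14 days, reduced to a daily
    # average and a daily peak so both fit on one chart.
    random.seed(1)
    days = list(range(1, 15))
    daily = [[random.uniform(20, 40) + (50 if h in (10, 14) else 0)
              for h in range(24)] for _ in days]
    averages = [sum(d) / len(d) for d in daily]
    peaks = [max(d) for d in daily]

    plt.plot(days, averages, marker="o", label="daily average")
    plt.plot(days, peaks, marker="^", linestyle="--", label="daily peak")
    plt.xlabel("day"); plt.ylabel("CPU %"); plt.legend()
    plt.title("Peaks and averages on the same chart")
    plt.show()
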
I'll be running a webinar on January 12  with a discussion of forecasting techniques which will answer these questions and more.

In the meantime don't miss our free Capacity Management for SAN Attached Storage webinar, running tomorrow, which discusses ways to assist the storage administrator in the complex task of managing SAN attached storage.

To register for these or any of our webinars, visit http://www.metron-athene.com/training/webinars/webinar_summaries.html
Charles Johnson
Principal Consultant

Friday 9 December 2011

Make sure that you and your customers have a Happy Holiday.

Yes it’s that time of the year again.  It seems almost everybody is trawling the mall in search of gifts for their loved ones. If they’re not at the mall then they’ll be on the internet, helping retailers to have another record year for online shopping. 

All of this means one thing to IT - more capacity.

As capacity planners we come across some interesting seasonal patterns but nothing seems to strike fear into the IT department as much as the holidays.

The good news is that you have had 12 months to plan for it….

So what interesting things happen at this time of year?

* New Phones!  That's right, at the top of our gift list are mobile phones.  And what happens?  They sit beneath our trees (or in our cupboards) until suddenly, on the 25th December, there is a frenzy of activity and everyone wants their phones to work.  Imagine the massive spike for services that support the registration of new SIM cards.

* Retail.  This is the month where money is made and boy does that generate a whole load of transactions!  Whether the sale is online or in store the backend of the business feels the strain. You’d better hope that you got it right or those extra staff in the warehouse will be bored!

* Web Pages.  The growth of online retail means the web will be busier than ever this year; what can be better than bargain hunting from the comfort of your own home? Just when you think it's all over, people log on to spend those vouchers, or look up instructions, or just to avoid playing family games.  Why can't they just leave IT alone? It's the holidays!  If you want to avoid unhappy customers, that web page has to be up and running well!

* 00:00:00 1st Jan.  No, your cell will not connect, and your text will be very late.  The spike in calls and texts at this moment is so huge that the only plan is to protect the core services from total failure.

Those people out there experienced in capacity planning will have plenty of data from last year at their disposal and will know what they expect to happen, what headroom they have and what they can cope with (and when to pull up the drawbridge).

What happens if you haven’t planned for it and are now reading this and wondering what to do?

* Start monitoring your servers now!  Information is power.  (We have an athene® Virtual Appliance that you can download and trial for free at http://www.metron-athene.com/_downloads/_virtual_appliance/index.html)

* Find out what the business expects.  Get on the phone and start talking to your business units about what they expect your customers to do.

* Model it.  Whether that’s simple aggregation and “what if” trends on CPU/Memory/IO or analytical modelling, set about identifying what the expected business workload will do to your servers and transaction response times.  Don’t just model what is expected either. Load up the servers in the model until you know what they can cope with and feed back to the business how much extra they can go over before it will become a problem.
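
At its crudest, that first-cut "what if" can be a few lines of arithmetic: scale today's peak utilization by the expected business growth and compare it against your headroom threshold. A sketch with invented figures (real models account for much more, such as queueing effects on response times):

    # Back-of-envelope "what if" (invented figures): scale current peak
    # CPU by expected workload growth and check it against headroom.
    # Assumes CPU scales linearly with business load - a simplification.
    servers = {"web01": 35.0, "web02": 42.0, "db01": 58.0}  # peak CPU %
    headroom_limit = 80.0

    for growth in (1.2, 1.5, 2.0):  # +20%, +50%, +100% business workload
        for name, cpu in servers.items():
            projected = cpu * growth
            flag = "OK" if projected < headroom_limit else "AT RISK"
            print(f"x{growth}: {name} -> {projected:5.1f}%  {flag}")

    # And the feedback to the business: how much extra load each server
    # can absorb before it becomes a problem.
    for name, cpu in servers.items():
        print(f"{name} has headroom for ~x{headroom_limit / cpu:.1f} of today's load")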

* Contact us.  If you have no plan and you’re wondering where to start, it’s not too late. We’ll be happy to work with you, helping you to see if you can cope with the Holidays this year.

Remember capacity management is not just for Christmas!  Happy Holidays.


Phil Bell
Consultant

Monday 5 December 2011

Metron at Gartner - Bringing data and information together

We've been dealing with capacity management and an IT landscape that has been rapidly increasing in complexity for the last quarter-century. Our software, consulting and training have helped many companies, in all sectors, manage their IT capacity, with a focus now on managing complex, cloud-based environments.
Solutions to this very topic are what Metron will be discussing at the Gartner Data Center conference taking place in Las Vegas December 5-8.
At the conference we'll be on hand at Booth 12 to talk visitors through our proposed 3-pronged approach:

athene®, for infrastructure capacity management, data storage and reporting needs
The capture, storage, analysis, and reporting of capacity data is crucial for customers to accurately manage the capacity of resources, services, and applications.  Both cloud providers and cloud customers can benefit from timely, accurate capacity reporting and analysis, and can forecast future resource requirements that will continue to satisfy agreed-upon service levels.

athene® also has the capability to report, trend, and alert on data that comes from external sources, such as service level data, transaction data, and any time-series file containing any data valuable to capacity management efforts.

SharePath, for the capture of end-to-end transaction data
If a cloud customer doesn’t know or, worse, can’t quantify the amount of time it takes for transactions to complete, it’s impossible to properly police or enforce service level agreements.  SharePath allows for the capture of end-to-end transaction response time data for all real end-user transactions.  This data can then easily and automatically be imported into athene® for reporting, trending and alerting purposes.  The automation means that a customer won’t have to spend time carefully looking over reported response times for thousands or millions of transactions to ensure SLA compliance.

Integrator Capture Packs, for the capture, storage, reporting, trending, and alerting of data from “hard to reach” data sources
Every company has tools and platforms that store vital numeric data in log files or other sorts of time-series text files.  athene®, via its Integrator, can import any time-series numeric data for reporting, trending, and alerting purposes.

Upon client request and as part of consulting engagements, we have been very active in developing Integrator ‘Capture Packs’ – pre-defined templates that can import popular types of data into the athene® database. 
Integrator Capture Packs have been developed for data coming from the HP Performance Manager, Hyper-V, storage devices, network devices, Apache and IIS web logs, iSeries, and many other data sources.  Best of all, Metron or the athene® user can quickly create custom Capture Packs for their own business data, enabling capacity planning from a business perspective.
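
Under the covers, a Capture Pack is doing something conceptually simple: reducing raw logs or tool output to time-series numbers. The actual Capture Pack definitions are built through the Integrator interface, but as a rough illustration of the idea (the log file name and output format here are invented, not the real Capture Pack format), an Apache access log can be reduced to hits per five-minute interval:

    import re
    from collections import Counter
    from datetime import datetime

    # Illustration only - not the actual Capture Pack format. Reduce an
    # Apache access log to hits per five-minute interval, i.e. the kind
    # of time-series numeric data Integrator can import.
    TIMESTAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})")

    hits = Counter()
    with open("access.log") as log:          # assumed file name
        for line in log:
            match = TIMESTAMP.search(line)
            if not match:
                continue
            t = datetime.strptime(match.group(1), "%d/%b/%Y:%H:%M:%S")
            bucket = t.replace(minute=t.minute - t.minute % 5, second=0)
            hits[bucket] += 1

    for bucket in sorted(hits):              # one CSV row per interval
        print(f"{bucket.isoformat()},{hits[bucket]}")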

Companies have long been looking for a cohesive end-to-end capacity management solution. Our trilogy brings data and information together, enabling infrastructure planning to move beyond the infrastructure itself and become end-to-end planning from the business and user perspective.
We believe our customers are going to love the flexibility that our solutions bring.
If you're unable to catch up with us at Gartner and would like to know more visit our website or contact us.
Andrew Smith
Chief Sales & Marketing Officer

Wednesday 30 November 2011

Capacity management – dates for your diary!


As we’ve re-iterated across many of our blogs the job of the Capacity Manager and those in capacity management roles has got increasingly more difficult over the years due to the diversity of the enterprise.

We recognise this fact, and though there aren't many things in life that you can get for free, we continue to run our series of free educational webinars focussed solely on capacity management. Increasingly high numbers of attendees from across the globe confirm that our topics are those of most interest to you at the moment.

We have a great package of events coming up over the next few months.

In December we’re running 3 events, kicking off tomorrow with Cloud Computing and Capacity Management, a look at what it will be realistic for the capacity manager to provide to the business in this complex world of interacting services.

This is followed by an athene® Live! event on December 13, which showcases how we address capacity management for Cloud in practice with our own athene® software, Integrator data integration solution and SharePath end-to-end transaction monitoring software.

On December 15 we’re back again with Capacity Management for SAN Attached Storage where we’ll be discussing ways to assist the storage administrator in the complex task of managing SAN attached storage.

Our New Year agenda is already set and will be looking at Modeling, vSphere5 vs Hyper-V and Green IT amongst other topics. Check it out and register to come along to our events http://www.metron-athene.com/training/webinars/webinar_summaries.html

If you have your own ideas for capacity management topics you'd like us to cover, drop me an email at Andrew.smith@metron-athene.com and I'm sure we can put together a session for you.

I look forward to meeting you at our webinars.

Andrew Smith
Chief Sales & Marketing Officer

Tuesday 29 November 2011

The key to making Cloud Computing work within an enterprise is an effective Capacity Management process.


Cloud Computing is an area that has garnered much attention in the IT industry and it looks as though all organizations will have some form of Cloud implementation over the next few years.

What Cloud Computing allows organizations to do is manage their resources in a way that stops the infrastructure continually spiralling in growth. This could be a mix of private and public Cloud, and for public cloud it could involve a variety of external providers.

The main reason Cloud Computing works is virtualization, on both the x86 and Unix/Linux platforms.  The key to making Cloud Computing work within an enterprise is an effective Capacity Management process.

An enterprise should have a Dashboard / Portal along with regularly scheduled reports that not only the IT staff, but also the Business Owners can look at to assess their environments. 

It's critical to keep the Business Owners apprised of their resource usage, as it could change dynamically as other factors in the environment change.  The types of reports you would produce for a Private, Community or Hybrid Cloud architecture may have similar qualities, but they need to focus on the particular resources used.

 
What is without doubt is that the range of environments you will have to manage will become ever more complex. What you can and should do in terms of capacity management will vary with the nature of your own implementation.

Join me for a webinar on December 1 to find out what it will be realistic for the capacity manager to provide to the business in this complex world of interacting services, and how we can help you achieve it.



Charles Johnson,
Principal Consultant

Wednesday 23 November 2011

Cloud Computing and Capacity Management


One significant change with the move to Cloud-based systems is that capacity management becomes a much more strategic activity than in the past.

Analyst groups such as Gartner are promoting this evolution and it is further supported by initiatives such as the ITIL® v3 refresh.  Rather than a purely resource-level task, capacity management now needs to be an integral part of how the business chooses the best solution, for example between private and public Cloud.

What you buy across the cloud could vary from simple Infrastructure as a Service (IaaS), such as processing power and disk storage, through to full Software as a Service (SaaS) offerings like salesforce.com, where the provider delivers the hardware and application and, more importantly, is responsible for service quality.

For anything other than SaaS, the need to use capacity management techniques to plan requirements in advance will be ever more important.  Buying something in advance is cheaper than buying at the last minute. Emergency buying of Cloud resource (cloud bursts) might be easy to do but it is likely to be prohibitively expensive over time.

With service potentially coming from a variety of internal and external sources, guaranteeing service quality becomes at once more difficult and more necessary.

What is without doubt is that the range of environments you will have to manage will become ever more complex and what you can and should do in terms of capacity management will vary with the nature of your own implementation.

From a capacity perspective you won't be able to measure everything you want in the Cloud, so measure what you can, control what you can and don't worry about the rest.  Tools and processes that support this open approach, such as those we provide, are essential.

Find out what it will be realistic for the capacity manager to provide to the business in this complex world of interacting services by coming along to our free webinar on December 1st.



Andrew Smith
Chief Sales and Marketing Officer

Wednesday 26 October 2011

Managing IT Capacity – Make it lean and green


No one now doubts the wisdom of going 'green' - reducing the environmental impact of IT on the world. Before long the IT industry will have overtaken the airline industry as a polluter of the environment.

Most organizations now have initiatives underway to reduce their carbon footprint, adopting more responsible policies to lessen any detrimental impact of IT on the world.

Strategies range from the quick and simple such as 'think of the environment before printing this e-mail' to the longer term and more complex such as virtualizing your server estate.

New technologies can go a long way to helping companies meet their green initiatives, but only if they are effectively managed; otherwise the benefits to both the company and the environment are squandered.

Capacity management has a role to play in helping you implement green strategies that optimize your infrastructure and maximize the green savings to be made.

Failure to implement sound, sustainable strategies will result in spiralling costs or poorly performing infrastructure with the inevitable impact on your business goals.

We're all trying to go green in an IT context that is becoming ever more complex. From a server perspective, everyone now accepts that by virtualizing our vast ranks of under-utilized servers, we can do more with less: reduce power consumption, reduce data center space, reduce air conditioning required and more. This promises a 'double bubble' of benefit: lower costs and lower carbon footprint. Fewer servers means less staff time required to manage them. Your business benefits as you save time, save energy and save money.

Let’s face it ‘Managed Capacity’ sounds much more like an approach that fits with a Green agenda than ‘Unmanaged Capacity’.

We’re going to be speaking at the Green IT Expo in London on November 1st as we’re passionate about managing IT resources to ensure you deploy the capacity you need, when you need it. Minimising money, people and carbon costs.



Andrew Smith
Chief Sales & Marketing Officer


Monday 24 October 2011

Cloud Computing - Complexity, Cost, and Capacity Management

Computer systems have always been complex.  The range of useful work computers can do is extended by layering complexity on top of the basic machine.  Current “Cloud Computing” capabilities are no exception.  Regardless of complexity, the costs of systems must be understood and reported to enable business planning and management.

So what are the perspectives that need to be understood, reported on and normalized to enable comparison and calculation of unit costs?  The business likes to look at total costs, irrespective of how their service is provided.  They are right to do this – what happens under the covers to deliver that service shouldn't be their concern.  They just want to know that the service level they require is being provided and what that costs per business transaction or process.

On the systems side, it used to be relatively simple.  Internal systems were the norm.  We had to account for costs of hardware, software, floor space, power, air conditioning, ancillary costs such as insurance and of course, staff costs.  As applications and services became more interlinked and disparate in implementation, it became ever harder to compare and calculate costs for a particular service delivered to users.

Outsourcing and now the Cloud have added yet more levels of complexity.  On one level it seems simple: we pay a cost for an outsourced provision (application, hardware, complete data center or whatever).  In practice it becomes ever more difficult to isolate costs.  Service provision from outside our organization is often offered at different tiers of quality (Gold, Silver, Bronze etc).  These have different service levels, and different levels of provision, for example base units of provision and overage costs that vary and make direct comparison awkward.

Increasingly the model is to mix all of these modes of service provision, for example hybrid Cloud implementations featuring internal and external Cloud provision plus internal services all combined to deliver what the user needs.

Each facet of systems use can be monitored and accounted for in terms of resource utilization and, ultimately, dollar costs.  However, overly detailed data quickly adds volume and cost, becomes unwieldy, and delays analysis and reporting, while overly simplified data weakens analysis and adversely impacts the quality of decision support.  The monitoring points and level of detail for the data to be collected are driven by trade-offs between cost, utility, and performance, and are highly detailed and dynamic.  Frequently, though, data collection is minimized and aggregated to a level which obscures the detail needed to make some decisions.  For example, CPU metrics aggregated to 5-minute periods and suitable for capacity planning are not very useful for understanding CPU resource consumption for individual transactions, a performance engineering concern.
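
That aggregation point is easy to demonstrate with a few lines of (invented) data: a five-minute average can look perfectly healthy while hiding the short bursts that hurt individual transactions.

    import random

    # Invented data: 300 one-second CPU samples, mostly idle, with a
    # 10-second burst of near-saturation. The 5-minute average hides it.
    random.seed(7)
    samples = [random.uniform(10, 30) for _ in range(300)]
    for i in range(100, 110):
        samples[i] = random.uniform(90, 100)

    print(f"5-minute average: {sum(samples) / len(samples):.1f}%   "
          f"peak second: {max(samples):.1f}%")
    # Typical output: an average in the low 20s despite seconds near 100%.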

A distinction perhaps needs to be made between different types of costs.  We might need to move towards calculating regular on-going fixed costs for users, supplemented by variable costs based on changing circumstances.  To my mind this is a little like having the running costs of your car covered by a general agreement (free servicing for 3 years) with those qualifying criteria any insurance business likes to slip in (assumes no more than 20,000 miles per year average motoring, standard personal use, does not include consumables such as tires, wiper blades).  If we go outside the qualifying criteria, we have to pay for individual issues to be fixed. 
Cloud in particular lends itself to costing IT services based on a fixed charge plus variable costs dependent on usage.
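
In code, that charging model is just a base fee plus overage, and once you have the usage and transaction counts, the unit cost falls out. A minimal sketch with an invented tariff:

    # Invented tariff: fixed monthly charge, an included usage allowance
    # and a variable overage rate - the "car servicing plus consumables"
    # model described above.
    FIXED_MONTHLY = 10_000.00    # base charge for the service tier
    INCLUDED_HOURS = 5_000       # instance-hours covered by the base
    OVERAGE_RATE = 2.50          # per instance-hour beyond the allowance

    def monthly_cost(instance_hours):
        overage = max(0.0, instance_hours - INCLUDED_HOURS)
        return FIXED_MONTHLY + overage * OVERAGE_RATE

    hours, transactions = 6_200, 1_450_000
    cost = monthly_cost(hours)
    print(f"month: ${cost:,.2f}  per transaction: ${cost / transactions:.4f}")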

Going back to those complex systems - we need to ensure we normalize our view of services across this complex modern IT model, a fancy way of saying we must compare apples with apples.  The key is being able to define a transaction from a business perspective and relate it to the IT processing that underlies that definition. 
Application Transaction Management tools such as the SharePath software distributed by Metron enable you to get this transaction visibility across diverse infrastructures, internal and external.
Capacity Management data capture and integration tools like Metron’s Athene then allow you to relate underpinning resource metrics to those transactions, at least for systems where you are able or allowed to measure those resources. 
This brings us to a last key point about external providers, outsourcers or Cloud providers.  You need to ensure that they provide you with that resource level data for their systems or the ability for you to access your own systems and get that data.  Once you have the numbers, calculating costs per transaction is just math.  Without the numbers, you can’t calculate the cost. 
Business increasingly wants to see the costs of its services, so make sure you demand access to the information you need from your suppliers, as well as from your own organization. Then you can put together a comprehensive and realistic view of costs for services in today's multi-tiered internal and external application world.

GE Guentzel
Consultant
http://www.metron-athene.com/

Friday 21 October 2011

Often capacity managers feel they’re in a never ending battle to show their value


You can be successful 99% of the time with your predictions and analysis, but in the 1% of the time you're not, business users begin to doubt you.  What do you do in this circumstance, besides throw paper at them?

One thing that might help is to have a defined capacity management process. 

A well-defined process leads to gaining the confidence of the business users where your projections are concerned.  It allows you to fall back on the information, show them the process you are following and convince them that it is not hit and miss.  That 1% is typically down to issues with the quality of the data you have to work with, the timeliness of receiving that data, and more. Having a defined, visible process shows that it is not the process itself that is at fault.  Many times the reaction from the business users comes because the unexpected will, in many cases, cause an increase in cost.

Along with the challenges we're already facing in the community, the business users are now asking how "our" transactions are responding.  As you ask them "what do you mean by a transaction?", you get "when the person fills out a form on the screen and hits enter, how do we know everything is working fine?"  Now we all know what it means when they say "fine".  It means "I don't want to hear from my people that their applications are running slow and they can't get their work done."  This is where an additional item in your capacity management process is necessary, that addition being Application Transaction Monitoring.

Application Transaction Monitoring gives you the ability not only to monitor the capacity of your servers within the enterprise; it enables you to determine what is happening from the application point of view. This monitoring is all the more critical these days as the setup of IT environments becomes ever more complex.

What Application Transaction Monitoring brings to the table is the ability to monitor an application transaction every step of the way throughout its lifecycle.  Taking this information and marrying it to the server data allows you to gain both breadth and depth of insight into the IT environment.

Now back to the question that I began with: what do you do when the business users say they don't have confidence in your information?  Show them you have a well-defined and visible capacity management process.  Show them that this process has both depth, from business data down to technical resource levels, and breadth, covering the whole application from their perspective.  By having both of these, you are going to be able to show them both sets of information and talk them down off the ledge.  Any issues should move from 'you got the numbers wrong' to 'well, we can see how your process should work, what input do you need to make it work successfully for us?'

Are we as capacity managers going to be 100% accurate? No, but by showing that we are gathering and analyzing all the information that is available, the business will walk away with an understanding that you have given the best results possible.

Now when they leave your office, you can throw the paper at the door.

Charles Johnson
Principal Consultant

Wednesday 19 October 2011

VMware vSphere Performance Management Challenges and Best Practices

 
Ever wanted to know what affects CPU Performance in vSphere? What a World is? How and why ESX uses memory reclamation techniques? Or why it is recommended to install VMware Tools?


I’m running a free to attend webinar tomorrow which will identify and highlight the key performance management challenges within a vSphere environment, whilst providing best practice guidelines, key metrics for monitoring and recommendations for implementation.
I’ll be focusing on the key resource areas within a virtualized environment, such as CPU, Memory, Storage and Network and also provide an introduction into Virtualization Performance challenges and some further information on VM performance and virtualizing applications.

  • Challenges of x86 virtualization
    • Four levels of privilege (Ring 0 – 3)
    • Hardware – Intel VT-x and AMD-V
    • Software – Binary Translation
    • Memory Management – MMU
    • Default Monitor Mode
  • CPU Performance Management
    • What is a World?
    • CPU Scheduling – why is it important to understand how it works?
    • SMP and Ready Time (see the sketch after this list)
    • NUMA Aware
    • What affects CPU Performance?
    • Host CPU Saturation? Causes and resolutions
    • Increasing VM efficiency – timer interrupts and large memory pages
    • ESX Host pCPU0 High Utilization – why is this bad?
  • Memory Performance Management
    • Memory reclamation – how and why?
    • What do I monitor and what does it mean?
    • Troubleshooting tips
    • vSwp file placement guidelines
  • Networking
    • Reducing CPU load using TCP Off-Load and Jumbo Frames
    • What is NetQueue and how will it benefit me?
    • What to monitor to identify Network Performance Problems
  • Storage
    • Setting the correct LUN Queue Depth
    • Key metrics – what to monitor
    • Identifying key factors of Storage Response Time
    • Overview of Best Practices
  • Virtual Machine
    • Selecting the right guest operating system – why does this matter?
    • VM Timekeeping
    • Benefits of installing VMware Tools
    • SMP – only when required
    • NUMA Server Considerations
  • Applications
    • Can my application be virtualized?

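As a small taste of the metric arithmetic the session covers, vCenter reports CPU ready time as milliseconds accumulated per sampling interval; turning that into a percentage is a common first step. A sketch, assuming the commonly documented 20-second real-time sampling interval:

    # Convert vCenter's cpu.ready.summation (milliseconds of ready time
    # accumulated per sampling interval) into a percentage. Assumes the
    # commonly documented 20-second real-time sampling interval.
    def ready_percent(ready_ms, interval_s=20.0):
        return ready_ms / (interval_s * 1000.0) * 100.0

    # A vCPU that spent 2,000 ms of a 20 s interval ready but not running:
    print(f"{ready_percent(2000):.1f}% ready")  # 10.0% - worth a closer look
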
Why not come along and discover these and many more VMware vSphere performance management challenges and best practices? Register at http://www.metron-athene.com/training/webinars/index.html

Jamie Baker
Principal Consultant

Friday 23 September 2011

Networking for Capacity and Performance Managers

Apologies to Rich for interrupting his great blog series, he’ll be back on Monday, but I wanted to update you with some key information.

Information, and keeping up to date with current trends, is the lifeblood of the teams responsible for Capacity and Performance management, and it's not often that this comes for free.

Those following UKCMG regularly will no doubt have noticed that the number of events has increased and includes this year's free forum event at Ditton Park, Slough.

UKCMG are a great source of information for Capacity and Performance managers in all areas of IT, giving them the chance to network with like-minded individuals and this event is definitely worth a look.

UKCMG have also announced the addition of a Regional Event in the South West; it will be hosted by our friends at Everything Everywhere on 4th October 2011 in Aztec West, Bristol.

It promises to be another content-rich event, with papers ranging from a user paper on "Forecasting in a WSOA environment" by Jim Dodd of Everything Everywhere to an educational session from our own Jamie Baker on "vSphere: Managing the internal storm".

Further information on the event, including directions and the agenda, is available at www.ukcmg.org.uk

I’ll certainly be attending both events and hope to see you there.


Rob Ford

Principal Consultant

Thursday 1 September 2011

Can you really justify capacity management? – 5/5

To date we’ve considered justifying capacity management (CM)from a financial perspective.

There has been consideration of the costs involved with implementing a new CM software product or process.  In contrast we looked at the savings that could accrue, splitting them into 'hard' or actual cash savings and 'soft' savings, where there was a benefit to the business that was not necessarily an increase in revenue or a reduction in costs.
Justifying this to a CIO usually involves putting these together into an equation that forms a return on investment (ROI) calculation.

I stated this equation in my first blog as:
(Savings from CM software) – (Money spent on CM software) > zero, pretty quickly

These days, ‘zero, pretty quickly’ typically means less than a year.  So, if there are $2m savings and the costs are $1m, CM looks like a good deal.
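
Laying those figures out month by month, in a spreadsheet or a few lines of code, makes the 'pretty quickly' part concrete. A sketch using the illustrative $2m/$1m figures above; the timing assumptions (licence paid up front, savings only starting once implementation is done) are invented, and are exactly the kind of subtlety discussed below:

    # Illustrative payback timeline for the $2m savings / $1m costs
    # example. All timing assumptions are invented for the sketch.
    MONTHS = 24
    licence_upfront = 600_000
    monthly_maintenance = 400_000 / MONTHS   # rest of the $1m, spread out
    monthly_saving = 2_000_000 / 12          # steady-state annual saving
    RAMP_MONTHS = 4                          # no benefit while implementing

    cumulative = -licence_upfront
    for month in range(1, MONTHS + 1):
        cumulative -= monthly_maintenance
        if month > RAMP_MONTHS:
            cumulative += monthly_saving
        if cumulative >= 0:
            print(f"payback reached in month {month}")
            break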

One must consider the time factor however.  If buying new software, software costs are incurred on day one.  Maintenance costs are monthly throughout the year.  Training and implementation services usually happen according to a schedule over the initial months.  Maintenance costs can increase after year one.  CM and other staff costs are incurred over time and will probably increase over time.  Hardware costs could be one time (purchase) or spread over time (lease or internal chargeback).  Somewhere in the calculation one must include all associated costs, such as hardware and environmental (power, space) costs.
Benefits rarely if ever accrue from day one.  It takes time to set up and derive benefits from new software or processes.  Aside from the difficulties of deriving or getting the business to offer estimates of some of the cost savings that capacity management will bring, one needs to assess how long it will take before your new product or process delivers those savings. 

What I would say is that the more of these subtle considerations are included by the capacity analyst or manager making his case for budget, the more likely the CIO will be to accept the justification and ROI.  CIOs understand the value of money: that spending deferred now will buy more resource later, that it takes time for benefits to accrue, that even if salaries aren't saved, freeing people up to do other things has a value over time.  Not including or making reference to these subtleties will make your ROI calculation look simplistic and overly optimistic, not characteristics favoured by most CIOs.  The time and effort spent finding out, or even just making informed guesses at, some of the more complicated aspects of ROI will pay dividends in getting your case approved.
If all goes to plan and you’ve used a spreadsheet approach to putting the figures together, you should end up with a picture something like this:


Here we see that despite a significant outlay, the CM solution will have paid back its initial investment in six months and produces a return of several hundred percent within two years.

This is a reasonable expectation when implementing our Athene software.  Not just mathematical formulae and pretty graphs then!
After twenty-plus years in this business I'm still learning every day, so I'm open to your comments on capacity management and welcome your feedback.

Andrew Smith
Chief Sales & Marketing Officer

Tuesday 30 August 2011

Can you really justify capacity management? – 4/5

So far we’ve considered areas where a new capacity management (CM) product or better CM services can save you money, building the benefit side of a return on investment calculation.  It costs you money to make a change however so let’s take a look at the cost side of the equation. 

In my first blog I outlined areas of cost to consider:

  • Costs of CM software, including on-going maintenance
  • CM team resources needed to use that software
  • Initial training for the new software
  • Implementation and on-going services required
  • Other personnel resources needed to help with the implementation and on-going usage of the software

Let’s now look at these in more detail.

If you’re looking to buy a software product to improve your CM, there will be software costs and on-going maintenance.  If you’re doing a Return on Investment (ROI) over several years, remember to factor in increases in annual maintenance costs over time and any further purchases you might need to make, for example upgrades or further roll out of the capacity software. 
It’s also worth bearing in mind that your ROI will look more realistic to the CIO if you remember that although software costs are incurred on day one, the benefits and savings identified in the previous three blogs take time before they are enjoyed.  Offset the savings to allow time for implementation, training and usage of the new software and processes before the benefits begin to pay dividends.  Likewise be realistic about factoring in when training and implementation services will take place.  Unless you are paying for everything up front, there could be costs that can be deferred.
Resources from other teams that are necessary to implement new CM software or processes should not be forgotten.  In my early days, the Capacity Manager would install Athene himself.  Nowadays, and for very good reason, we need the support of sys admins, the security team, the network team and more before a CM software solution can be put in place.  These initial costs need to be included. 

Likewise there will be on-going costs for support from these other teams.  Just as we had 'soft savings', these are 'soft costs'.  The business is paying these people's wages already, but if they weren't supporting the CM team, they could potentially be adding value to the business in other ways.  Typically this is a minor element of overall costs.

You must also allow for staff costs and any physical resources, such as servers and databases, to support your CM software.  If you have a CM team of four before and after purchase, you could consider staff costs to remain the same, but remember the 'soft savings' on staff.
Those same four people will be able to do a lot more thanks to the new software and/or processes.  A value could and should be put against this increased efficiency.
For hardware resources to support the CM team, again remember the cost of upgrades, additional hardware and price increases over time.
In my final blog on Wednesday I’ll be taking a look at return on investment rather than purely cost justification.

Andrew Smith
Chief Sales & Marketing Officer

Friday 26 August 2011

Can you really justify capacity management? – 3/5

Last time I considered various areas around which one could associate cost savings that will be derived from buying capacity management (CM) software or implementing better capacity processes.  Some of those savings offer more direct financial benefits than others.  Your CIO might question some of your claims if you are trying to justify expenditure, so I thought I would expand on some of those savings a little more, to try to help you handle the questions.

The areas of savings we considered were:

1. Less outage due to capacity issues
2. Fewer times when service levels are poor
3. More productive CM staff
4. Fewer performance and capacity crises needing fixing
5. Deferring hardware purchase, including things such as accelerating virtualization
6. Consolidating CM software products

I divide these into two categories: hard and soft. 
Hard savings are where there are direct money savings that the business will see.

Soft savings are where there are notional savings, but hard cash isn’t necessarily how the value of the saving is derived.
In most cases, I find our Athene CM software is readily justified on the basis of hard savings alone. 

Hard Savings
I see 'less outage due to capacity issues' and 'deferring hardware purchase, including things such as accelerating virtualization' as hard savings.

Downtime costs the business money – lost sales, for example, if a web site that sells direct to the public is down.  Most financial businesses put a huge value against lost business due to downtime, hence the drive over the years to 99.999% or 'five nines' availability.
Not buying hardware until later saves real money from your hardware budget.  It has a monetary value, and an 'opportunity cost' value as well – that money can be used to create value elsewhere.  However you view it, it has a value for the business.

There is further hidden value as well.  Due to effects such as Moore's law, commonly taken to mean that the price performance of computing power doubles every eighteen months, every $1 not spent now will either buy more power or the same power for less than $1 in the future.  The longer the spending is delayed, the greater the saving becomes.  Ask any accountant – the time value of money can be significant, even in these times of very low interest rates.
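
If you take that eighteen-month doubling purely as a working assumption, the value of deferral is easy to quantify:

    # Working assumption only: price performance doubles every 18 months,
    # so the same computing power costs half as much 18 months from now.
    def deferred_cost(cost_today, months_deferred):
        return cost_today * 0.5 ** (months_deferred / 18.0)

    for months in (6, 12, 18):
        print(f"defer ${100_000:,} by {months} months -> "
              f"~${deferred_cost(100_000, months):,.0f} for the same power")
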
Consolidating CM software products is a clear hard saving.  If you can consolidate several CM tools and replace them with one tool, such as Athene, the maintenance costs of those products are a direct saving. Over time such savings can be very large.

Soft Savings
Sometimes people talk about areas of saving as if they were actual cash savings, but in fact they aren’t.  Examples are ‘more productive CM staff’ and ‘fewer performance and capacity crises needing fixing’.

One can often make a case that introducing a product like Metron's Athene means one person can do twice as much work.  We notionally put a saving against Athene of one person's salary.  Unless that person is let go, which never happens, that saving doesn't happen in cash terms.

What is true, however, is that the person 'saved' by the product is free to do other work.  Assuming that whatever else they do adds value to the business, this is a genuine benefit to your organization.  Expressing their salary as a cost saving, even though that salary continues to be paid, is usually the best way of expressing this benefit financially.  The same holds true of time saved for non-CM team members who benefit from better capacity management, e.g. sys admins who spend less time fire-fighting problems.
Fewer occasions when service levels are poor is a harder nut to crack.  It should be a hard saving, like downtime.  Unfortunately it's harder to get the business to put a value on times when service levels are poor, as distinct from when they are non-existent.  If the system is running slowly, business could be lost.

Imagine a web site that responds slowly when someone goes on to inquire, even though it is up and running.  Your visitor may get frustrated and type in the URL of a competitor.  It's impossible to gauge whether they would have bought had they stayed, or how much they would have spent.  Thus 'slowtime' tends to need the business to estimate its value.  This is often most readily done by valuing it as a percentage of downtime, for example making an assessment that for every $1 of downtime we are likely to experience, a further 10 cents are lost due to slowtime.

CIOs love to ask you for a return on investment calculation and then savage you if you include soft savings. Of course, in my opinion, it is those closest to the business, such as the CIO, who should be able to put a value on those soft areas most easily.  Ah well, they're the boss.
On Monday I'll consider the other side of the equation: what costs for doing better CM you need to allow for, that reduce the value of the savings we have considered to date.

Enjoy your weekend.

Andrew Smith
Chief Sales & Marketing Officer