Friday, 29 May 2015

The Changing Face of Capacity Management - Private Clouds (4 of 5)

So, how does capacity management change when moving from a server-oriented or client-server environment to a private cloud?

First, it's important to understand that most organizations have many internal customers -- and that moving to a private cloud eliminates the separation of computing resources that client-server computing had always promised.  For some of those customers, this may be an uncomfortable arrangement, so it's important that capacity management -- and the regular communication of capacity and performance information -- be in place to reassure them.

Performance and capacity monitoring must be done at multiple levels.  While it's possible (and useful) to monitor what happens inside virtual machines, it's far more crucial to monitor from a more global perspective.  How much additional capacity is available for unseen peaks and for quick virtual machine deployments?
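To make that global question concrete, here's a minimal sketch in Python of how spare capacity might be totalled across the hosts of a private cloud, holding back a buffer for unseen peaks and failover before counting how many more standard-sized VMs could be deployed.  The host figures, the 25% buffer, and the VM sizing are invented for illustration -- in practice these numbers would come from your monitoring tool.

```python
# Minimal sketch: estimating cluster-level headroom in a private cloud.
# All figures are illustrative assumptions, not real measurements.

hosts = [
    # total and used CPU (GHz) and memory (GB) per physical host
    {"name": "esx01", "cpu_total": 48.0, "cpu_used": 31.2, "mem_total": 512, "mem_used": 390},
    {"name": "esx02", "cpu_total": 48.0, "cpu_used": 22.7, "mem_total": 512, "mem_used": 301},
    {"name": "esx03", "cpu_total": 48.0, "cpu_used": 40.1, "mem_total": 512, "mem_used": 455},
]

BUFFER = 0.25  # capacity held back for unseen peaks and N+1 failover

cpu_free = sum(h["cpu_total"] - h["cpu_used"] for h in hosts)
mem_free = sum(h["mem_total"] - h["mem_used"] for h in hosts)
cpu_usable = cpu_free - BUFFER * sum(h["cpu_total"] for h in hosts)
mem_usable = mem_free - BUFFER * sum(h["mem_total"] for h in hosts)

# How many more "standard" VMs (2 GHz, 16 GB) could be deployed right now?
vm_cpu, vm_mem = 2.0, 16
deployable = int(min(max(cpu_usable, 0) / vm_cpu, max(mem_usable, 0) / vm_mem))
print(f"Headroom after buffer: {cpu_usable:.1f} GHz, {mem_usable:.0f} GB "
      f"=> roughly {deployable} standard VMs")
```

Run against these invented numbers, the cluster shows plenty of free CPU but almost no memory headroom once the buffer is respected -- a shortfall that is invisible from inside any single virtual machine.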

Over-procurement and over-allocation of resources to one internal group may not be a major expense.  But when resources are over-allocated to dozens of internal groups, the cumulative cost to the organization can be quite large.  Furthermore, over-allocating the physical resources that back the cloud means hardware is purchased too far in advance, likely for more money than if those purchases were delayed until needed.  Of course, these concepts are nothing new -- it's just that within the cloud those additional purchases may be seen as necessary overhead, when in fact they're a problem of over-allocation pushed up from individual customers to the organization itself.
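The arithmetic behind that claim is simple but worth seeing.  A back-of-the-envelope sketch, with every number invented for illustration:

```python
# Illustrative arithmetic only: the cumulative cost of modest over-allocation.
# Assume each internal group asks for 20% more resource than it ever uses,
# and a hypothetical fully loaded cost of $400 per VM per month.

groups = 40                  # internal customers sharing the private cloud
vms_per_group = 25
overallocation = 0.20        # fraction of each VM's resources never used
cost_per_vm_month = 400      # hypothetical fully loaded cost, $/VM/month

wasted_per_month = groups * vms_per_group * overallocation * cost_per_vm_month
print(f"Wasted spend: ${wasted_per_month:,.0f}/month "
      f"(${wasted_per_month * 12:,.0f}/year)")
# 40 groups x 25 VMs x 20% x $400 = $80,000/month -- nearly $1M/year
```

No single group's 20% padding looks alarming, but summed across the organization it funds hardware that sits idle.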

Likewise, under-procurement and under-allocation of resources can cause problems in a cloud environment.  One of the selling points of the cloud is rapid deployment and complete flexibility - when a customer needs resources, they can be made available quickly.  If a company doesn't invest in having *enough* resources available, there is little advantage for internal customers in agreeing to a move to a private cloud.  Further, an inability to provide sufficient resources in this model means that many internal customers may fail to meet service level agreements at crucial times.

In my final installment on Monday, I'll talk about implementing a capacity management mindset that specifically deals with some of the challenges of working with a private cloud.


Rich Fronheiser
Chief Marketing Officer

Wednesday, 27 May 2015

The Changing Face of Capacity Management - Private Clouds (3 of 5)

Looking at the present and future of Capacity Management, it's clear that managing cloud environments is incredibly important as more organizations decide to move much of their computing to the cloud.

The first type of cloud I want to cover is the private cloud.  

In many cases, a private cloud implementation involves an organization using, in-house, the same virtualization and other technologies that public cloud providers use when delivering their services.  In a traditional public cloud implementation, services are delivered via the Internet; in a private cloud, services may be delivered internally by other means.

For example, an organization could decide it wants to change how it manages its Windows and Linux estate.  It decides to invest in VMware and turn all its existing physical servers into virtual machines, managed centrally by a VMware administration team and taking advantage of the automation VMware builds into vSphere.

Sounds good, right?

Well, one of the arguments for cloud computing is that clouds relieve the organization of day-to-day management, and computing becomes much more of a utility (turn on the switch and it just works).  In private clouds implemented in-house, none of this is true.  Companies have to buy, build, and manage the environments, and also deal with the complexity of having many applications running simultaneously in a virtual environment.

Still, companies feel this is a good investment and, in many cases, so do I.  However, it's just as crucial -- probably more so -- that the environment be properly planned and managed.  With a typical application running on its own server, a capacity or performance problem affects only that one application or service.  In a private cloud, a shortage of capacity could affect every application and service running within that cloud.

As of right now, most companies that are implementing virtualization technologies internally (and are taking advantage of technologies that allow for the rapid and seamless deployment and reallocation of resources) are setting up their own private clouds.
 
On Friday I'll deal with some of the things that need to be considered when looking at a private cloud.


Rich Fronheiser
Chief Marketing Officer

Monday, 25 May 2015

The Changing Face of Capacity Management (2 of 5)

Capacity planning looks different today than it did when I started in this profession over 15 years ago.  Yes, many mainframers have told me that they had virtualization back then and that nothing is really new.  I'll concede that to a degree, but for those of us who came into a client-server world, things have changed dramatically.

Back then, planning for me meant building, calibrating, and evaluating analytic models for a single system, then creating scenarios around them.  Workloads were what mattered, and quite frequently we looked at technical numbers and pretty much ignored the business.

Things have changed.

Today I spend more time talking about the overall capacity remaining in a virtualized datacenter than about the capacity remaining on a single server.  I spend more time talking about central storage and SAN performance than about locally-attached storage.  And for me the workload view has been replaced in many cases by a real-user transactional view we couldn't hope to get back in the days before Y2K.  It's the only way we know whether we're meeting those SLAs.  With services hitting many different tiers (and potentially disappearing into the cloud), that transactional view can actually be the easiest way to measure performance.  How much of a transaction is visible to the planner depends on the cloud implementation and on the service provider.
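As a small illustration of that transactional view, here is a sketch that checks a percentile-based SLA directly against measured response times.  The timings, the 95%/3-second target, and the nearest-rank percentile method are all assumptions for the example; in practice the data would come from a real-user monitoring feed.

```python
# Sketch: judging an SLA from real-user response times, not CPU counters.

def percentile(values, pct):
    """Nearest-rank percentile -- no external libraries needed."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# End-to-end transaction times in seconds (invented sample)
response_times = [0.8, 1.1, 0.9, 2.4, 1.0, 1.3, 0.7, 3.9, 1.2, 1.1]

SLA_PCT = 95       # e.g. "95% of transactions complete in 3 seconds or less"
SLA_TARGET = 3.0

p = percentile(response_times, SLA_PCT)
status = "meeting" if p <= SLA_TARGET else "breaching"
print(f"{SLA_PCT}th percentile = {p:.1f}s -> {status} the SLA")
```

The verdict comes from what users actually experienced: every server in the chain can look healthy while the transaction itself breaches the target.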

So, the cloud. Sure, it would be ideal to have knowledge of everything that's happening inside the cloud, but if you're paying for software-as-a-service (SaaS), you probably aren't going to get that from your vendor -- and, to be honest, why would you want that view?  You are paying a premium in order to reduce computing down to the utility level -- flip the switch and the light comes on.  When I turn a light on in my kitchen, I don't think about the wires, the devices, or the electrical grid -- I just want to know that the light will come on.  And I take for granted that there's enough electricity in the system to service all my household needs.

But we pay a price for that, and there's a chance that instead of paying for excess capacity in your data center, you're paying for excess capacity (for yourself and for other clients of the vendor) in the cloud.

In the next few installments, I'll start looking at clouds in more detail.  I'll focus on the different types of clouds, the different types of services that can be purchased from cloud providers, and how that affects how you'll do capacity planning.

Rich Fronheiser
Chief Marketing Officer

Friday, 22 May 2015

The Changing Face of Capacity Management (1 of 5)

Recently, I spent an evening talking about the current and future view of Capacity Management with an old friend over a few pints of beer.  I know, I know, normal people talk about sports and their kids, but we headed down this train of discussion and there was no turning back, at least until a pint needed refilling.  Before we get to the conversation in a future installment, let's set some background...


What we used to call Capacity Management just five years ago isn't quite the same today, at least not from the perspective of many of our clients and prospects.  While hosting a training session this morning, I noticed that about half the attendees mentioned that their companies are considering or have already implemented a cloud strategy.  And, to me, the cloud changes the game for Capacity Management entirely.


Back in *my* old days (about 15 years ago, when I first got involved in what we called "Capacity Planning and Performance Analysis"), we simply captured data from systems and made sure that we had enough CPU and memory headroom, that the disks were performing as expected, and that we weren't going to run out of disk space.


Then came client-server computing.  Then came complex suites of applications and middleware.  Then came the advent of virtualization, followed by its mainstreaming and proliferation.  The world of the capacity manager became more complex, but we were still able to look at all of the resource-level data and still able to manage the infrastructure to provide the right amount of capacity at the right time.


The cloud is the game changer, in my opinion.  Recently, I was speaking to someone whom I respect greatly, who lamented that we need to be able to see anomalies on systems that are running in "the cloud" in order to do our jobs.  It was at that moment that I realized that the old-school practice of capturing resource utilization data was simply not enough (or in some cases not even possible) and that our ways of thinking must change.  So, what does that epiphany mean for the capacity manager and for the vendors who are trying to sell Capacity Management products and services?


More on Monday...

Rich Fronheiser
Chief Marketing Officer

Tuesday, 19 May 2015

Capacity Management - What are we trying to tell the business?

Interacting with customers sometimes throws up a question we’re sure we should know the answer to, but that ends up being not as simple to answer as we’d expected – one of those questions that really makes us sit down and ponder how to answer it.

So here’s my question:  As a Capacity Manager, what am I trying to tell the business?
Am I trying to tell them about Utilizations? Headroom? Risk? Costs? Customer Service?
There are so many things I could be telling the business that it’s hard to say, “This is what I’m providing to the business”.

It struck me that if I can’t provide the answer, then maybe I’m trying to answer the wrong question.  Rather than dictate to the business what I can tell them, doesn’t it make more sense to ask them, “What is it that you want to know?”

As part of maturing their Capacity Management processes, one of our clients is doing just that.  They are successfully engaging with all manner of business units within their organization, showing them the sorts of things they can do and then asking the question: “What information do you want to have?  What is actually going to be helpful or useful to you?”  That might be a single metric on the intranet capacity report, or something with a lot more detail.

There are probably three main factors that have come into play in this successful initiative.

1.      The implementation of our Capacity Management tool, athene®, which gives them the ability to easily import and report on the metrics the business units are interested in – be that searches, transaction response times, transaction counts, or in fact any time-stamped metrics they want; whatever that part of the business considers to be the most important metric(s) to them.  (A sketch of this idea follows the list.)

2.      Integration with a real user monitoring APM tool, making it possible to see exactly what the customers (internal and external) are doing and experiencing.

3.      Having a member of staff on the capacity team who has a business background and the social skills to match – someone who can engage with the right people, who knows what they are currently doing to get their stats, and who can learn how to integrate them with the platform statistics (CPU, memory, etc.).
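As promised in point 1, here is a hypothetical sketch of what importing a business metric can amount to in practice: pulling any time-stamped series out of a CSV export so it can be reported alongside platform statistics.  To be clear, this is not the athene® Integrator API – just a generic illustration, and searches.csv and its column names are made up.

```python
# Hypothetical sketch: loading a business unit's time-stamped metric
# from a CSV export so it can sit alongside CPU/memory statistics.

import csv
from datetime import datetime

def load_metric(path, ts_field, value_field):
    """Read (timestamp, value) pairs from a business unit's CSV export."""
    series = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row[ts_field])
            series.append((ts, float(row[value_field])))
    return sorted(series)

# e.g. an export named searches.csv with columns: timestamp,searches
searches = load_metric("searches.csv", "timestamp", "searches")

# Once loaded, the series can be aligned by hour with platform data
# and plotted on the same intranet capacity report.
```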

Bringing these factors together has raised the profile of the Capacity Management team and shown its real value to the business.  Business units are now approaching them and asking for their data to be included, because they want the same advantages they see other departments getting.
So what are we trying to tell the business?  I’m here, and I’ve got some really great stuff I can do to help you.  What is it you want to know?

Don't forget to register for our 'Telling the Capacity Management Story' webinar May 27 http://www.metron-athene.com/services/webinars/index.html


Phil Bell

Consultant

Wednesday, 13 May 2015

360° Capacity Management - an end-to-end view (7 of 7)



On Monday I described some types of data needed to provide 360° Capacity Management. 

Those included server and mainframe resource data, centralized storage data, and network data – all kept in a historical Capacity Management Information System (CMIS) that allows for quick retrieval and analysis by the Capacity Manager.  I’ll wrap up the series today by mentioning some additional types of data that the capacity manager should have at hand:
Application data is incredibly important – many applications store key information in log files or in some other accessible way, giving the Capacity Manager yet another view to support reasoned decisions and recommendations. 

Sometimes, application data is processed by a tool in a way that makes the data immediately valuable – an example of this is the way athene® ES/1 takes mainframe application data, establishes severity levels for the data, and makes key recommendations.  Other application data needs to be brought into the CMIS in another fashion – an example of this is the way Metron’s flagship product, athene®, uses Integrator Capture Packs to bring key application data into the athene® CMIS.



Facility and data center data is important, as well.  One of the key items typically ignored by companies is the amount of power and other resources used by IT hardware.  Over-configuring the data center may sound like a way to ensure adequate capacity and the meeting of SLAs, but such a mindset can come at a huge cost.
I’d be remiss not to include key business metrics in the set of data needed by the Capacity Manager.  Key business data can include the number of business transactions over a particular time interval, the number of web hits to a server, or even the amount of money spent on things like centralized storage resources. 

Key business drivers can help the capacity manager identify periods of peak demand and can help the capacity manager predict resource requirements moving forward.

Finally, we can’t possibly ignore the end-to-end view and the perspective of the end user.  I remember a Capacity Management challenge I faced years and years ago: end users were not receiving the service they had been promised, IT was on the hook, and most people in IT spent their time pointing fingers at other parts of IT.  The endgame was that we went back to an old version of the application and everyone was happy again – we scrapped the new version entirely.

But why did we do that?  In the end, I’m convinced we did that because we had no way of figuring out which piece of the application was taking so long and it was easier to go back to something that just worked.

Having a mechanism to capture end-to-end response times for every production transaction as well as a mechanism to determine how much time is being spent at each hop in those transactions can be a key way to police service level agreements, troubleshoot existing problems, and help negotiate or renegotiate future SLAs.  Being able to store this data historically for the Capacity Manager to use in the future is important, as well.
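A minimal sketch of what that per-hop view buys you, assuming the monitoring in place can attribute elapsed time to each tier (all timings invented):

```python
# Sketch: breaking an end-to-end transaction time down per hop.

hops = [
    ("browser -> web tier",   0.12),
    ("web tier -> app tier",  0.35),
    ("app tier -> database",  1.90),  # the culprit stands out immediately
    ("database -> app tier",  0.20),
    ("app tier -> browser",   0.15),
]  # seconds, illustrative values

total = sum(t for _, t in hops)
print(f"End-to-end: {total:.2f}s")
for name, t in hops:
    print(f"  {name:<22} {t:.2f}s ({t / total:5.1%})")
```

With a breakdown like this, the finger-pointing in the story above would have lasted minutes rather than ending in a rollback.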
360° Capacity Management is Metron’s philosophy for Capacity Management in the era of cloud computing, virtualization, and ever-increasing complexity in the data center.  Feel free to contact me if you would like to talk more about specific solutions or about your organization’s philosophy on Capacity Management.

http://www.metron-athene.com/products/athene/index.html

Rich Fronheiser
Chief Marketing Officer

Monday, 11 May 2015

360° Capacity Management - What kinds of data and information are needed to provide views from every angle (6 of 7)


In previous parts of my series, we talked about 360° Capacity Management and why it’s crucial to look at capacity management from every possible angle.  Today, I wanted to talk a bit about what kinds of data and information are needed to provide views from every angle.
Obviously, capturing server and mainframe resource performance and capacity data is crucial, and the ability to store that data historically to identify trends (as well as peaks and valleys) is equally important.  If a server or mainframe is causing a performance and capacity problem with a service (or soon will), it’s important that such a bottleneck be removed as quickly as possible.  Without proper data, it’s impossible to be proactive in removing potential bottlenecks.
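As a simple example of being proactive with that historical data, the sketch below fits a straight-line trend to monthly peak CPU utilization and projects when it will cross a threshold.  The monthly peaks and the 85% threshold are invented, and real forecasting would also account for seasonality and business drivers.

```python
# Sketch: projecting when a utilization trend crosses a capacity threshold,
# using ordinary least squares on invented monthly peak CPU figures.

peaks = [52, 55, 57, 61, 63, 66, 70, 72]  # monthly peak CPU %, oldest first
THRESHOLD = 85.0

n = len(peaks)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(peaks) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, peaks))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

# Months from now until the fitted line reaches the threshold
months_left = (THRESHOLD - intercept) / slope - (n - 1)
print(f"Trend: +{slope:.1f}% per month; "
      f"~{months_left:.0f} months until {THRESHOLD:.0f}% is reached")
```

A projection like this turns "we might run out" into "we have about four months", which is what lets purchases be delayed until they are truly needed.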

And yet server and mainframe resource data is only the tip of the iceberg.
Centralized storage and high-speed data networks are vital resources when it comes to providing today’s services.  Servers and mainframes can all have adequate capacity, but if there’s a bottleneck within the storage devices or in the networks, service level agreements will not be met and customers will be unhappy. 

In the past, capacity managers would simply say, “That’s not my department – we have storage and network teams that handle those issues.” 

Unfortunately, the customers and the end-users don't know why their service is performing poorly – they just know that it is.  And for Capacity Management to operate at the higher levels (as described within ITIL) – Service Capacity Management and Business Capacity Management – it’s vital that capacity management take a level of responsibility for considering all the resources that combine to make a service meet SLAs – not just the servers and mainframes.

The key to handling varied types of data is the ability to bring it all into one central Capacity Management Information System for quick analysis, reporting, trending, and alerting.   A good Capacity Management solution will have this capability built in.  We’ll talk a bit about this in the final part of my series.
Beyond servers, mainframes, networks, and storage, there are other types of data needed to provide 360° Capacity Management.  I’ll touch on those on Wednesday in the final installment of my series.

http://www.metron-athene.com/products/athene/datacapture/index.html

Rich Fronheiser
Chief Marketing Officer

Friday, 8 May 2015

360° Capacity Management – looking at capacity and performance from every angle (5 of 7)


In the last installment, we talked about 360° Capacity Management – looking at capacity and performance from every angle.  It sounds like a daunting task, but it isn’t, as long as you have a way to get disparate sets of capacity and performance data into an easy-to-mine data repository.
And, not to dig too deeply into the nuts and bolts, there can be a lot of different types of data – from mainframes, servers, storage, networks, databases, applications, facilities, and key end-user experience metrics that tell your analysts whether or not your customers should be happy.

ITIL refers to this repository (or set of repositories) as the Capacity Management Information System, or CMIS.  It’s a key component of 360° Capacity Management because the views from every angle not only need to be available, they need to be available historically – you need to know how things are changing over time.
A key requirement of a practical CMIS is the ability to import data from a wide variety of sources in an easy, automated way.  Metron has always believed that this is a key feature of a quality CMIS and therefore builds this capability into athene® -- our flagship Capacity Management solution.

Regardless of which Capacity Management solution you use, the ability to bring data from these disparate sources into a single pane of glass is vital.  Automated reports can give a day-to-day and a historical picture of your capacity and performance from every possible angle.  Having these different angles available at any given moment allows the analyst to quickly determine the root cause of performance and capacity incidents.  The historical data as well as some predictive trending and modeling tools help the analyst be proactive in order to minimize these types of incidents going forward. 
Minimizing capacity issues while also right-sizing an environment saves real money – both by not losing customers to capacity and performance incidents and by delaying purchases until they are truly needed.


On Monday we’ll take a closer look at the types of data you’ll need and give you some real-world examples of products that can provide these types of data.

Rich Fronheiser
Chief Marketing Officer

Wednesday, 6 May 2015

The current IT challenge - do a lot more with a lot less (4 of 7)

Recently I heard from a family member that her company’s IT organization was ordered to cut costs by 20% across the board.  The organization was still expected to support a new, important initiative and meet the business’s requirements, though. The company’s motto appears to be “do a lot more with a lot less.”

There’s no doubt that today’s business environment is challenging.  Money is tight and everyone in IT is being asked to do more with less.  Sounds easy, right?  After all, computing and IT began as a labor-saving (and cost-saving) activity.
In practice, however, it’s hard to save money in a world where technology is constantly evolving – new technology and services requiring more powerful hardware, greater overall spending on software, and investment in people and skills that constantly need updating.

Even as these costs continue to increase, budgets have shrunk, making it very difficult to meet the needs of a more demanding group of customers.  How can one expect to stay current, stay within a shrinking budget, and still meet required service levels?

Proper Capacity Management is the key to ensuring that you are utilizing your IT resources in the most cost-effective way possible.  Many of you reading this blog will be nodding along and will feel quite certain that you have this area well under control.  It’s likely that many of you have plenty of tools at your disposal and performance and capacity data is available across the enterprise.
It’s a complex world now, though.  A transaction can span many layers, including the Internet, along with dozens of servers, databases, networks, and data warehouses.  This is common in the age of virtualization and cloud computing.  Are you sure you have every necessary view into a service so that you can continue to provide outstanding service while controlling costs?

A broader perspective on Capacity Management is needed -- 360° Capacity Management, where the capacity manager looks at service quality from all possible angles, making sure everything is looked at and nothing is missed.  In the final two installments, we’ll talk about how 360° Capacity Management can provide one window on capacity and how bringing all those disparate pieces together can help control costs while providing continued outstanding service to your customers.
On Friday I'll talk a bit about what kinds of data and information are needed to provide these views from every angle.  We'll be running our next webinar, 'Telling the Capacity Management Story', this month -- be sure to sign up: http://www.metron-athene.com/services/training/webinars/index.html
Rich Fronheiser
Chief Marketing Officer





Monday, 4 May 2015

Capacity Management: Is the mainframe still your most mission-critical server? (2 of 7)


A friend of mine bought a 20-foot boat last summer so he could take his family tubing, water skiing, and fishing.  It’s fast, fairly easy to operate, and relatively inexpensive to fix and maintain.  One person can do most of the tasks required to get the boat in and out of the water and it doesn’t take long to learn how to do most tasks involving a small craft like his.

This summer, my family is planning on spending a week’s vacation on a cruise liner.  In doing some research, I found that the ship, at top speed, is only half as fast as my friend’s 20-foot boat, and common sense (and a very long 1997 movie) tells me that the ship can’t turn nearly as quickly in the water.   And it takes thousands of people to make a cruise ship do what it does, from deck hands and entertainment staff all the way up to the ship’s captain.

In other words, a novice may be able to keep a speedboat running, but that doesn’t mean he knows how to operate a cruise liner.  But operated well, a cruise liner is a smooth-running vessel that will make satisfied customers of thousands of people.

Think of the mainframe as that cruise liner.  Many organizations have invested a lot of time and money into making sure that the mainframes are operating at peak efficiency – much of that knowledge, however, is held within long-tenured, very experienced employees who may be thinking about buying their own fishing boats and heading into retirement.

More specifically, does the end of working life for the generation of ‘baby boomers’ mean problems for the many businesses for which the mainframe is still the most mission-critical server?


The story sounds dire, but it doesn’t have to be.  We’ll pick up on Wednesday and talk about a way to keep crucial mainframe expertise in place even when your mainframe experts are no longer an email or phone call away.

Rich Fronheiser
Chief Marketing Officer

Friday, 1 May 2015

Capacity Management - the view from the top (1 of 7)


Throughout the year, I get the opportunity to talk to many IT professionals.  Some of them are quite seasoned and some of them are recent graduates filling one of their first professional roles.  Some are junior analysts, others are senior technicians and planners, and a few are the managers and executives of the organization.

The conversations I have with those people can be quite different, depending on their roles with the organization. 

When I talk to technicians, I tend to talk more about the product and how the effective use of a Capacity Management product can make them more efficient and able to achieve results more quickly and easily.

When I talk to a senior manager, VP, or CIO, I’m focused more on how we can help them achieve business objectives – a subtle difference, but an important one.  Senior managers in general are very focused on outcomes and less focused on the mechanisms for achieving those outcomes.

One of my favorite conversations with a CIO resulted in him telling me that anyone he talks to must answer the question, “How will doing business with you save me money?”  And that CIO was right – any investment that a company makes in software, hardware, or other technology has to, in one way or another, save or make the company money.

This blog series will focus on the CIO, or any senior strategic decision-maker, in an organization that relies on mainframes to perform critical business functions.
Come back on Monday and we’ll talk about a problem faced by many CIOs and other decision-makers in mainframe organizations – the aging and retirements of valued baby-boomer employees who hold much of the knowledge about how to keep the mainframe (and hence much of IT) running at peak effectiveness.




Rich Fronheiser

Chief Marketing Officer