Business-critical and high-profile applications typically receive the most attention from capacity planners and performance analysts. While those applications will continue to receive much of the analyst and modeler attention, the remaining modeler cycles can be targeted at the systems that appear to need detailed analysis.
By building and reviewing trend exception reports and trend alerts, data center and business unit management can better prioritize modeling efforts toward the systems most likely to require upgrades or workload shifts in order to meet acceptable service levels in the future.
The process of analytic modeling includes selecting appropriate modeling intervals, building baseline models, calibrating the models, and developing what-if scenarios. Many books and CMG papers have been written on the subject.
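To make those steps concrete, here is a minimal sketch of a baseline analytic model and a what-if projection, using a simple single-server (M/M/1) queueing formula. The arrival rate, service time, and growth figure are purely illustrative assumptions, not data from this post or from any particular modeling tool.

```python
# Minimal sketch: calibrate a baseline queueing model against a measured
# interval, then run a what-if scenario on a business growth assumption.
# All figures below are hypothetical examples.

def mm1_response_time(arrival_rate, service_time):
    """Average response time for a single-server (M/M/1) queue.

    arrival_rate  -- transactions per second in the chosen modeling interval
    service_time  -- average service time per transaction, in seconds
    """
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        raise ValueError("Server saturated: utilization at or above 100%")
    return service_time / (1.0 - utilization)

# Baseline: calibrated to a measured peak interval (hypothetical figures).
baseline_rate = 40.0    # transactions per second
service_time = 0.015    # seconds of service per transaction

baseline_rt = mm1_response_time(baseline_rate, service_time)
print(f"Baseline response time: {baseline_rt * 1000:.1f} ms")

# What-if: the business metric of interest (transaction volume) grows 30%.
growth = 1.30
projected_rt = mm1_response_time(baseline_rate * growth, service_time)
print(f"Projected response time: {projected_rt * 1000:.1f} ms")
```

A real study would of course use a full modeling tool and workload characterization rather than a single formula, but the flow is the same: pick the interval, calibrate the baseline against measurements, then vary the business driver.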
Modeling, however, is not simply applying a favored modeling tool to a randomly chosen set of data. The key pieces of applying modeling techniques are knowing what modeling can (and cannot) provide and having (or having access to) in-depth knowledge of the application, along with a business metric of interest that will be the focal point of the modeling study.

I've trained many modelers using multiple modeling tools and I always have the same message – a tool will never replace your knowledge, your experience, your "feel" for how to complete a study. Performance management and capacity planning is a mindset, and successful modelers have that mindset – knowing what information is important and what information isn't – and also knowing how much fine-tuning is necessary in the modeling process. Having a reasonably accurate answer in a couple of hours is, in many instances, preferable to having a pinpoint-accurate answer that takes a few days or even a few weeks.
To download the full version of this blog visit http://www.metron-athene.com/_downloads/_documents/papers/too_many_servers.pdf
Rich Fronheiser, Consultant