Normally, a capacity manager would expect to capture a
full “business cycle” worth of data in order to determine the peak periods and the
proper modeling intervals. Without that,
the capacity manager would need to gather information about the application through
interviews with the business side and rely on that information instead.
I had data for only about a month, so I needed to use it in combination with the
information from the subsequent interviews; in some cases, my team and I had to
model the annual peaks based on the data we had captured plus what we learned
in those interviews.
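As a rough illustration of that kind of extrapolation, here's a minimal sketch of scaling an observed monthly peak to an estimated annual peak using uplift and growth factors taken from interviews. All of the numbers here are hypothetical, not the client's actual figures:

```python
# Illustrative only: scale an observed monthly peak to an estimated
# annual peak using factors from business interviews. All numbers
# are hypothetical.

weekly_peaks = [412, 388, 455, 503]   # peak-hour transactions/sec, one per week
observed_peak = max(weekly_peaks)     # best peak seen in ~a month of data

# From interviews: the annual peak (e.g. year-end close) is expected to
# run roughly 2.5x a typical month's peak, with ~10% annual growth on top.
seasonal_uplift = 2.5
annual_growth = 1.10

estimated_annual_peak = observed_peak * seasonal_uplift * annual_growth
print(f"Observed monthly peak: {observed_peak} tx/sec")
print(f"Estimated annual peak: {estimated_annual_peak:.0f} tx/sec")
```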
Sample interview questions to ask are:
• What are the current business volumes?
• What are the predicted business volumes?
• What is the application architecture?
• Are there any predicted changes to the business, the business volumes, the architecture, or anything else that will change the nature of the application or the infrastructure?
Most of the time in the interview process was spent
getting to know how the applications worked.
These applications were very complex; many of them spanned three or four tiers
and included the mainframe or a large data warehouse backend.
It was important to understand how the web servers,
for instance, interacted with the middleware or the application servers, and how
workloads at one tier affected traffic to the database servers or to the
mainframe.
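To make that concrete, here's a hedged sketch of the tier-to-tier arithmetic involved. The call ratios are invented for illustration; in the real project, numbers like these came out of the architecture interviews:

```python
# Hypothetical call ratios showing how front-end load propagates
# through the tiers of a multi-tier application.

web_requests_per_sec = 200.0

app_calls_per_web_request = 1.5     # middleware/app-server calls per web hit
db_calls_per_app_call = 3.0         # database queries per app-server call
mainframe_calls_per_app_call = 0.2  # mainframe transactions per app-server call

app_calls = web_requests_per_sec * app_calls_per_web_request
db_calls = app_calls * db_calls_per_app_call
mainframe_calls = app_calls * mainframe_calls_per_app_call

print(f"App tier: {app_calls:.0f}/s, database: {db_calls:.0f}/s, "
      f"mainframe: {mainframe_calls:.0f}/s")
```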
This process was incredibly useful in giving us enough
background to make a month’s worth of data adequate (if not ideal) for
making decision-support recommendations.
However, this process was not without pain, which we’ll investigate
shortly…
Moving forward…
As part of the scoping exercise done prior to the start
of the formal project, applications were prioritized by the Capacity
Manager. This allowed me to group the
applications in a meaningful way and assign them to consultants who then took
on primary responsibility for completing the work.
Once the consultants received their assignments, they
pored over the data and the application architecture diagrams and, as would
be expected, found some gaps in the information or points that
needed clarification. Additional meetings
were held and more information was gathered.
Remote access was provided to the CDB/CMIS so the
consultants could work remotely on the project and have the most recent data
from the systems.
Other data was gathered from existing sources,
including data from the SAN as well as network statistics and some native
virtualization statistics.
Modeling was completed for each of the applications;
for some applications trending was used, while for others analytic modeling was
used.
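For readers unfamiliar with the two approaches, here's a short sketch of each in the generic capacity-planning sense: a straight-line trend projection and a simple M/M/1 queueing approximation. This is illustrative only, not the project's actual tooling:

```python
# Two generic capacity-planning techniques, sketched for illustration.
import numpy as np

# Trending: fit a line to ~a month of weekly CPU utilization samples
# and project it forward.
weeks = np.arange(1, 5)
cpu_util = np.array([0.42, 0.45, 0.47, 0.51])
slope, intercept = np.polyfit(weeks, cpu_util, 1)
print(f"Projected CPU at week 12: {slope * 12 + intercept:.0%}")

# Analytic modeling: a simple M/M/1 queueing approximation.
service_time = 0.020                       # seconds of CPU per transaction
arrival_rate = 40.0                        # transactions per second
utilization = arrival_rate * service_time  # rho = lambda * S
response_time = service_time / (1 - utilization)
print(f"Utilization: {utilization:.0%}, response time: {response_time * 1000:.0f} ms")
```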
A standard report template was used to provide a
common look and feel for each of the applications. These draft reports were delivered as early as
possible to the Capacity Manager and the staff so that a sanity
check could be applied. One common
feeling among the consultants was that it would have been preferable for those doing
the work to have captured the information themselves. I
certainly agree, but this was one of the ways I tried to minimize the project
preparation time.
Once the drafts were approved or additional
information was provided, final reports were written and delivered to
management (mainly the CM and the mid-level managers).
I’ll be
dealing with how we summarized the results on Monday.
Rich
Fronheiser
Chief Marketing Officer