Gartner stated that "through to 2015, more than 70% of private Cloud implementations will fail to deliver operational energy and environmental efficiencies" (extract from "Does Cloud Computing Have a 'Green' Lining?", Gartner Research, 2010).
Why? Well, Gartner is referring to the need to implement an organizational structure with well-defined roles and responsibilities to manage effective governance and implementation of services, well-defined and standardized processes (such as change management and capacity management), and a well-defined and standardized IT environment. Cloud-computing technology itself (such as service governors, automation technology and platforms) is still evolving, and so the efficiency promises will not be delivered immediately.
"Old habits" include departmental infrastructure or application hugging. Within an organisation what you tend to find most often is departments develop a "silo mentality", i.e. mine is mine. They become afraid to share infrastructure through a lack of trust and confidence, because they fear an impact on their own services they adopt the we need to protect ourselves from everybody else approach.
The problem with this attitude is that it can lead us back to the "just in case" capacity mindset, where you end up always over-provisioning systems. By using effective capacity management techniques and combining them with the functionality that vSphere provides, you can get that right, sizing and provisioning more accurately and avoiding over-provisioning. Performing application sizing at the first level will help you get the most efficient use possible out of your infrastructure and ultimately achieve that equilibrium between service and cost.
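As a rough illustration of that first-level sizing arithmetic, here is a minimal Python sketch. The user counts, per-user demand figures and headroom below are purely hypothetical placeholders; in practice you would substitute measured demand and an agreed headroom from your own capacity data.

```python
# Rough first-level application sizing sketch.
# All figures are hypothetical placeholders, not measured values.

PEAK_CONCURRENT_USERS = 400   # from business forecasts
CPU_MHZ_PER_USER = 12         # measured average CPU demand per user
MEM_MB_PER_USER = 48          # measured average memory demand per user
HEADROOM = 0.25               # agreed growth/burst headroom (25%)

cpu_required_mhz = PEAK_CONCURRENT_USERS * CPU_MHZ_PER_USER * (1 + HEADROOM)
mem_required_mb = PEAK_CONCURRENT_USERS * MEM_MB_PER_USER * (1 + HEADROOM)

print(f"CPU to provision: {cpu_required_mhz:.0f} MHz")
print(f"Memory to provision: {mem_required_mb:.0f} MB")
```

Sizing from measured per-user demand plus an explicit, agreed headroom keeps the provisioning decision visible and reviewable, rather than burying a "just in case" margin in the estimate.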
So how can we "Avoid an Internal Storm" and "Ensure a Brighter Outlook"?
Firstly, ask yourself this question: what is different about Cloud Computing, in terms of Capacity Management, compared with how we've always done it?
Typically, we still need to apply the same capacity management principles, such as getting the necessary information at the Business, Service and Component levels. But we have to take into consideration the likelihood that the Cloud is underpinned by virtualization and, more specifically, the use of resource pools. Therefore, in this case, we need to be aware of what Limits, Shares and Reservations are set, and which VMs are running in which pools in our cluster(s). Earlier we displayed a chart of a priority guest resource pool and the CPU usage of the guests within it. We need to identify limits. How much are our virtual machines allowed to use? Do they have a ceiling? What is the limit? When we talk about utilization, is it utilization against a specific CPU limit, for example?
Do they have higher priority shares than others? What about any guarantees? Are they guaranteed resources because they may be high-priority guests? And are there any CPU affinities assigned?
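One way to gather that resource pool information is programmatically. Below is a minimal Python sketch using pyVmomi that walks the resource pools in a vCenter and prints each pool's CPU limit, reservation and shares, along with the VMs in it. The vCenter address and credentials are placeholders, and you would adapt the output (and add the memory settings) to suit your own reporting.

```python
# Minimal pyVmomi sketch: list each resource pool's CPU limit, reservation
# and shares, plus the VMs running in it. Host, user and password below
# are placeholders; requires the pyvmomi package and vCenter access.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="readonly@vsphere.local",
                  pwd="changeme",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ResourcePool], True)
    for pool in view.view:
        cpu = pool.config.cpuAllocation
        print(f"Pool: {pool.name}")
        print(f"  CPU limit: {cpu.limit} MHz (-1 = unlimited)")
        print(f"  CPU reservation: {cpu.reservation} MHz")
        print(f"  CPU shares: {cpu.shares.shares} ({cpu.shares.level})")
        for vm in pool.vm:
            print(f"    VM: {vm.name}")
finally:
    Disconnect(si)
```

A read-only account is enough here, since the sketch only inspects the allocation settings rather than changing them.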
Information we need:
• Business – how many users is the service supporting? What resources are required? Are we likely to experience growth in this service? If so, by how much and when? Monthly, quarterly or annually?
• Service – have the Service Level Requirements (SLR) been defined and/or have they been agreed (SLA), and what do they entail? What, if any, are the penalties for not meeting the terms stated in the SLA?
• Component – gather and store the configuration, performance and application data from the systems and applications hosting the service in a centralized database that can be readily and easily accessed, providing the key evidence on whether services are, and will in future be, satisfying the requirements stated in the SLAs (see the sketch after this list).
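As a very simple illustration of the centralized database idea in the Component point above, here is a Python sketch that stores component-level observations in a local SQLite table. The table layout, names and sample figures are hypothetical; a real capacity database would hold far more detail.

```python
# Minimal sketch of a centralized store for component-level capacity data.
# The schema and the sample row are hypothetical illustrations only.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("capacity.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS component_metrics (
        collected_at TEXT NOT NULL,   -- ISO 8601 timestamp
        host         TEXT NOT NULL,   -- VM or ESXi host name
        service      TEXT NOT NULL,   -- business service the host supports
        metric       TEXT NOT NULL,   -- e.g. 'cpu_used_mhz', 'mem_used_mb'
        value        REAL NOT NULL
    )
""")

# Insert one sample observation (values are made up for illustration).
conn.execute(
    "INSERT INTO component_metrics VALUES (?, ?, ?, ?, ?)",
    (datetime.now(timezone.utc).isoformat(), "vm-web-01",
     "Online Ordering", "cpu_used_mhz", 1840.0),
)
conn.commit()

# Example query: average CPU used per service, ready for SLA reporting.
for service, avg_cpu in conn.execute(
    "SELECT service, AVG(value) FROM component_metrics "
    "WHERE metric = 'cpu_used_mhz' GROUP BY service"
):
    print(service, round(avg_cpu, 1), "MHz average")
conn.close()
```

The point is simply that the data sits in one queryable place, so the same store can answer business, service and component questions rather than each team keeping its own spreadsheets.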
ITIL v3 Capacity Management explains the kind of information you should be gathering at these levels. Having this information enables us to get the full picture of what is currently happening and allows us to forecast and plan ahead based on the business growth plans for the future. Once we have all of the required information, we can implement automatic reporting and alerting, and that's what I'll be covering in the last of my series on Monday....
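To give a feel for the kind of forward look that data enables, here is a small Python sketch that projects current CPU demand forward under an assumed quarterly growth rate. The starting demand, growth rate and capacity figures are invented for illustration; real forecasts would be driven by the stored historical data and the business's own growth plans.

```python
# Illustrative forecast: project current CPU demand forward using a
# business growth assumption. All figures are hypothetical.
current_cpu_mhz = 6200.0       # current peak demand, e.g. from the database
quarterly_growth = 0.08        # 8% growth per quarter, per business plan
cluster_capacity_mhz = 9600.0  # usable capacity after HA headroom

demand = current_cpu_mhz
for quarter in range(1, 9):    # look ahead two years
    demand *= 1 + quarterly_growth
    flag = "  <-- exceeds capacity" if demand > cluster_capacity_mhz else ""
    print(f"Q{quarter}: {demand:.0f} MHz{flag}")
```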
Jamie Baker
Principal Consultant