Wednesday, 27 May 2015

The Changing Face of Capacity Management - Private Clouds (3 of 5)

Looking at the present and future of Capacity Management, it's clear that managing cloud environments is incredibly important as more organizations decide to move much of their computing to the cloud.

The first type of cloud I want to cover is the private cloud.  

In many cases, a private cloud implementation involves organizations using virtualization and other technologies in-house that public cloud providers use when delivering their services.  In a traditional cloud implementation, services are delivered via the Internet.  In a private cloud, services may be delivered internally by other means.

For example, an organization could decide that it wants to change how it manages its Windows and Linux estate.  The company decides to invest in VMware and turn all of its existing physical servers into virtual machines, managed centrally by a VMware administration team using much of the automation VMware builds into vSphere.

Sounds good, right?

Well, one of the arguments for cloud computing is that clouds relieve the organization of day-to-day management and computing becomes much more of a utility (turn on the switch and it just works).  In private clouds that are implemented in-house, none of this is true.  Companies have to buy, build, and manage the environments, and also deal with the complexity of having many applications running simultaneously in a virtual environment.

Still, companies feel that this is a good investment and, in many cases, so do I.  However, it's just as crucial, probably more so, that the environment be properly planned and managed.  With a typical application running on its own server, if the server runs into capacity and performance problems, only one application or service is affected.  In a private cloud, a shortage of capacity could affect all of the applications and services running within that cloud.

As of right now, most companies that are implementing virtualization technologies internally (and are taking advantage of technologies that allow for the rapid and seamless deployment and reallocation of resources) are setting up their own private clouds.
 
On Friday I'll deal with some of the things that need to be considered when looking at a private cloud.


Rich Fronheiser
Chief Marketing Officer

Friday, 17 April 2015

Old Habits & Potential Risks (9 of 10)

Gartner stated that "through to 2015, more than 70% of private Cloud implementations will fail to deliver operational energy and environmental efficiencies".

(Extract from "Does Cloud Computing Have a 'Green' Lining?" – Gartner Research, 2010)

Why?  Well, Gartner is referring to the need to implement an organizational structure with well-defined roles and responsibilities to manage effective governance and implementation of services; well-defined and standardized processes (such as change management and capacity management); and a well-defined and standardized IT environment. Cloud-computing technology (such as service governors, automation technology and platforms) is itself still evolving, and thus the efficiency promises will not be delivered immediately.

"Old habits" include departmental infrastructure or application hugging. Within an organisation what you tend to find most often is departments develop a "silo mentality", i.e. mine is mine.  They become afraid to share infrastructure through a lack of trust and confidence, because they fear an impact on their own services they adopt the we need to protect ourselves from everybody else approach.

The problem with this attitude is that it can lead us back to the "just in case" capacity mindset, where you always end up over-provisioning systems.  By using effective capacity management techniques combined with the functionality that vSphere provides, you can get this right: sizing and provisioning more accurately and avoiding over-provisioning.  Performing application sizing at the first level will help you get the most efficient use out of your infrastructure and ultimately achieve that equilibrium between service and cost.

So how can we "Avoid an Internal Storm" and "Ensure a Brighter Outlook"? 

Firstly, ask yourself this question - What is different with Cloud Computing, in terms of Capacity Management, compared with how we've always done it?   

Typically, we still need to apply the same capacity management principles, such as getting the necessary information at the Business, Service and Component levels.  But we have to take into consideration the likelihood that the Cloud is underpinned by Virtualization and, more specifically, the use of resource pools.   Therefore, in this case, we need to be aware of what Limits, Shares and Reservations are set and which VMs are running in which pools in our cluster(s).  Earlier we displayed a chart of a priority guest resource pool and the CPU usage of the guests within it.  We need to identify limits.  How much are our virtual machines allowed to use?  Do they have a ceiling?  What is the limit?  When we talk about utilization, is it utilization against a specific CPU limit, for example?

Do they have higher priority shares than others?  What about any guarantees? Are they guaranteed resources because they may be high-priority guests? And are there any CPU affinities assigned?
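
To make this concrete, here is a minimal sketch of how those settings could be pulled out of vCenter using pyVmomi, the vSphere Python SDK. The hostname and credentials are placeholders and error handling is omitted; treat it as a starting point for your own collector, not a finished tool.

# Sketch: list the Limit/Share/Reservation settings for every resource pool,
# plus which VMs are running in each pool. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ResourcePool], True)

for pool in view.view:
    cpu = pool.config.cpuAllocation      # vim.ResourceAllocationInfo; limit of -1 means unlimited
    mem = pool.config.memoryAllocation
    print("Pool:", pool.name)
    print("  CPU: limit=%s MHz reservation=%s MHz shares=%s (%s)"
          % (cpu.limit, cpu.reservation, cpu.shares.shares, cpu.shares.level))
    print("  Mem: limit=%s MB reservation=%s MB shares=%s (%s)"
          % (mem.limit, mem.reservation, mem.shares.shares, mem.shares.level))
    print("  VMs:", [vm.name for vm in pool.vm])

view.Destroy()
Disconnect(si)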

The information we need:

·         Business – how many users is the service supporting? What resources are required? Are we likely to experience growth in this service?  If so, by how much and when – monthly, quarterly or annually?

·         Service – have the Service Level Requirements (SLRs) been defined and/or agreed (SLA), and what do they entail?  What, if any, are the penalties for not meeting the terms stated in the SLA?

·         Component – gather and store the configuration, performance and application data from the systems and applications hosting the service in a centralized database that can be readily and easily accessed, providing the key evidence on whether services satisfy, and will in future satisfy, the requirements stated in the SLAs.

ITIL v3 Capacity Management explains the kind of information you should be gathering at these levels.  Having this information enables us to get the full picture of what is currently happening and allows us to forecast and plan ahead based on the business growth plans for the future.
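
To make the forecasting step concrete, here is a toy sketch that projects a component's utilization forward under a stated business growth rate and reports when it would breach a headroom threshold. The 80% threshold and three-year horizon are assumptions for illustration only.

# Toy capacity forecast: project utilization forward under compound growth
# and report the first month it crosses an assumed 80% headroom threshold.
from typing import Optional

def months_until_breach(current_util: float, monthly_growth: float,
                        threshold: float = 0.80, horizon: int = 36) -> Optional[int]:
    """Return the first month utilization exceeds the threshold, or None."""
    util = current_util
    for month in range(1, horizon + 1):
        util *= 1.0 + monthly_growth
        if util >= threshold:
            return month
    return None

# A component 45% busy today, with the business forecasting 5% growth per month:
print(months_until_breach(0.45, 0.05))   # -> 12, i.e. about a year of headroom left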

Once we have all of the required information, we can implement automatic reporting and alerting, and that’s what I’ll be covering in the last of my series on Monday...

Jamie Baker
Principal Consultant

Friday, 3 April 2015

Virtualization underpins Cloud Computing, through resource pooling and rapid elasticity (3 of 10)

As mentioned previously, to be considered a cloud, a service must be On Demand and provide Resource Pooling and Rapid Elasticity.  And whilst Virtualization in general can provide the majority of these features, the difference is that a private cloud using internet-based technologies can actually provide the mechanism for end users to self-provision virtual systems. Think of this as the ability to self-check-in at an airport or print your boarding pass from home. 
You log in to a browser with a reference code and plug in some personal details and print, or walk up to a screen, enter a few details and, voila, out pops the boarding card and off you go (business or hand luggage only; otherwise it's off to bag drop first) - virtually eliminating the need to queue at a check-in desk to get your boarding card.

But it's more than just virtualizing systems and hosting them internally; it's about giving control to the end user.

Now, of course, some administrators may wince after reading this, but there are ways in which self-provisioned systems can be controlled by using the virtualization technology that underpins cloud technology.  Using resource pools within your private cloud gives you the ability to control resources via limits, shares and/or reservations, so you can specify the amount of resources that users are allowed to provision.  These control settings can also be changed very quickly to increase or decrease the amount of resources available within that pool.  This helps prevent over-specification and VM sprawl.
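
As an illustration of how quickly those controls can be changed, here is a minimal pyVmomi sketch that tightens the CPU and memory caps on a pool. The vCenter details, the pool name "Dev-Pool" and the numbers are all invented for the example; the call at the end is the standard ResourcePool.UpdateConfig operation in the vSphere API.

# Sketch: cap a resource pool's CPU and memory with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ResourcePool], True)
pool = next(p for p in view.view if p.name == "Dev-Pool")   # hypothetical pool name

spec = vim.ResourceConfigSpec()
spec.cpuAllocation = vim.ResourceAllocationInfo(
    limit=8000,                  # MHz ceiling; -1 would mean unlimited
    reservation=2000,            # MHz guaranteed to the pool
    expandableReservation=False, # the pool cannot borrow from its parent
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=4000))
spec.memoryAllocation = vim.ResourceAllocationInfo(
    limit=16384,                 # MB ceiling
    reservation=4096,            # MB guaranteed
    expandableReservation=False,
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=163840))

pool.UpdateConfig(name=None, config=spec)   # apply the new controls
Disconnect(si)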

Another way to control resource deployment and/or usage is to charge internally.  Users and their departments will soon rein back on creating over-provisioned systems if they are charged on their system configuration rather than just on actual usage.   
It can be quite difficult to implement some form of internal charging. What do you charge in?  Maybe by utilizing project codes or some other internal monetary system?
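
A toy example of the incentive at work: under configuration-based charging, an over-sized but mostly idle VM costs its owner as much as a busy one, which is exactly what pushes departments to right-size. The rates below are invented purely for illustration.

# Hypothetical internal chargeback: configuration-based vs usage-based billing.
RATE_PER_VCPU_MONTH = 15.0   # assumed internal rate per configured vCPU
RATE_PER_GB_MONTH = 8.0      # assumed internal rate per configured GB of RAM

def config_charge(vcpus: int, ram_gb: int) -> float:
    """Charge for what was provisioned, regardless of how much is used."""
    return vcpus * RATE_PER_VCPU_MONTH + ram_gb * RATE_PER_GB_MONTH

def usage_charge(vcpus: int, ram_gb: int, cpu_util: float, mem_util: float) -> float:
    """Charge only for average measured consumption."""
    return (vcpus * cpu_util * RATE_PER_VCPU_MONTH
            + ram_gb * mem_util * RATE_PER_GB_MONTH)

# An over-provisioned development VM, 5% CPU busy and 20% memory used:
print(config_charge(16, 64))              # 752.0  - what the owner pays
print(usage_charge(16, 64, 0.05, 0.20))   # 114.4  - what the VM actually consumes
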
On Monday I’ll be looking at Capacity on demand and how you can get your sizing right.

Jamie Baker
Principal Consultant

Wednesday, 1 April 2015

When we refer to a "cloud" what is it that we actually mean? (2 of 10)

We know that the cloud provides computing resources for customers to use, and these resources are then charged for accordingly.
Cloud providers deliver and manage services by using applications such as VMware's vCloud Director.  These cloud applications provide benefits such as:

·         Increased business agility, by empowering users to deploy pre-configured services or custom-built services with the click of a button

·         Maintained security and control over a multi-tenant environment, with policy-based user controls and security technologies

·         Reduced costs, by efficiently delivering resources to internal organizations as virtual datacenters to increase consolidation and simplify management

So, to be considered a Cloud, it must be:

·         On Demand - cloud-aware applications can in most cases automatically self-provision resources for themselves and release them back as necessary. 

·         Resource Pooling - freeing up unused resources provides the ability to move these resources between different consumers’ workloads, thus quickly and effectively satisfying demand.

·         Rapid Elasticity - rapid means within seconds to minutes (not days).  In a Virtual Cloud Environment, scaling out or in also covers the ability to provision new ESX hosts, rather than just new virtual machines.

Virtualization technology encompasses these three requirements and underpins Cloud Computing.

Many businesses are now using these advantages to move away from overinvestment in rigid, legacy-based technology and towards cloud-based services: highly scalable, technology-enabled computing consumed over a network on an as-needed basis.

Cloud Types

Cloud types provide the “computing as a service” model to help reduce IT costs and make your technology environment a responsive, agile, service-based system.  These Cloud "types", or Service Delivery Models, are commonly known as:

·         Software as a Service (SaaS) - External Service Providers (ESPs) such as Amazon can provide access to a specific software application, e.g. business mail or a service desk, and charge as necessary.  You would access this application via a "thin client", typically a web browser.

·         Platform as a Service (PaaS) - This enables you to deploy supported applications into the Cloud with some degree of control over what environment settings are required.  You do not have any control over the resources provided to host these applications.

·         Infrastructure as a Service (IaaS) - This provides the ability to provision your own resources, and you have full control over what operating systems, environment settings and applications are deployed.  The cloud provider still retains management and control of the underlying physical infrastructure.

The Cloud or "Clouds" as we know them are categorized by location and ownership, typically referred to as Public / Private or Internal or External clouds.  In addition there are Community and Hybrid clouds whereby a Community share the cloud and are bound by a common concern or interest and Hybrid where you have a composition of two or more Private or Public clouds.  This allows for data and application portability between clouds to occur.  VMware introduced the vApps functionality specifically for this.

Most organisations will tend to lean towards having exclusive "Internal" cloud services and possibly "Hybrid" cloud services (a mixture of Public and Private clouds).  You may find that critical or data-sensitive applications are always kept within the organisation, while in some cases Testing and Development applications are ported to the Public Cloud where it is more cost-effective to do so.  There may also be use of SaaS within the organisation, which would be external to the business.

Just to reiterate: Virtualization underpins Cloud Computing through resource pooling and rapid elasticity.  To avoid any confusion, I will be explaining on Friday the primary difference between, say, a Private or Internal Cloud and plain Virtualization.
In the meantime, register for my next webinar, 'Understanding VMware Capacity': http://www.metron-athene.com/services/training/webinars/index.html


Jamie Baker
Principal Consultant

Monday, 30 March 2015

VMware vSphere – avoiding an Internal Storm (1 of 10)

Traditionally, within the Distributed Computing world, single or multiple applications would be hosted on individual physical servers, each with its own operating system (typically Windows or UNIX/Linux).  Then Virtualization was reborn in the x86 environment (a note to my Mainframe friends: we know Virtualization was first born on the mainframe), allowing multiple "virtual systems" to be hosted on a single physical server by using hypervisor software.  As virtualization software developed further, notably from VMware, currently the market leader in x86 virtualization technology, we became able to cluster virtual systems together to create shared pools of resources across the virtual infrastructure.

Why is this important? 

Virtualization underpins Cloud Computing by presenting computing resources to users (or clients) through these shared pools of resources (Resource Pools) and controlling their usage.  However, it is not just the ability to provide resources and control usage; Virtualization also provides two key components of Cloud Computing:

·         Autonomic Computing

·         Utility Computing

vSphere incorporates Autonomic Computing by automating the control of running applications and systems.  Using vMotion and DRS, it can automate the migration of virtual machines to alternative ESX hosts within the same cluster if a specific ESX host becomes unbalanced due to excessive resource demand on that host.
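
For reference, a cluster's DRS automation settings can be inspected programmatically. Below is a minimal read-only sketch using pyVmomi; the vCenter address and credentials are placeholders, and the fields shown assume a DRS-enabled cluster.

# Sketch: report each cluster's DRS settings (enabled, automation level,
# migration threshold). Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)

for cluster in view.view:
    drs = cluster.configurationEx.drsConfig
    print(cluster.name, "DRS enabled:", drs.enabled,
          "| automation:", drs.defaultVmBehavior,
          "| migration threshold:", drs.vmotionRate)

view.Destroy()
Disconnect(si)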

Utility Computing allows Cloud providers to provision computing resources and infrastructure to customers and charge them either for their specific usage or for their chosen configuration at a flat rate.

In this series I’ll be looking at VMware vSphere, how it underpins Cloud Computing and how you can use it to best advantage. I’ll start by examining the definition of Cloud on Wednesday...

Jamie Baker
Principal Consultant