First, we want to make the most efficient use of the cloud that we can while fully satisfying our SLAs. We need to control resource usage by employing shares, limits and, if needed, reservations at either the Resource Pool level or the Virtual Machine level. Employing a chargeback model can also control usage and keep the provisioning of unnecessary virtual machines (VM Sprawl) in check.
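To make the chargeback idea concrete, here is a minimal sketch of a per-owner billing calculation. The rates, VM records and owner names are purely illustrative assumptions, not real pricing; in practice the inputs would come from your inventory and monitoring tooling.

```python
# Minimal chargeback sketch: bill each owner for the resources their VMs
# are allocated. All rates and VM records below are illustrative.

ILLUSTRATIVE_RATES = {"vcpu": 20.0, "mem_gb": 5.0, "disk_gb": 0.10}  # per month

def monthly_charge(vm):
    """Return the monthly charge for one VM's allocated resources."""
    return (vm["vcpu"] * ILLUSTRATIVE_RATES["vcpu"]
            + vm["mem_gb"] * ILLUSTRATIVE_RATES["mem_gb"]
            + vm["disk_gb"] * ILLUSTRATIVE_RATES["disk_gb"])

# Hypothetical inventory; in reality this comes from vCenter or a CMDB.
vms = [
    {"name": "web01", "owner": "Sales", "vcpu": 2, "mem_gb": 4,  "disk_gb": 40},
    {"name": "db01",  "owner": "Sales", "vcpu": 4, "mem_gb": 16, "disk_gb": 200},
]

bill = {}
for vm in vms:
    bill[vm["owner"]] = bill.get(vm["owner"], 0.0) + monthly_charge(vm)
```

Even a simple model like this makes the cost of an idle or oversized VM visible to its owner, which is what discourages sprawl.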
Continuously monitor and tune the virtual infrastructure. This will bring the following benefits:
· Freeing up unused resources
· Ensuring that capacity is neither under- nor over-provisioned
· Identifying workloads for consolidation – including Idle Virtual Machines
· Load balancing Virtual Machines across hosts
· Identifying what and when ESX hosts can be powered down
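One of the bullets above, identifying idle virtual machines, can be sketched as a simple pass over utilisation samples. The thresholds and sample data here are assumptions for illustration; real monitoring products use richer signals over longer windows.

```python
# Sketch: flag idle VMs from utilisation samples.
# The thresholds below are assumed values, not official guidance.

IDLE_CPU_PCT = 5.0   # average CPU below this counts towards "idle"
IDLE_NET_KBPS = 1.0  # average network traffic below this counts towards "idle"

def find_idle_vms(stats):
    """stats: {vm_name: {"cpu_pct": [...], "net_kbps": [...]}} -> idle VM names."""
    idle = []
    for name, s in stats.items():
        avg_cpu = sum(s["cpu_pct"]) / len(s["cpu_pct"])
        avg_net = sum(s["net_kbps"]) / len(s["net_kbps"])
        if avg_cpu < IDLE_CPU_PCT and avg_net < IDLE_NET_KBPS:
            idle.append(name)
    return idle

# Hypothetical samples, e.g. hourly averages pulled from performance counters.
samples = {
    "app01":    {"cpu_pct": [40, 55, 60], "net_kbps": [120, 90, 200]},
    "old-test": {"cpu_pct": [1, 2, 1],    "net_kbps": [0.2, 0.1, 0.3]},
}
```

VMs flagged this way are candidates for consolidation or decommissioning, freeing resources back to the cluster.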
Ultimately we are striving for equilibrium between cost and service impact. If we cut costs too far, service can suffer significantly; conversely, if we overspend, the impact on service may be mitigated but we do not get value for money. With the correct processes and tools in place, striking the right balance is readily achievable.
Automation and Control
VMware provides the ability to provision VMs rapidly by using templates, which can be created for vanilla, domain-approved operating system builds. Further to this, virtual machines can be migrated between hosts either manually or automatically by the Distributed Resource Scheduler (DRS), based on rules set within the DRS cluster. DRS is enabled at the cluster level and offers automated, partially automated or manual load balancing: internal algorithms determine whether any one cluster member (ESX host) is struggling to meet the demands of its guests, and DRS can then either migrate virtual machines to other members of the cluster or recommend migration options.
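The load-balancing idea behind DRS can be illustrated with a greatly simplified greedy heuristic: while the busiest host is over a load threshold, move (or recommend moving) its smallest VM to the least-loaded host. This is a sketch of the general technique only; the real DRS algorithms, metrics and thresholds are internal to VMware and far more sophisticated.

```python
# Simplified DRS-style rebalancing sketch (assumed heuristic, not VMware's
# actual algorithm). Loads are fractions of a host's capacity.

def recommend_migrations(hosts, threshold=0.8):
    """hosts: {host: {vm: load_fraction}}. Returns a list of (vm, src, dst)."""
    moves = []
    loads = {h: sum(vms.values()) for h, vms in hosts.items()}
    while True:
        src = max(loads, key=loads.get)   # busiest host
        dst = min(loads, key=loads.get)   # least-loaded host
        if loads[src] <= threshold or src == dst:
            break
        # Move the smallest VM off the busiest host.
        vm, load = min(hosts[src].items(), key=lambda kv: kv[1])
        if loads[dst] + load > loads[src] - load:
            break  # the move would not improve the balance
        del hosts[src][vm]
        hosts[dst][vm] = load
        loads[src] -= load
        loads[dst] += load
        moves.append((vm, src, dst))
    return moves
```

In fully automated mode the moves would be applied via vMotion; in manual or partially automated mode they would surface as recommendations for an administrator to approve.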
At the ESX host level, rapid elasticity can be achieved if required by referencing a Golden Host when configuring a new server. This reduces setup time considerably; the new host can be on the network and operational within an hour.
We can control what resources a virtual machine has access to, whether it be at a Cluster, Host or Virtual Machine level. Priorities for CPU and Memory access are controlled by shares, which only come into force if there is contention on the parent ESX host. CPU and Memory limits can be set to control the maximum amount a virtual machine can potentially have access to, preventing any one virtual machine from hogging resources.
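The interaction of shares, reservations and limits described above can be sketched as a simple entitlement calculation: reservations are guaranteed first, the remaining capacity is divided in proportion to shares, and a limit caps the result. This is a deliberate simplification, assumed for illustration; the real ESX scheduler also redistributes headroom freed by capped VMs, among other refinements.

```python
# Sketch: divide contended CPU capacity (MHz) among VMs using shares,
# honouring reservations and limits. A simplification of the ESX scheduler.

def entitlements(capacity_mhz, vms):
    """vms: {name: {"shares": int, "reservation": MHz, "limit": MHz or None}}."""
    # Reservations are guaranteed first.
    ent = {n: v["reservation"] for n, v in vms.items()}
    spare = capacity_mhz - sum(ent.values())
    total_shares = sum(v["shares"] for v in vms.values())
    for n, v in vms.items():
        # Spare capacity is split in proportion to shares...
        ent[n] += spare * v["shares"] / total_shares
        # ...and a limit caps what the VM can ever receive.
        if v["limit"] is not None:
            ent[n] = min(ent[n], v["limit"])
    return ent

# Hypothetical example: 6000 MHz of contended capacity, two VMs.
ent = entitlements(6000, {
    "high": {"shares": 2000, "reservation": 1000, "limit": None},
    "low":  {"shares": 1000, "reservation": 0,    "limit": 1500},
})
```

Note that shares only matter under contention; when the host has spare capacity, every VM simply gets what it asks for, up to its limit.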
Use Resource Pools to structure your virtual infrastructure around either a service catalogue, department or geographical location.
On Friday I’ll take you through using Affinities to get the best resource-usage balance across the cluster members.
Jamie Baker
Principal Consultant