Monday, 23 January 2012

vSphere 5 vs Hyper-V Performance Showdown

It's time to stop guessing and start testing.

vSphere 5 is the most popular x86-virtualization platform, and exciting enhancements keep coming with each new release.

Hyper-V from Microsoft is also a popular solution for server virtualization on the x86 platform, and it has become even more so with the addition of advanced features in Service Pack 1.
The fact that Hyper-V is included with Windows also makes it attractive from a cost perspective.

Understanding the performance aspects of these virtual environments is important to ensuring that you get maximum benefit from your virtualization investments.

The usual way to test performance between platforms is through benchmarking. However, benchmarking a virtual environment presents challenges that do not exist when benchmarking non-virtualized systems.

Many additional variables are introduced relating to caching, mapped I/O, workload mixing, and so on. The added complexity can make it difficult to get at the truth, and easier to hide it. The ideal benchmark would incorporate real production workloads, which in most cases is not feasible.

An alternative is to utilize generic benchmarks that approximate production workloads.

We're running a free-to-attend webinar session where we'll make a general performance comparison between vSphere and Hyper-V across all major components, using identical generic benchmarking tools. Virtualization-specific performance metrics available in both environments will also be examined.

A basic understanding of hypervisor architecture is important when evaluating performance data. The two architectures will be compared and related to the available metrics, and important differences in architecture and terminology will be discussed.
Benchmark results from both environments will be presented, along with conclusions that can be made from the findings.

We'll be discussing what may be lacking, and we'll be covering:

  • Architecture review
  • Metrics available
  • Challenges of benchmarking virtual environments
  • Testing environment and benchmarks
  • Methods and objectives
  • Results and conclusions 

This session will be followed in February by athene® Live!, which will feature the actual benchmarking, analyzed with athene®. As always, your feedback from the webinar will be directly incorporated into the athene® Live! session.

Why not join us on January 26 as we compare two of the most popular x86-virtualization platforms in use? Register for this Webinar.

    Also look out for a new blog series on Capacity Management starting this Wednesday. 

Adam Grummitt, Metron Distinguished Engineer, winner of the prestigious Michelson Award and author of 'Capacity Management: A Practitioner Guide',
will be considering all the practical aspects involved in trying to make capacity management effective.
    Rich Fronheiser
    VP, Strategic Marketing

    Friday, 20 January 2012

    UNIX Acquire for Process Accounting - the results


We set out to see whether running UNIX Acquire for Process Accounting has a high or low overhead on the UNIX host system, and as promised the results are below:

    The results

    On AIX there was no measurable difference between the average elapsed time of the script with or without accounting enabled.  For each of the 15 times the script was run, it took 58 seconds of elapsed time, and the command termination rate therefore remained constant at around 172 per second.

    The following table shows some of the key system measurements taken at various points:


From 15:00 to 15:18 and from 15:48 onwards, the tests were not being run.
From 15:20 to 15:25, test one was run with no accounting active.
From 15:28 to 15:36, test two was run with Process Accounting active.
From 15:38 to 15:47, test three was run with both Process Accounting and Advanced Accounting active.
Of note are the following observations:
·         Not surprisingly, CPU busy jumps from 5% before and after to 60+% during all three tests. This work is being run on an AIX system in an LPAR, so these numbers are only a logical view of the CPU needed to run the work. Looking at the Number of Physical Processors Consumed metric shows that the second and third tests do not differ materially in CPU requirements from the first test.
·         The number of characters written per second increases from about 2,500 during test one, without accounting running, to about 25,000 for tests two and three. This is a ten-fold increase, but only from 2.5 KB/second to 25 KB/second; this extra rate of data writing would hardly be noticeable on all but the poorest-performing disks.
·         The physical disk I/O rate does not change noticeably throughout the whole testing period. These numbers are therefore inconclusive in terms of showing any significant additional load on the disk subsystem from either Process Accounting and/or Advanced Accounting. The average response times on the single disk on this system were as follows:
Item                 Milliseconds
Overall average      17.9
Before testing       17.8
No accounting        21.6
PACCT only           15.7
PACCT and AACT       16.7
After testing        18.3

The PACCT file was reset to null before the testing began. At the end of tests two and three (where accounting was running) the PACCT file had grown to 5.8 MB, so each test required something like 2.9 MB to store data about 50,000 terminating commands. The Advanced Accounting file was 10 MB in size, and at the end of the final test about 9 MB, or 90%, of that space had been used.
Given that at least 50,000 commands terminated in each test, this does not seem an unreasonable amount of disk storage to occupy in order to obtain greater detail about the work running on a machine.
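From those figures we can back out an approximate per-record cost. A quick sketch of the arithmetic (the helper name is ours, purely for illustration):

```python
def bytes_per_record(file_mb, records):
    """Approximate on-disk size of one process-accounting record."""
    return file_mb * 1024 * 1024 / records

# Figures from the tests: each run added roughly 2.9 MB of pacct data
# for roughly 50,000 terminating commands.
print(round(bytes_per_record(2.9, 50_000)))  # about 61 bytes per record
```

That is roughly the size of the fixed-length records traditionally written by UNIX process accounting, though the exact record layout varies by platform and OS version.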
It is of course necessary to manage the PACCT file to avoid running out of space in a file system, and there are simple commands and techniques to do this.  In addition, Metron’s Acquire utility does not require much of a backwards view of commands: at each capture interval it simply needs to see data timed from the start of a command in order to pick it up.
For example, with a 15-minute capture interval, Acquire will request from the accounting system the details of all commands that completed in the previous 15 minutes.  If the PACCT file has been “rotated”, e.g. from pacct to pacct.1, and a new pacct file started, Acquire accesses the previous file through a link in the Metron.save.d directory.  Therefore, provided the “old” pacct file is still present (i.e. not deleted or zipped up), Acquire will pick up data from it.
    This means that pacct files can be managed to avoid continual growth in the occupancy of disk space whilst not losing the detail of completed commands.
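As a rough sketch of the rotation scheme just described — rename the live file, start a fresh one, and leave the old data reachable via a link — the following Python is illustrative only; the directory layout and function name are ours, not Metron's actual implementation:

```python
import os

def rotate_pacct(acct_dir, save_dir):
    """Rename the live pacct file to pacct.1 and start a fresh, empty
    replacement; the old data stays reachable via a link in save_dir.
    (On a real system the kernel would also need to be re-pointed at
    the new file, e.g. with accton; that step is omitted here.)"""
    live = os.path.join(acct_dir, "pacct")
    old = os.path.join(acct_dir, "pacct.1")
    os.replace(live, old)          # pacct -> pacct.1
    open(live, "w").close()        # start a new, empty pacct
    link = os.path.join(save_dir, "pacct.1")
    if os.path.lexists(link):
        os.remove(link)
    os.symlink(old, link)          # collector reads the old file via this link
    return old
```

Provided the rotation keeps the previous file around until the next capture interval has passed, no completed-command detail is lost.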

    Other techniques such as placing the pacct file in its own file system can help to isolate problems should active management of the accounting data fail for some reason.
    Conclusion
In conclusion, the general belief that process accounting places a 10-15% overhead on a UNIX system is not borne out by these tests: there was no observable CPU overhead, nor any particular overhead associated with writing the accounting data.
The overhead of process accounting appears to be very low.
     
    Nick Varley
    Principal Consultant

    

    Wednesday, 18 January 2012

    UNIX Process Accounting

    There is a general and deep-rooted belief that UNIX Process Accounting causes a significant overhead to a system. We set out to see if this was true and found that running UNIX Acquire for Process Accounting actually has a low overhead on the UNIX host system.

It may be of interest that in UNIX and Linux the operating system captures the metrics for accounting all the time; the only part not done when accounting is switched off is the writing of the accounting record to disk.  Running accounting is very valuable for obtaining workload-level data with which to build athene® Planner models, or for breaking out usage of a system by services or users.  It is also the only source of data that splits out system CPU time from user CPU time, or provides a view of the I/O load from a given user or command.

    The Benchmark

A Perl script was used to run the benchmark. In essence, the script repeatedly creates a new process to run the “uname -a” command. The process creation occurs within a loop, so it can be called for n iterations before the script terminates. On a partition of a pSeries p720 machine configured with 2 x 3 GHz POWER7 processors and running AIX v7.1, 10,000 iterations took about 58 seconds to complete on an otherwise empty machine. This represents a command termination rate of over 172 per second.
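The original benchmark was a Perl script; a minimal Python sketch of the same loop — spawning a fresh process per iteration to run "uname -a" and deriving the command-termination rate — might look like this. The function name and defaults are ours, for illustration only:

```python
import subprocess
import time

def run_benchmark(iterations=10_000, cmd=("uname", "-a")):
    """Spawn one new process per iteration, as the Perl benchmark did,
    and report elapsed seconds and command terminations per second."""
    start = time.monotonic()
    for _ in range(iterations):
        subprocess.run(cmd, stdout=subprocess.DEVNULL, check=True)
    elapsed = time.monotonic() - start
    return elapsed, iterations / elapsed

if __name__ == "__main__":
    elapsed, rate = run_benchmark()
    print(f"{elapsed:.1f}s elapsed, {rate:.0f} command terminations/second")
```

Because each iteration is dominated by process creation and termination, the loop exercises exactly the code paths that accounting hooks into.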

    One major design requirement was to use a command that should not cause or require disk activity, so that any increase that was observed across the tests would be a reflection of the disk I/Os generated directly by the system’s accounting file handling routines. 

    The script was run on a quiet machine to ensure that the conditions both with and without process accounting enabled were the same.

The script was run 5 times for each of the three tests. First the script was run with process accounting switched off (i.e. no pacct file). The benchmark was then repeated with process accounting switched on (i.e. with a growing pacct file), and finally with AIX’s Advanced Accounting enabled in addition to process accounting.

You can see the results this Friday…

    Nick Varley
    Principal Consultant


    Tuesday, 3 January 2012

    Happy New Year from Paul Malton – CEO


    Welcome back after the break.

    As a company we are looking forward to the year ahead with enthusiasm. We have a number of exciting new initiatives under way and look forward to releasing these as the year progresses, to the benefit of both our existing users and new customers.

    While we all hope the New Year sees the clouds that have dominated the global financial skyline dissipate soon, Clouds of another sort will increasingly dominate the IT infrastructure world. 

    In an increasingly complex environment for our clients, featuring unique combinations of public, private and hybrid Cloud implementations, Metron sees flexibility as vital. 

    For this reason Metron has implemented a strategy centred on our Athene software to provide effective capacity management for Cloud systems. 

The strategy combines Athene’s traditional capacity management facilities with Correlsense’s SharePath software and its end-to-end view of transaction performance, plus Athene’s Integrator ‘Capture Packs’ to get performance and capacity data from those ‘hard to reach’ corners of the Cloud.  This combination gives the flexibility to analyse, report and predict capacity across the Cloud, given the vagaries of what data might be available for any given implementation.

    I wish all our staff, customers and contacts a great New Year 2012.



    Paul Malton
    Chairman and CEO