Wednesday, 3 December 2014

I’m an old mainframer, I admit it - Highlights from Guide Share Europe Conference

I was in Systems Programming for the first 20 years of my working life and I can’t quite let go of that even though I’ve been working in Capacity Management for almost as long.  I was delighted to be able to attend the recent Guide Share Europe conference at the beginning of November, at Whittlebury Hall in Northamptonshire.  Metron tries to get there as often as possible to keep our company’s name and offerings in the consciousness of the mainframe community.  For several years now there’s been an air of almost sadness about the place.  This year, what a difference.  The exhibition hall was humming with vendor stalls well laid out around two adjacent rooms that encouraged people to move around, mingle, chat, eat and drink without feeling on top of each other.


Many streams of sessions were available, covering a wide range of topics.  There were “101” classes for people new to the mainframe world, “what’s new” sessions for the features and facilities of recent announcements, there were round-table discussions, technical workshops and personal development sessions.


Always of great interest is the session with Mark Anzani of IBM, despite being at 8am the morning after the night before – if you’ve ever been to a GSE conference dinner you’ll know what I mean.  Mark has the marvellous title of “VP, System z Strategy, Resilience and Ecosystem”.  His sessions are always full to the doors, and it’s not just the bacon baps and strong coffee that draw people in.  This year didn’t disappoint.  The direction System z hardware and software are taking is (as it has been for so long that people forget) quietly revolutionary.  Ideas about quantum computing, neuro-synaptic chips and nano-photonics on the hardware side were complemented by software developments – tamper-proof processing, self-healing, self-optimizing and an accelerated push to CAMS – Cloud, Analytics, Mobile and Security.


On the capacity management side, as previously signalled, the introduction of “hyperthreading” for System z processors came up – and this time it was more than just aspiration.  IBM say they now have machines running in labs that can do this, but they aren’t in a hurry to release them until they fully understand the implications and benefits.  It’ll probably be a year or two before such machines come to market.  Why is this happening?  For the simple reason that Intel and other chip manufacturers have gone the same way – the speed at which you can run a chip tops out as you balance the power needed to drive it against the heat it generates and the speed at which electrons can move around circuits.  Top-end System z processors already run at a screaming 5.5 GHz, and that isn’t going to get much faster, if at all.  The alternative to going faster is to go wider – that is, doing more in each clock tick, or letting other work run where one thread has stalled.  Interleaving threads of work on a chip improves throughput, and it’s vital to grasp that concept correctly: multiple threads of work on a single core let you get more work done in a given time – they do not make the core faster.  Initially there will be two threads per core, but this will rise with successive newer machines.
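To make the throughput-versus-speed distinction concrete, here’s a minimal sketch.  The 1.3x capacity gain factor is purely an illustrative assumption (no System z figure has been published), as are the work-unit numbers; the point is only the shape of the arithmetic.

```python
# Hedged sketch: threading improves throughput per core, not per-thread
# speed.  The smt_gain of 1.3 is an assumed, illustrative figure.

def core_throughput(base_rate, smt_gain):
    """Total work units/sec for one core running multiple threads,
    relative to base_rate with a single dedicated thread."""
    return base_rate * smt_gain

def per_thread_rate(base_rate, threads_per_core, smt_gain):
    """Each individual thread runs slower than on a dedicated core."""
    return base_rate * smt_gain / threads_per_core

base = 100.0       # work units/sec, one thread per core (assumed)
gain_2way = 1.3    # assumed: two threads yield ~1.3x core throughput

print(core_throughput(base, gain_2way))        # 130.0 -> more total work
print(per_thread_rate(base, 2, gain_2way))     # 65.0  -> each thread slower
```

More work gets through the core overall, yet any single thread – a long batch step, say – finishes more slowly than it would with the core to itself.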
Work like lightweight Java programs, or perhaps Linux systems running on an IFL, could be happy in this environment, but I started to wonder how traditional heavy batch jobs, or CICS systems that love big engines, will react.  Perhaps there will be an option to pool processors, some “hyperthreaded” and some not, with work directed to each pool as appropriate.
IBM will need to include decent instrumentation so that performance and capacity people can keep an eye on physical usage as well as any logical or virtual usage.  Almost no other operating system provides this easily – not Windows, not Linux, not Solaris, not HP-UX.  All of them report processor utilization as the operating system sees it, not as the underlying hardware is actually being used.  That’s why athene® has its “Core Estimated Utilization” metrics alongside the “Reported Utilization” ones, to provide a view into that mostly invisible information. http://www.metron-athene.com/products/athene_ES1/index.html
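A simple model shows why logical utilization understates how busy the physical cores are.  The sketch below assumes each logical thread is busy independently of its sibling – a deliberate simplification for illustration, not how athene’s Core Estimated Utilization is actually computed.

```python
# Hedged sketch: a physical core is busy whenever ANY of its logical
# threads is busy.  Assumes independent thread activity (simplification).

def core_estimated_utilization(logical_util, threads_per_core):
    """Probability a physical core is busy, given each logical thread
    is independently busy logical_util of the time."""
    all_idle = (1.0 - logical_util) ** threads_per_core
    return 1.0 - all_idle

reported = 0.50  # each logical CPU looks 50% busy to the OS
print(core_estimated_utilization(reported, 2))  # 0.75
```

Logical CPUs reporting 50% busy can mean the physical cores are occupied three-quarters of the time – headroom looks bigger than it really is.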
IBM does a great job with AIX on System p and OS/400 on iSeries machines, giving you the logical, virtual and physical information about processor activity – let’s hope that carries over into System z and RMF.
Whatever is done, sites or businesses that rely on measures like CPU seconds for billing will need to review, and possibly change, their accounting software to make sure it continues to provide consistent metering of services.  Customers of such facilities will need to check their bills carefully to make sure they aren’t being charged a “logical CPU” cost.
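One way a chargeback review might approach this is to normalize logical-CPU seconds back to a “dedicated core” equivalent.  The productivity factor here is an assumed, illustrative number; a real scheme would need vendor-documented metrics rather than this back-of-envelope conversion.

```python
# Hedged sketch: converting logical-CPU seconds to an equivalent
# dedicated-core charge.  The 1.3 factor is assumed for illustration.

def equivalent_core_seconds(logical_cpu_seconds, threads_per_core, smt_gain):
    """If threads_per_core threads together deliver smt_gain of one
    core's capacity, each logical-CPU second is worth
    smt_gain / threads_per_core dedicated-core seconds."""
    return logical_cpu_seconds * smt_gain / threads_per_core

raw = 1000.0  # logical CPU seconds recorded for a job (assumed)
print(equivalent_core_seconds(raw, 2, 1.3))  # 650.0
```

Billing on the raw 1000 logical seconds would overcharge relative to the physical capacity actually consumed – exactly the “logical CPU” cost customers should watch for.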
System z is still recognizably the child of the System/360 machines that came into the world in 1964.  Here’s to ever more amazing changes in the next half a century.
Nick Varley
Chief Services Officer
