Monday 31 March 2014

Data Sources (6 of 15 Data, data everywhere and not a bit to use)

As I said last week, putting aside the hardware utilization data as a given, there are a number of other high-value pieces of data that we can capture and use to understand the service.

Application/Service Data

Typically, when I'm looking at an application I want to know a few simple bits of information: how many work units were processed, what type they were, and how long they took. Nice extras are things like internal limits and poor-performance indicators (deadlocks, for instance).

Databases - These usually have very good instrumentation, and there are plenty of agents that can be pulled into service to get the data (see the sketch below).

     Well-thought-out APIs or Windows performance counters
     Well-thought-out agents that do this for you
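
Just to make that concrete, here's a minimal sketch in Python that shells out to Windows' built-in typeperf utility; the counter path is only an example, and what's actually available depends on the database product and instance installed:

```python
# Minimal sketch: sample a SQL Server workload counter once via Windows'
# built-in typeperf utility. The counter path is an example; the counters
# available depend on the DBMS product and instance name.
import subprocess

counter = r"\SQLServer:SQL Statistics\Batch Requests/sec"
result = subprocess.run(["typeperf", counter, "-sc", "1"],
                        capture_output=True, text=True)
print(result.stdout)   # CSV-style output: timestamp, counter value
```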


SAP - SAP and the like also tend to have a considered way of getting useful data out of them about workloads and internal performance. For example, with SAP there are specific transactions, such as ST03, which can be run to return data (CPU, transactions, database changes, etc.).

     Various transactions return performance data (e.g. ST03)

What if there is no designed interface? - Not everything has been designed with a clear way for you to get data about its workload and performance. Then you have to start looking at what logging is available and how you are going to process it to get the data you want (a minimal log-mining sketch follows the list below).

     Logs, databases, write your own instrumentation
     APM Tools
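
To illustrate the log route, here's a minimal sketch (the log format, file name and field names are all hypothetical) that turns raw log lines into work-unit counts and response times:

```python
# A minimal sketch of mining workload data from an application log when no
# designed interface exists. Assumes (hypothetically) lines like:
# 2014-03-31 09:15:02 INFO txn=ORDER_CREATE duration_ms=142
import re
from collections import defaultdict

LINE = re.compile(r"txn=(?P<txn>\w+)\s+duration_ms=(?P<ms>\d+)")

counts = defaultdict(int)     # work units per transaction type
total_ms = defaultdict(int)   # accumulated duration per type

with open("app.log") as log:
    for line in log:
        m = LINE.search(line)
        if not m:
            continue          # skip lines that aren't work-unit records
        txn = m.group("txn")
        counts[txn] += 1
        total_ms[txn] += int(m.group("ms"))

for txn in sorted(counts):
    print(f"{txn}: {counts[txn]} units, "
          f"avg {total_ms[txn] / counts[txn]:.1f} ms")
```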

A good APM tool will give us a LOT of useful information about the workload in our environment, and I'll take a look at the benefits and difficulties on Wednesday.

In the meantime don't forget to sign up to our Community; it's free and gives you access to white papers, downloads, on-demand webinars and more.
http://www.metron-athene.com/_downloads/index.html

Phil Bell
Consultant



Friday 28 March 2014

Capture Techniques (5 of 15 Data, data everywhere and not a bit to use)

Having said we need to capture everything as it happens, the next question is: how do we go about doing that?

There are two main ways to capture data, agentless and agent-based, and there are pros and cons for each (a minimal agentless polling sketch follows these lists):

      Agentless (SNMP, WMI, etc.)

     More exposed to security restrictions and to network quality issues
     Broken communication = lost data
     Easier/faster implementation
     Often less data, of lower quality

      Agent-based

     Autonomous
     Data collected by a local process: if the server is up, data capture is running
     Broken communication = catch up later
     Possibility to use existing agents
     Overhead (system and human)
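
To make the agentless option concrete, here's a minimal polling sketch using the pysnmp library; the host address, community string and polled OID are placeholders, and note how a failed poll means the sample is simply lost:

```python
# Minimal agentless polling sketch (SNMP v2c via pysnmp). Host address and
# community string are placeholders. The agentless trade-off shows up in the
# error branch: if the poll fails, that interval's data is gone for good.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

HR_PROCESSOR_LOAD = '1.3.6.1.2.1.25.3.3.1.2.1'   # HOST-RESOURCES-MIB CPU load

err, status, index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),          # read community (placeholder)
    UdpTransportTarget(('192.0.2.10', 161)),     # target host (placeholder)
    ContextData(),
    ObjectType(ObjectIdentity(HR_PROCESSOR_LOAD))))

if err:
    print("Poll failed - this sample is lost:", err)
elif status:
    print("SNMP error:", status.prettyPrint())
else:
    for oid, value in var_binds:
        print(oid.prettyPrint(), "=", value.prettyPrint())
```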


In reality most of us end up using a blended delivery model, deploying a solution that has a bit of everything in it, because the world is a complex place.

Putting aside the hardware utilization data as a given, there are a number of other high value pieces of data that we can capture and use to understand the service. With this in mind I'll be looking at Data Sources next week.

Phil Bell
Consultant

Wednesday 26 March 2014

Data Capture (4 of 15 Data, data everywhere and not a bit to use)

When it comes to capturing data my basic principles are pretty simple:

More is (just this once) more - at data capture time, get everything you will need. You cannot go back in time and decide that you'd really like a different bit of data. If you do not capture the data you need at the time, it's lost forever.

Quality data - You need quality data; garbage in is going to equal garbage out.

Make sure the data is in your control - You need to have control of the data so that you can decide what is kept long term, what the aggregation is, and so on (a small retention and aggregation sketch follows the list below).

Make sure you are covering everything in the service - We’re offering best value when we can manage the capacity of the service, not just the servers. So make sure when capturing data that you include:

     Resource
     Application
     Network
     SAN
     Business data
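
As a small illustration of keeping those decisions in your own hands, here's a sketch (assuming pandas; the file and column names are hypothetical) that rolls raw one-minute samples up to hourly averages and peaks for long-term retention:

```python
# A small sketch (assumes pandas) of owning your own retention policy:
# keep raw 1-minute samples short term, roll up to hourly mean/peak
# values for long-term trending. File and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("cpu_samples.csv",
                  parse_dates=["timestamp"],
                  index_col="timestamp")

hourly = raw["cpu_pct"].resample("1H").agg(["mean", "max"])
hourly.to_csv("cpu_hourly.csv")   # long-term store, under our control
```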

Having said we need to capture everything as it happens, the next question is: how? I'll be looking at capture techniques on Friday.

In the meantime why not take a look at how athene® captures data
http://www.metron-athene.com/products/athene/datacapture/agent/index.html and our selection of Capture Packs
http://www.metron-athene.com/products/athene/datacapture/capture-packs/index.html

Phil Bell
Consultant

Monday 24 March 2014

Where most people are (3 of 15 Data, data everywhere and not a bit to use)

People I meet generally have a handful of tools designed to do something other than capacity planning.  

Usually these are real-time monitors or very basic performance logging tools designed for sys admin roles, and teams don't like to let other people play with their tools, so no single person has access to everything.

The reality is that while the business is very keen to see IT utilization correlated with business metrics, getting the business to supply those metrics or make them available is usually pretty futile.

So we end up knowing there are huge amounts of data out there that are out of our reach for no apparent reason.

Then there is the quality of the data.  

It's coming from all sorts of places and you have to try to tie it all together. Somewhere along the line is something the last guy threw together with a bit of Perl/VB/Excel/Access that no one understands anymore.

Add to that a limited number of employees because there are rarely enough people to do anything new.

So that's where we are....

On Wednesday I'll be sharing my basic principles for data capture with you.

In the meantime sign up for our community and get access to our white papers, on-demand webinars, performance tips and more
http://www.metron-athene.com/_downloads/index.html

Phil Bell
Consultant


Wednesday 19 March 2014

Business Demands (2 of 15 Data, data everywhere and not a bit to use)

Being asked to import everything ever recorded leads to quite a lot of frustration.

Not too far down the line you'll find someone who says "sorry, but you can't do that because of security or data protection laws".
We have to find ways to break through that barrier.

Business Demands

So what are businesses asking for?

I'm going to try and set the scene a little so you can understand what the world looks like from my position (and possibly yours as well).

The business would like to: 

•      Have data for everything
–    Internal to a system
–    Across all infrastructure (build a service picture)
–    Business volumes & transaction response times

•      Not deploy more agents
•      Ensure reliable data
•      Use minimal storage
•      Add no extra staff

The business would like to see data for pretty much everything it's possible to measure: the utilization of the system, across everything, with meaningful volumes of data and workloads correlated with it.

And we're to do that without deploying any more agents than already exist, while ensuring the data is totally reliable, without using any storage, and under no circumstances will there be any budget for more staff to do this... and you'll have to show the savings first before being allowed to spend some of that saving.

All those demands would be OK in a world where most of that was already in place. But is it in place already?

Not usually.

Join me again on Monday, when I'll be looking at where most people are in this process. In the meantime, join our community and get access to white papers, on-demand webinars and more http://www.metron-athene.com/_downloads/index.html

Phil Bell
Consultant

Monday 17 March 2014

Data, data everywhere and not a bit to use (1 of 15)

With the arrival of the cloud and the business focus on service-based reporting, capturing data has never been more important.

This series will discuss the challenges of capturing the sorts of data required to answer the demands of the business.

This is not a series about Big Data, but about data in general.

Why talk about data?

Dashboard, Dashboard, Dashboard - At events there is a lot of discussion about dashboards: always the best way to display data, drill-down this and RAG status that.

Alerts, Automation & CMIS - We all talk about alerts and automation and having a CMIS to pull all the data from. 

And it's all brilliant stuff. But it often leaves out one big question.

What does this all sit on?


Good quality data capture - It's not exciting and there is very little glamour in it, but it has to be done, and it has to be done right. In my experience we are being asked not just to capture our own data; we're getting an ever-increasing number of requests to take data from an ever-increasing number of sources.

In this series I'll be looking at business demands, basic principles, data capture techniques, data sources and more so catch me again on Wednesday.

Phil Bell
Consultant

Tuesday 11 March 2014

5 Key Capacity and Performance Concerns for Unix/Linux

The history of Unix is long and the path it took from an academic programmer’s sandbox to commercial workhorse was winding at times. 

That said, Unix and Linux today are key pieces of just about every company's IT infrastructure, providing key services to internal and external clients.

Unix certainly hasn't stood still since its creation in 1969. Today's Unix (of varying flavors) can be very complex. Virtualization technologies exist in just about every Unix variant (for example AIX LPARs and Solaris Containers) and they certainly provide challenges to the administrator and to the capacity and performance manager.

Metron has been providing software and services in the area of Unix Capacity Management since the 1980s and has been there during the shift (in many companies) from total reliance on the mainframe to an increased reliance on Unix computing as the technology and the resilience/reliability improved.

Linux, originally released in 1991 as a free/open source Unix-like operating system, still seems like the “new kid in town” to the experienced IT professional. But that shouldn’t fool anyone into thinking that Linux isn’t providing key IT services in many companies. The relatively inexpensive total cost of ownership means that Linux is being used in varied ways - from stand-alone servers, to virtualized systems in VMware DataCenters, to uses in Big Data implementations. 

Managing the performance and capacity of Linux is a crucial part of an overall Capacity and Performance Management strategy.

Join me at our free webinar where I'll be looking at 5 of the key capacity and performance concerns for Unix and Linux. 

In my session I'll be covering:
  • How Unix/Linux fit into today’s DataCenter
  • Performance Concerns for Unix
  • How virtualization affects how we manage Unix environments
  • Looking at Linux…do we manage it differently from Unix?
  • New technologies (Cloud/Big Data, etc.) using Unix/Linux
It's taking place on March 19th, so there's still time to register 
http://www.metron-athene.com/services/training/webinars/index.html


Hope to see you there

Jamie Baker
Principal Consultant

Wednesday 5 March 2014

Using Systems Capacity Data for Business Intelligence (9 of 9)

As a supplier of Capacity Management software and services, we often get asked to help with justifying investment in this area.

A Return On Investment (ROI) model can be used to confirm that planned value is being realized by your customers.

ROIs are of particular importance nowadays, as they show the value that is going to be provided and how long it will take to reap financial benefits.
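
As a simple illustration (the numbers are purely hypothetical): if a capacity management initiative costs $50,000 a year to run and defers $150,000 a year of hardware purchases, ROI = (150,000 - 50,000) / 50,000 = 200%, and the outlay pays for itself in 50,000 / (150,000 / 12) = 4 months.
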
I hope you've found my series informative and I’ll leave you with a few final thoughts.

Business intelligence for capacity management does not need to involve special data warehouses, ETL operations, or OLAP cubes. Transforming capacity data into useful information for decision making can be easy, with the right processes and software.

Without a solid plan, buy-in from management, and a well-defined process, no amount of software will deliver the value everyone hopes for.

Effectively managing capacity is intelligent business.

Dale Feiste
Consultant

Monday 3 March 2014

Using Systems Capacity Data for Business Intelligence - Implementation (8 of 9)

Now that we have our requirements the last step is implementation and deliverables, which I’ll take a look at today.

Implementation
Determine what software and services are needed
Of course you will not be surprised that I have selected athene® as my software. Its multi-platform capabilities and Integrator feature (which will allow me to import any time-series numeric data from "hard to reach" data sources) will provide me with all the data I need for reporting, trending, alerting and modeling purposes.

Formally define deliverables

[Deliverables diagram: Dashboard, Analysis, Advice, Virtualization, Trending, Business, Modeling]

The next step will be to:
       Get the software solution implemented
       Architect deliverables with selected software and services
       Automate as much of the process as possible

...and check that what you have architected is actually working.

The final part of my blog series, on Wednesday, will take a brief look at Return on Investment and I'll share some final thoughts with you.

Dale Feiste
Consultant