To produce and analyze the recommended performance reports we need to capture and store performance data. This is normally done by installing an agent that captures the information by running UNIX/Linux system tools such as sar, vmstat and iostat, or by running (potentially intrusive) kernel commands. As with any data capture, local or remote, you should expect to incur some overhead.
Typically an agent should incur no more than 1% CPU usage while capturing data; as mentioned, though, some agents incur more. It is worth measuring this for yourself, as sketched below.
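Here is a minimal capture-agent sketch in Python, assuming vmstat is on the PATH; the interval, sample count and output path are purely illustrative. It takes periodic snapshots and then reports its own CPU time as a percentage of wall-clock time, so the "no more than 1%" figure can be checked rather than assumed:

#!/usr/bin/env python3
"""Minimal capture-agent sketch: snapshot vmstat at a fixed interval,
append the output to a flat file, then report the agent's own CPU
overhead against elapsed wall-clock time."""
import resource
import subprocess
import time

INTERVAL = 60                     # seconds between samples (illustrative)
SAMPLES = 10                      # illustrative run length
OUTFILE = "/var/tmp/vmstat.log"   # hypothetical storage location

start = time.time()
with open(OUTFILE, "a") as log:
    for _ in range(SAMPLES):
        # One snapshot; sar or iostat could be captured the same way.
        snap = subprocess.run(["vmstat"], capture_output=True, text=True)
        log.write(time.strftime("%Y-%m-%d %H:%M:%S\n") + snap.stdout)
        time.sleep(INTERVAL)

# CPU used by the agent itself plus the child tools it spawned.
self_u = resource.getrusage(resource.RUSAGE_SELF)
kids_u = resource.getrusage(resource.RUSAGE_CHILDREN)
cpu = self_u.ru_utime + self_u.ru_stime + kids_u.ru_utime + kids_u.ru_stime
elapsed = time.time() - start
print(f"agent CPU overhead: {100 * cpu / elapsed:.3f}% of one CPU")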
In addition, when capturing data, can you rely on what is being reported? Remember, this is software, and software can contain bugs. You might say "we have to rely on what the operating system gives us", and that is true to some extent.
In my experience there are several tools within the UNIX operating system that provide this information, and some are accurate while others are not. For example:
Does your Linux system have the sysstat package installed, and is it an accurate and reliable version? A quick check is sketched below.
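A small sketch of that check, assuming sysstat's sar supports the -V version flag (recent releases do); compare the banner it prints against a release you trust:

#!/usr/bin/env python3
"""Check whether sysstat's sar is installed and print its version banner."""
import subprocess

try:
    result = subprocess.run(["sar", "-V"], capture_output=True, text=True)
except FileNotFoundError:
    print("sar not found - the sysstat package is probably not installed")
else:
    # Some sysstat releases print the version banner on stderr.
    print((result.stdout + result.stderr).strip())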
Or, in Solaris Containers, are the Resident Set Size (RSS) values being incorrectly reported due to double counting of memory pages? An example of this is shown below.

[Figure: Zone Memory Reporting]
This report is an interesting one. It shows the amount of RSS memory per zone against the total memory capacity of the underlying server (shown in red). Because RSS values are double counted, the sum of RSS across the zones far exceeds the server's actual physical memory capacity.
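A rough way to demonstrate the effect yourself, sketched in Python and assuming a Solaris host where ps -eo rss and prtconf behave as shown: summing RSS over every process (which is effectively what a per-zone RSS roll-up does) counts shared pages once per process, so the total can exceed physical RAM.

#!/usr/bin/env python3
"""Illustrate the RSS double count: sum RSS over all processes and
compare the total with the physical memory reported by prtconf."""
import re
import subprocess

# Sum RSS (reported in KB) over all processes on the box.
ps_out = subprocess.run(["ps", "-eo", "rss="], capture_output=True, text=True)
rss_kb = sum(int(f) for f in ps_out.stdout.split() if f.isdigit())

# prtconf prints a line such as "Memory size: 16384 Megabytes".
prt = subprocess.run(["prtconf"], capture_output=True, text=True)
m = re.search(r"Memory size:\s*(\d+)\s*Megabytes", prt.stdout)
phys_mb = int(m.group(1)) if m else 0

print(f"sum of process RSS : {rss_kb / 1024:.0f} MB")
print(f"physical memory    : {phys_mb} MB")
if phys_mb and rss_kb / 1024 > phys_mb:
    print("RSS sum exceeds physical RAM -> shared pages double counted")

I'll be looking at Linux differences next.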
Jamie Baker
Principal Consultant