Explain the difference in how MRTG measures incoming data


Everyone knows that MRTG needs at least one value to be passed on its input.

Among its per-target options, MRTG has 'gauge', 'absolute', and a default (no option) behaviour that determine what to do with incoming data - or, how to count it.

Let's look at an elementary yet popular example:

We pass cumulative data from a network interface's statistics: how many packets were received by the interface.

We take it from '/proc/net/dev' or look at the 'ifconfig' output for a certain network interface. The number of received bytes grows every time we look; it's cumulative.
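For instance, the raw counter can be read like this (a rough sketch - the interface name 'eth0' and the exact '/proc/net/dev' column are assumptions that vary between systems):

    # cumulative bytes received on eth0; resets only on reboot or counter wrap
    cat /sys/class/net/eth0/statistics/rx_bytes
    # the same counter, pulled out of /proc/net/dev
    awk -F'[: ]+' '/eth0:/ { print $3 }' /proc/net/dev

Run it twice a few seconds apart and the second number is simply larger than the first.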

As far as I can imagine, there could be two kinds of possible statistics:

1. How fast this value changes over a time interval - in other words, activity.
2. A simple, as-is growing graph that just draws every new value each minute (or any other time interval).

The first graph will be jumpy (activity). The second will just keep growing.

I have read rrdtool's and MRTG's docs twice and still can't understand which of the options mentioned above counts what.

I suppose (I am not sure) that 'gauge' draws values as-is, without any differentiation (good for measuring how much memory or CPU is used every 5 minutes), while the default and 'absolute' behaviours try to calculate the speed between neighbouring measurements - but what's the difference between those last two?

Can you explain, in a simple manner, which behaviour stands behind which of the three possible options?

Thanks in advance.


MRTG assumes that everything is being measured as a rate (even if it isn't a rate).

Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
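A minimal per-target sketch (the target name and script path are made up; the script prints the value for MRTG to read):

    Target[cpu]: `/usr/local/bin/cpu-usage.sh`
    Options[cpu]: gauge
    MaxBytes[cpu]: 100
    Title[cpu]: CPU usage

If the script prints 42, the graph shows 42 - no division by elapsed time takes place.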

Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - e.g. a script that counts the number of lines in a log file, then truncates the log file.
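A hedged sketch of that log-file case (the script name is invented): suppose the script prints 600 because 600 new lines arrived since the last poll, and 300 seconds have elapsed; MRTG then stores 600 / 300 = 2 lines per second.

    Target[loglines]: `/usr/local/bin/count-and-truncate.sh`
    Options[loglines]: absolute
    MaxBytes[loglines]: 10000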

Type 'counter' (the default) assumes the value passed is a constantly growing count that may wrap around. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it assumes a counter wraparound at 32 or 64 bits. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs).
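A worked sketch with made-up numbers and addresses: if the interface's input-octet counter reads 1,000,000 on one poll and 1,150,000 on the next poll 300 seconds later, MRTG stores (1,150,000 - 1,000,000) / 300 = 500 bytes per second. No Options line is needed, since counter behaviour is the default:

    Target[myrouter]: 2:public@router.example.com
    MaxBytes[myrouter]: 1250000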

Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
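A sketch of such a hand-built RRD (the filename, heartbeat and RRA sizing are arbitrary choices; ds0/ds1 mirror the two data sources MRTG expects when it runs in rrdtool mode):

    rrdtool create myrouter.rrd --step 300 \
        DS:ds0:DERIVE:600:U:U \
        DS:ds1:DERIVE:600:U:U \
        RRA:AVERAGE:0.5:1:8000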

All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval - roughly speaking, with a 300-second Interval, a sample collected at 12:02:30 is interpolated onto the 12:00:00 and 12:05:00 boundaries. This will be more noticeable for gauge types, where the value is small, than for counter types, where the value is large.

For more information on this, see Alex van der Bogaerdt's excellent tutorial.