Please find the current release of this HowTo at the Cacti Documentation Site at http://docs.cacti.net/node/283
Cacti users sometimes complain about NaNs in their graphs. Unfortunately, there can be several reasons for this. The following is a step-by-step procedure I recommend for debugging them.
To debug the NaNs:
1. Check Cacti Log File
Please have a look at your cacti log file. Usually, you'll find it at <path_cacti>/log/cacti.log; otherwise, see "Settings -> Paths". Check for this kind of error:
CACTID: Host[...] DS[....] WARNING: SNMP timeout detected [500 ms], ignoring host '........'
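A quick way to scan the log for these warnings is to grep for the fixed part of the message. The log line below is a reconstructed sample (only the Host/DS/WARNING portion is taken from above, the surrounding format is an assumption); the same pattern works against the real <path_cacti>/log/cacti.log:

```shell
# Reconstructed sample warning line (exact surrounding format may differ):
line="CACTID: Host[12] DS[34] WARNING: SNMP timeout detected [500 ms], ignoring host 'target-host'"
# Extract the Host/DS/timeout portion, as you would from cacti.log:
echo "$line" | grep -o 'Host\[[0-9]*\].*SNMP timeout detected \[[0-9]* ms\]'
```

The Host[...] number in the match is the same <id> used again in step 3.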
For "reasonable" timeouts, this may be related to an snmpbulkwalk issue. To change this, see "Settings -> Poller" and lower the value for "Maximum SNMP OIDs Per SNMP Get Request". Start at a value of 1 and, if the poller starts working, increase it again step by step. Some agents don't have the horsepower to deliver that many OIDs at a time, so we can reduce the number for those older or underpowered devices.

2. Check Basic Data Gathering
For scripts, run them as cactiuser from the CLI to check basic functionality. E.g. for a Perl script named your-perl-script.pl with parameters "p1 p2" under *nix, this would look like:
su - cactiuser
/full/path/to/perl your-perl-script.pl p1 p2
... (check output)
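When checking the output, remember that Cacti parses only stdout: a single number for a one-output script, or space-separated name:value pairs for several outputs. A minimal sketch of that check, with a hypothetical output string:

```shell
# Hypothetical stdout of your-perl-script.pl with two data sources:
out='upstream:1234 downstream:5678'
# Cacti understands a bare number or space-separated name:value pairs:
if echo "$out" | grep -Eq '^([A-Za-z0-9_]+:-?[0-9.]+ ?)+$|^-?[0-9.]+$'; then
  echo "output parseable"
else
  echo "output NOT parseable"
fi
```

Anything else on stdout (debug chatter, error text) ends up unparseable and produces NaNs.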
For SNMP, snmpget the _exact_ OID you're asking for, using the same community string and SNMP version as defined within Cacti. For an OID of .1.3.6.1.4.1.something, a community string of "very-secret" and version 2c against target host "target-host", this would look like:
snmpget -c very-secret -v 2c target-host .1.3.6.1.4.1.something
... (check output)

3. Check Cacti's Poller
First, note the poller you're using (from crontab, it's _always_ poller.php that's executed, but you may configure cmd.php _or_ cactid from "Settings").
Now, clear ./log/cacti.log (or rename it to get a fresh start)
Then, change "Settings -> Poller Logging Level" to DEBUG for _one_ polling cycle. You may rename this log as well, to avoid more entries being added to it by subsequent polling cycles.
Now, find the host/data source in question. The Host[<id>] is given numerically, the <id> being a specific number for that host. Find this <id> from the Devices menu when editing the host: the URL contains a string like &id=<id>.
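Once you have the <id>, you can filter the DEBUG log for just that host. A sketch on abbreviated sample lines (the real log lines carry more detail):

```shell
id=12   # the <id> taken from the &id=<id> part of the URL
# Abbreviated sample lines standing in for the DEBUG cacti.log:
log='CACTID: Host[12] DS[34] SNMP: v2: target-host, value: 4711
CACTID: Host[13] DS[99] SNMP: v2: other-host, value: 0'
# Keep only the lines for the host in question:
echo "$log" | grep "Host\[$id\]"
```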
Check whether the output is as expected. If not, check your script (e.g. /full/path/to/perl). If it is ok, proceed to the next step.

4. Check MySQL Updating
In most cases, this step may be skipped. You may want to return to it if the next one fails (e.g. no rrdtool update to be found).
From the debug log, please find the MySQL update statement for that host concerning table poller_output. On very rare occasions, this will fail. So please copy that SQL statement and paste it into a mysql session started from the CLI; this may as well be done from some tool like phpMyAdmin. Check the SQL return code.

5. Check RRD File Updating
Down in the same log, you should find some
rrdtool update <filename> --template ...
You should find exactly one update statement for each file.
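The one-update-per-file rule can be verified with a short awk pass over the debug log. The update lines below are illustrative samples following the template shown above:

```shell
# Illustrative rrdtool update lines as found in the DEBUG log:
log='rrdtool update /var/www/cacti/rra/host_traffic_5.rrd --template traffic_in:traffic_out N:123:456
rrdtool update /var/www/cacti/rra/host_loss_6.rrd --template loss N:0'
# Count updates per rrd file; each count should be exactly 1 per cycle:
echo "$log" | awk '$1 == "rrdtool" && $2 == "update" { n[$3]++ } END { for (f in n) print n[f], f }'
```

A file with a count above 1, or a file missing entirely, points at a poller problem.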
RRD files should be created by the poller. If it does not create them, it will not fill them either. If it does, please check your Poller Cache from Utilities and search for your target. Does the query show up there?

6. Check RRD File Numbers
You're perhaps wondering about this step if the former was ok. But due to the data source's MINIMUM and MAXIMUM definitions, it is possible that valid updates to rrd files are suppressed, because MINIMUM was not reached or MAXIMUM was exceeded.
Assuming you've got some valid rrdtool update in step 3, perform a
rrdtool fetch <rrd file> AVERAGE
and look at the last 10-20 lines. If you find NaN's there, perform
rrdtool info <rrd file>
and check the ds[...].min and ds[...].max entries, e.g.:
ds[loss].min = 0.0000000000e+00
ds[loss].max = 1.0000000000e+02
In this example, MINIMUM = 0 and MAXIMUM = 100. For a ds[...].type = GAUGE, verify that the number returned by the script does not exceed ds[...].max (the same holds for ds[...].min, respectively).
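As a worked example of this suppression, take the ds[loss] limits above (min 0, max 100) and a hypothetical script value of 150:

```shell
min=0; max=100   # from rrdtool info: ds[loss].min / ds[loss].max
value=150        # hypothetical number returned by the script
if [ "$value" -lt "$min" ] || [ "$value" -gt "$max" ]; then
  echo "value $value is outside [$min..$max]; the rrd file records NaN instead"
else
  echo "value $value is within range"
fi
```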
If you run into this, please do not only update the data source definition within the Data Template, but also perform a
rrdtool tune <rrd file> --maximum <ds-name>:<new ds maximum>
for all existing rrd files belonging to that Data Template.

7. Check rrdtool graph Statement
The last resort is to check that the correct data sources are used. Go to Graph Management and select your graph. Enable DEBUG Mode to see the whole rrdtool graph statement. Notice the DEF statements: they specify the rrd file and data source to be used. Check that all of them are as wanted.

Miscellaneous
Up to the current cacti 0.8.6h, table poller_output may grow beyond reasonable size. This is commonly due to php.ini's default memory setting of 8 MB. Change this to at least 64 MB.
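The relevant directive sits in the php.ini read by the CLI interpreter that runs the poller (which may be a different file from the web server's php.ini; the exact path varies by distribution):

```ini
; raise PHP's per-script memory limit for the poller
memory_limit = 64M
```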
To check this, please run the following SQL from the mysql CLI (or phpMyAdmin or the like):
select count(*) from poller_output;
If the result is huge, you may get rid of the stale rows with:
truncate table poller_output;
As of the current SVN code for the upcoming cacti 0.9, measures have been taken on both issues (memory size, truncating poller_output).

RPM Installation?
Most rpm installations will set up the crontab entry now. If you've followed the installation instructions to the letter (which you should always do), you may now have two pollers running. That's not a good thing. Most rpm installations will set up cron in /etc/cron.d/cacti.
Now, please check all your crontabs, especially /etc/crontab and the crontabs of users root and cactiuser. Leave only one poller entry among all of them. Personally, I've chosen /etc/cron.d/cacti to avoid problems when updating rpms: most often, you won't remember this item when updating lots of rpms, so I felt more secure putting it there. And I've made some slight modifications, see:
*/5 * * * * cactiuser /usr/bin/php /var/www/html/cacti/poller.php > /var/local/log/poller.log 2>&1
This will produce a file /var/local/log/poller.log, which includes some additional information from each poller run, such as rrdtool errors. It occupies only a few bytes and is overwritten each time.
Please comment if these instructions are difficult to understand or to follow. If you find other aspects worth checking, I'd like to hear from you, too.
Changelog

Added new chapter 3 on MySQL debugging
Added new chapter on "Miscellaneous" stuff
Added new chapter on "RPM Installation" for crontab related issues
Added new chapter on Max OID get requests, courtesy of http://forums.cacti.net/viewtopic.php?t=17839