 Post subject: Cacti graph problem
PostPosted: Sun Dec 11, 2011 4:37 am 

Joined: Sun Dec 11, 2011 4:20 am
Posts: 2
Good morning,
I have a problem with some traffic graphs on a Cisco 7200 router and a Cisco Catalyst 6500.
If I run a "sh interface" command, the traffic counters all look normal, but the Cacti graph shows me this:

Attachment: graph_image.php.png (the Cacti traffic graph)


as if there were no more traffic coming in on the interface.
----------------------------
RRDTool Command:

/usr/bin/rrdtool graph - \
--imgformat=PNG \
--start=-86400 \
--end=-300 \
--title='Router TNNet - Traffic - Verso Router TNNet' \
--rigid \
--base=1000 \
--height=120 \
--width=500 \
--alt-autoscale-max \
--lower-limit=0 \
--vertical-label='bits per second' \
--slope-mode \
--font TITLE:10: \
--font AXIS:7: \
--font LEGEND:8: \
--font UNIT:7: \
DEF:a="/usr/share/cacti/rra/router_tnnet_traffic_in_8.rrd":traffic_in:AVERAGE \
DEF:b="/usr/share/cacti/rra/router_tnnet_traffic_in_8.rrd":traffic_out:AVERAGE \
CDEF:cdefa=a,8,* \
CDEF:cdeff=b,8,* \
AREA:cdefa#00CF00FF:"Inbound" \
GPRINT:cdefa:LAST:" Current\:%8.2lf %s" \
GPRINT:cdefa:AVERAGE:"Average\:%8.2lf %s" \
GPRINT:cdefa:MAX:"Maximum\:%8.2lf %s\n" \
COMMENT:"Total In\: 690.36 GB\n" \
LINE1:cdeff#002A97FF:"Outbound" \
GPRINT:cdeff:LAST:"Current\:%8.2lf %s" \
GPRINT:cdeff:AVERAGE:"Average\:%8.2lf %s" \
GPRINT:cdeff:MAX:"Maximum\:%8.2lf %s\n" \
COMMENT:"Total Out\: 215.42 GB\n"

RRDTool Says:

OK
------------------------------------------------------
# rrdtool info router_tnnet_traffic_in_8.rrd
filename = "router_tnnet_traffic_in_8.rrd"
rrd_version = "0003"
step = 300
last_update = 1323596102
ds[traffic_in].type = "COUNTER"
ds[traffic_in].minimal_heartbeat = 600
ds[traffic_in].min = 0,0000000000e+00
ds[traffic_in].max = 1,0000000000e+09
ds[traffic_in].last_ds = "4111776361"
ds[traffic_in].value = 1,9658692288e+07
ds[traffic_in].unknown_sec = 0
ds[traffic_out].type = "COUNTER"
ds[traffic_out].minimal_heartbeat = 600
ds[traffic_out].min = 0,0000000000e+00
ds[traffic_out].max = 1,0000000000e+09
ds[traffic_out].last_ds = "1605492872"
ds[traffic_out].value = 3,9852892508e+06
ds[traffic_out].unknown_sec = 0
rra[0].cf = "AVERAGE"
rra[0].rows = 600
rra[0].cur_row = 59
rra[0].pdp_per_row = 1
rra[0].xff = 5,0000000000e-01
rra[0].cdp_prep[0].value = NaN
rra[0].cdp_prep[0].unknown_datapoints = 0
rra[0].cdp_prep[1].value = NaN
rra[0].cdp_prep[1].unknown_datapoints = 0
rra[1].cf = "AVERAGE"
rra[1].rows = 700
rra[1].cur_row = 168
rra[1].pdp_per_row = 6
rra[1].xff = 5,0000000000e-01
rra[1].cdp_prep[0].value = 9,8253823475e+06
rra[1].cdp_prep[0].unknown_datapoints = 0
rra[1].cdp_prep[1].value = 1,9904230903e+06
rra[1].cdp_prep[1].unknown_datapoints = 0
rra[2].cf = "AVERAGE"
rra[2].rows = 775
rra[2].cur_row = 176
rra[2].pdp_per_row = 24
rra[2].xff = 5,0000000000e-01
rra[2].cdp_prep[0].value = 1,5217570698e+08
rra[2].cdp_prep[0].unknown_datapoints = 0
rra[2].cdp_prep[1].value = 3,3124389590e+07
rra[2].cdp_prep[1].unknown_datapoints = 0
rra[3].cf = "AVERAGE"
rra[3].rows = 797
rra[3].cur_row = 740
rra[3].pdp_per_row = 288
rra[3].xff = 5,0000000000e-01
rra[3].cdp_prep[0].value = 5,4036020200e+08
rra[3].cdp_prep[0].unknown_datapoints = 0
rra[3].cdp_prep[1].value = 1,8189606515e+08
rra[3].cdp_prep[1].unknown_datapoints = 0
rra[4].cf = "MAX"
rra[4].rows = 600
rra[4].cur_row = 488
rra[4].pdp_per_row = 1
rra[4].xff = 5,0000000000e-01
rra[4].cdp_prep[0].value = NaN
rra[4].cdp_prep[0].unknown_datapoints = 0
rra[4].cdp_prep[1].value = NaN
rra[4].cdp_prep[1].unknown_datapoints = 0
rra[5].cf = "MAX"
rra[5].rows = 700
rra[5].cur_row = 512
rra[5].pdp_per_row = 6
rra[5].xff = 5,0000000000e-01
rra[5].cdp_prep[0].value = 9,8253823475e+06
rra[5].cdp_prep[0].unknown_datapoints = 0
rra[5].cdp_prep[1].value = 1,9904230903e+06
rra[5].cdp_prep[1].unknown_datapoints = 0
rra[6].cf = "MAX"
rra[6].rows = 775
rra[6].cur_row = 346
rra[6].pdp_per_row = 24
rra[6].xff = 5,0000000000e-01
rra[6].cdp_prep[0].value = 9,8253823475e+06
rra[6].cdp_prep[0].unknown_datapoints = 0
rra[6].cdp_prep[1].value = 2,1118433092e+06
rra[6].cdp_prep[1].unknown_datapoints = 0
rra[7].cf = "MAX"
rra[7].rows = 797
rra[7].cur_row = 630
rra[7].pdp_per_row = 288
rra[7].xff = 5,0000000000e-01
rra[7].cdp_prep[0].value = 9,8253823475e+06
rra[7].cdp_prep[0].unknown_datapoints = 0
rra[7].cdp_prep[1].value = 2,6925616890e+06
rra[7].cdp_prep[1].unknown_datapoints = 0
-----------------------------------------------------------------------
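For reference, two quick ways to cross-check this kind of symptom (the RRD path is taken from the graph command above; <community>, <router> and <ifIndex> are placeholders, not values from this post):

# 1) What did the RRD actually store over the last hour? Long runs of nan
#    or values stuck near zero point at the data source, not at the graph.
rrdtool fetch /usr/share/cacti/rra/router_tnnet_traffic_in_8.rrd AVERAGE -s -3600

# 2) Compare the 32-bit and 64-bit octet counters on the router directly.
snmpget -v2c -c <community> <router> IF-MIB::ifInOctets.<ifIndex> IF-MIB::ifHCInOctets.<ifIndex>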

Any ideas? I'm using Cacti on CentOS 6 Linux (32-bit) with SNMP v2.

Thanks.


 Post subject: Re: Cacti graph problem
PostPosted: Sun Dec 11, 2011 10:15 am 
Developer

Joined: Thu Dec 02, 2004 2:46 am
Posts: 22376
Location: Muenster, Germany
That looks very much like a COUNTER overflow. With a 5-minute polling interval, this will occur whenever traffic exceeds about 114 Mbps (BTW: there is a sticky thread on this). So please switch to 64-bit COUNTER graphs.
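The 114 Mbps figure is just the 32-bit wrap arithmetic; a quick way to reproduce it on the shell:

# A 32-bit octet counter wraps at 2^32 bytes. If it wraps more than once
# between two 300-second polls, rrdtool can no longer reconstruct the rate,
# so the highest rate a single wrap can still represent is:
echo "scale=1; 2^32 * 8 / 300 / 1000000" | bc -l
# -> 114.5   (Mbps)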
R.



 
 Post subject: Re: Cacti graph problem
PostPosted: Mon Mar 04, 2019 6:09 am 

Joined: Thu Feb 28, 2019 6:58 pm
Posts: 9
Hello,
thanks for the reply.
30 minutes ago I deleted and re-added the device and created a 64-bit graph, but it still isn't working. I still have an empty graph with a -nan value.
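For the -nan case, two quick checks that may help (device name, community string and RRD path below are placeholders, not values from this thread):

# 1) Confirm the device actually answers the 64-bit counters; if this walk
#    returns nothing, the poller stores unknowns and the graph stays empty.
snmpwalk -v2c -c <community> <device> IF-MIB::ifHCInOctets

# 2) Check whether any values have reached the newly created RRD at all.
rrdtool fetch <path-to-new-rrd>.rrd AVERAGE -s -1800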

