We are running Cacti 1.2.7, and recently a number of our graphs that use the ucd/net memory template started showing negative amounts of "Memory Used".
The math of subtracting Memory Free from Total Memory seems right, but the result is now negative.
All servers use the same template, but it isn't happening on all of them.
It seems to only be happening on CentOS 7 servers.
It isn't specific to servers with kernel 1062.1.2 versus 1062.1.1.
At first it looked like only servers with less than 10 GB of memory were affected, but that isn't the case.
Deleting and re-creating the graphs doesn't make any difference either.
Some graphs using the ucd/net memory template starting showing negative amounts
Re: Some graphs using the ucd/net memory template starting showing negative amounts
Put the device into debug mode and monitor the output that the poller is seeing.
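If it helps to cross-check what the poller reports, here is a minimal sketch that pulls the UCD memory OIDs directly, assuming net-snmp's snmpget is installed and the device answers SNMP v2c with the "public" community (the hostname and community here are placeholders; substitute your own):

Code:
# Hypothetical cross-check of the raw UCD memory values the poller reads.
# Assumptions: net-snmp's snmpget is on PATH, SNMP v2c, community "public".
import subprocess

HOST = "your-device-hostname"   # placeholder; use the device's hostname or IP
COMMUNITY = "public"            # placeholder; use the community configured in Cacti
OIDS = {
    "memTotalSwap": ".1.3.6.1.4.1.2021.4.3.0",
    "memAvailSwap": ".1.3.6.1.4.1.2021.4.4.0",
    "memTotalReal": ".1.3.6.1.4.1.2021.4.5.0",
    "mem_free":     ".1.3.6.1.4.1.2021.4.6.0",
    "mem_buffers":  ".1.3.6.1.4.1.2021.4.14.0",
    "mem_cache":    ".1.3.6.1.4.1.2021.4.15.0",
}

for name, oid in OIDS.items():
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, oid],
        capture_output=True, text=True, check=True,
    )
    print(f"{name}: {out.stdout.strip()}")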
Official Cacti Developer
Cacti Resources:
Cacti Website (including releases)
Cacti Issues
Cacti Development Releases
Cacti Development Documentation
My resources:
How to submit Pull Requests
Development Wiki and How To's
Updated NetSNMP Memory template for Cacti 1.x
Cisco SFP template for Cacti 0.8.8
Re: Some graphs using the ucd/net memory template starting showing negative amounts
I don't see anything wrong?
Code:
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] DEBUG: In Poller, About to Start Polling of Device for Device ID 290
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] NOTE: Device[290] Updating Full System Information Table
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[2] DS[3408] SNMP: v2: bsi-util-btp, dsname: ssCpuRawNice, oid: .1.3.6.1.4.1.2021.11.51.0, value: 12475
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[5] Device has no information for recache.
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[4] DS[3409] SNMP: v2: bsi-util-btp, dsname: ssRawSwapIn, oid: .1.3.6.1.4.1.2021.11.62.0, value: 0
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[3] DS[3408] SNMP: v2: bsi-util-btp, dsname: ssCpuRawSoftIRQ, oid: .1.3.6.1.4.1.2021.11.61.0, value: 229217
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] DEBUG: Entering SNMP Ping
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[4] DS[3409] SNMP: v2: bsi-util-btp, dsname: ssRawSwapOut, oid: .1.3.6.1.4.1.2021.11.63.0, value: 0
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[3] DS[3408] SNMP: v2: bsi-util-btp, dsname: ssRawInterrupts, oid: .1.3.6.1.4.1.2021.11.59.0, value: 537060311
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[4] DS[3409] SNMP: v2: bsi-util-btp, dsname: ssIORawReceived, oid: .1.3.6.1.4.1.2021.11.58.0, value: 2408528
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[2] DS[3408] SNMP: v2: bsi-util-btp, dsname: ssCpuRawUser, oid: .1.3.6.1.4.1.2021.11.50.0, value: 8880952
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[3] DS[3408] SNMP: v2: bsi-util-btp, dsname: ssCpuRawInterrupt, oid: .1.3.6.1.4.1.2021.11.56.0, value: 0
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[4] DS[3410] SNMP: v2: bsi-util-btp, dsname: mem_buffers, oid: .1.3.6.1.4.1.2021.4.14.0, value: 9360
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[5] NOTE: There are '5' Polling Items for this Device
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[2] DS[3408] SNMP: v2: bsi-util-btp, dsname: ssCpuRawWait, oid: .1.3.6.1.4.1.2021.11.54.0, value: 42741
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[3] DS[3409] SNMP: v2: bsi-util-btp, dsname: ssIORawSent, oid: .1.3.6.1.4.1.2021.11.57.0, value: 8172140
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[4] DS[3411] SNMP: v2: bsi-util-btp, dsname: mem_cache, oid: .1.3.6.1.4.1.2021.4.15.0, value: 4734960
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[6] Device has no information for recache.
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[2] DS[3408] SNMP: v2: bsi-util-btp, dsname: ssRawContexts, oid: .1.3.6.1.4.1.2021.11.60.0, value: 631128081
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[2] DS[3408] SNMP: v2: bsi-util-btp, dsname: ssCpuRawKernel, oid: .1.3.6.1.4.1.2021.11.55.0, value: 0
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[5] DS[3412] SNMP: v2: bsi-util-btp, dsname: mem_free, oid: .1.3.6.1.4.1.2021.4.6.0, value: 6287512
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[5] DS[3413] SNMP: v2: bsi-util-btp, dsname: memAvailSwap, oid: .1.3.6.1.4.1.2021.4.4.0, value: 0
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[5] DS[3414] SNMP: v2: bsi-util-btp, dsname: memTotalReal, oid: .1.3.6.1.4.1.2021.4.5.0, value: 7990136
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[5] DS[3415] SNMP: v2: bsi-util-btp, dsname: memTotalSwap, oid: .1.3.6.1.4.1.2021.4.3.0, value: 0
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[6] NOTE: There are '2' Polling Items for this Device
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[5] DS[3417] SNMP: v2: bsi-util-btp, dsname: load_1min, oid: .1.3.6.1.4.1.2021.10.1.3.1, value: 0.36
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[2] Total Time: 0.049 Seconds
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[6] DS[3419] SNMP: v2: bsi-util-btp, dsname: load_15min, oid: .1.3.6.1.4.1.2021.10.1.3.3, value: 0.49
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[4] Total Time: 0.049 Seconds
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[3] Total Time: 0.05 Seconds
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[6] DS[3420] SNMP: v2: bsi-util-btp, dsname: load_5min, oid: .1.3.6.1.4.1.2021.10.1.3.2, value: 0.45
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[2] DEBUG: HOST COMPLETE: About to Exit Device Polling Thread Function
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] DEBUG: The Value of Active Threads is 14 for Device ID 290
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[3] DEBUG: HOST COMPLETE: About to Exit Device Polling Thread Function
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[5] Total Time: 0.038 Seconds
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[1] Device has no information for recache.
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] Device[290] HT[4] DEBUG: HOST COMPLETE: About to Exit Device Polling Thread Function
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] DEBUG: The Value of Active Threads is 14 for Device ID 290
2019/10/12 16:03:06 - SPINE: Poller[1] PID[12991] DEBUG: The Value of Active Threads is 12 for Device ID 290
Re: Some graphs using the ucd/net memory template starting showing negative amounts
Not sure, really; the output does look OK.
Official Cacti Developer
Cacti Resources:
Cacti Website (including releases)
Cacti Issues
Cacti Development Releases
Cacti Development Documentation
My resources:
How to submit Pull Requests
Development Wiki and How To's
Updated NetSNMP Memory template for Cacti 1.x
Cisco SFP template for Cacti 0.8.8
Re: Some graphs using the ucd/net memory template starting showing negative amounts
These are the CDEF and graph template, which haven't changed since we first used them.
Re: Some graphs using the ucd/net memory template starting showing negative amounts
My only thought is that if this uses the Gauge type, and the value overflows or drops, it will appear negative.
Official Cacti Developer
Cacti Resources:
Cacti Website (including releases)
Cacti Issues
Cacti Development Releases
Cacti Development Documentation
My resources:
How to submit Pull Requests
Development Wiki and How To's
Updated NetSNMP Memory template for Cacti 1.x
Cisco SFP template for Cacti 0.8.8
Re: Some graphs using the ucd/net memory template starting showing negative amounts
Actually, it looks like the math on the graphs was never totally correct: cached memory was being shown as a separate item and not subtracted from free memory. When we went to CentOS 7, we stopped using swap, which changed the calculations enough to make the issue obvious.
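For illustration only: assuming the "Memory Used" CDEF subtracts free, buffers, and cache from total (I don't know the exact CDEF text in this template), the values in the debug output above already produce a negative number, because free plus cache alone exceed memTotalReal on this box.

Code:
# Values in kB, taken from the spine debug output above.
mem_total   = 7990136   # memTotalReal  (.1.3.6.1.4.1.2021.4.5.0)
mem_free    = 6287512   # mem_free      (.1.3.6.1.4.1.2021.4.6.0)
mem_buffers = 9360      # mem_buffers   (.1.3.6.1.4.1.2021.4.14.0)
mem_cache   = 4734960   # mem_cache     (.1.3.6.1.4.1.2021.4.15.0)

# Hypothetical CDEF shape: used = total - free - buffers - cache
used = mem_total - mem_free - mem_buffers - mem_cache
print(used)  # -3041696, i.e. the graphed value goes negative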
Re: Some graphs using the ucd/net memory template starting showing negative amounts
Have you corrected this locally? What steps did you take?
Official Cacti Developer
Cacti Resources:
Cacti Website (including releases)
Cacti Issues
Cacti Development Releases
Cacti Development Documentation
My resources:
How to submit Pull Requests
Development Wiki and How To's
Updated NetSNMP Memory template for Cacti 1.x
Cisco SFP template for Cacti 0.8.8
Re: Some graphs using the ucd/net memory template starting showing negative amounts
I ended up having to rework the CDEFs and also reorganize the graph's items (which then required more CDEF work).
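For anyone landing here later, this is only a rough sketch of the kind of arithmetic such a rework can end up with, not the exact CDEFs from this thread: treat "used" as total minus free, buffers, and cache, and floor it at zero so the graph can never dip below the axis.

Code:
# Rough sketch only; assumes "used" should exclude buffers and cache and be
# clamped at zero. The real change lives in the CDEFs and graph items.
def split_memory(total, free, buffers, cache):
    """Split memTotalReal (kB) into the pieces a stacked memory graph shows."""
    used = max(total - free - buffers - cache, 0)
    return {"used": used, "buffers": buffers, "cache": cache, "free": free}

# With the polled values from the debug output earlier in the thread:
print(split_memory(7990136, 6287512, 9360, 4734960))
# {'used': 0, 'buffers': 9360, 'cache': 4734960, 'free': 6287512}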