[INFO] Remote Poller Setup (Cacti 1.1.6)

Piratos
Posts: 26
Joined: Thu May 03, 2007 6:22 am

[INFO] Remote Poller Setup (Cacti 1.1.6)

#1 Post by Piratos » Tue May 23, 2017 4:55 am

Unfortunately, there isn't much info on how to set up a Remote Poller. Hopefully, this will help.

Please replace the placeholders ({main_IP}, {spare_IP}, {main_PW}, {spare_PW}) with your own values.


Main Cacti
Cacti Version: 1.1.6 (upgraded from CactiEZ 0.7)
Spine Version: 1.1.6
IP-address: main_IP
MySQL User : cactiuser
MySQL Database: cacti
MySQL PW: main_PW

Spare Cacti
Cacti Version: 1.1.6 (upgraded from CactiEZ 0.7)
Spine Version: 1.1.6
IP-address: spare_IP
MySQL User: cactiuser
MySQL Database: cacti
MySQL PW: spare_PW


On Main Cacti

Comment out bind-address=127.0.0.1 in /etc/my.cnf.d/server.cnf so MySQL listens on all interfaces:

Code:

shell> vi /etc/my.cnf.d/server.cnf

#bind-address=127.0.0.1
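
The same edit can be scripted with sed if you prefer a non-interactive change. A minimal sketch, demonstrated here on a scratch copy so nothing is overwritten by accident (on the real server, point CNF at /etc/my.cnf.d/server.cnf after taking a backup):

```shell
# Demonstrate the edit on a scratch copy; set CNF=/etc/my.cnf.d/server.cnf
# on the real server (after backing the file up first).
CNF=$(mktemp)
printf 'bind-address=127.0.0.1\n' > "$CNF"

# Comment out any active bind-address line.
sed -i 's/^bind-address/#bind-address/' "$CNF"

grep '^#bind-address' "$CNF"   # prints: #bind-address=127.0.0.1
```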
Add user login from Spare Cacti:

Code:

shell> mysql -u root -p

mysql> GRANT ALL ON cacti.* TO 'cactiuser'@'{spare_IP}' IDENTIFIED BY '{main_PW}';
mysql> FLUSH PRIVILEGES;
mysql> exit;
Restart MySQL:

Code:

shell> service mysql restart
			Shutting down MySQL... SUCCESS!
			Starting MySQL.170519 16:34:25 mysqld_safe Logging to '/var/log/mysqld.log'.
			170519 16:34:25 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
			. SUCCESS!
Set up iptables (if needed):

Open port 3306 (MySQL) for Spare Cacti:

Code:

shell> iptables -I RH-Firewall-1-INPUT -p tcp -s {spare_IP} -m tcp --dport 3306 -j ACCEPT
Set up /usr/local/spine/etc/spine.conf:

Code:

shell> vi /usr/local/spine/etc/spine.conf

DB_Host			localhost
DB_Database		cacti
DB_User			cactiuser
DB_Pass			{main_PW}
DB_Port			3306

RDB_Host			{spare_IP}
RDB_Database		cacti
RDB_User			cactiuser
RDB_Pass			{spare_PW}
RDB_Port			3306

On Spare Cacti

Test MySQL from spare to main:

Code:

shell> mysql -u cactiuser --password={main_PW} -h {main_IP}
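
If the login fails, it helps to first check whether port 3306 is reachable at all before digging into grants. A small bash-only sketch using bash's /dev/tcp feature (substitute {main_IP} for 127.0.0.1 when testing from the spare):

```shell
# Prints "open" if a TCP connection to host:port succeeds within 2 seconds,
# otherwise "closed" (refused, filtered, or timed out).
check_port() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo open
    else
        echo closed
    fi
}

check_port 127.0.0.1 3306   # on the spare, use {main_IP} instead
```

"closed" here usually means either the iptables rule is missing or bind-address is still active.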
Comment out bind-address=127.0.0.1 in /etc/my.cnf.d/server.cnf so MySQL listens on all interfaces:

Code:

shell> vi /etc/my.cnf.d/server.cnf

#bind-address=127.0.0.1
Add user login from Main Cacti:

Code:

shell> mysql -u root -p

mysql> GRANT ALL ON cacti.* TO 'cactiuser'@'{main_IP}' IDENTIFIED BY '{spare_PW}';
mysql> FLUSH PRIVILEGES;
mysql> exit;
Restart MySQL:

Code:

shell> service mysql restart
			Shutting down MySQL... SUCCESS!
			Starting MySQL.170519 16:34:25 mysqld_safe Logging to '/var/log/mysqld.log'.
			170519 16:34:25 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
			. SUCCESS!
Set up iptables (if needed):

Open port 3306 (MySQL) for Main Cacti:

Code:

shell> iptables -I RH-Firewall-1-INPUT -p tcp -s {main_IP} -m tcp --dport 3306 -j ACCEPT
Set up /usr/local/spine/etc/spine.conf:

Code:

shell> vi /usr/local/spine/etc/spine.conf

DB_Host			localhost
DB_Database		cacti
DB_User			cactiuser
DB_Pass			{spare_PW}
DB_Port			3306

RDB_Host			{main_IP}
RDB_Database		cacti
RDB_User			cactiuser
RDB_Pass			{main_PW}
RDB_Port			3306
Set up /var/www/html/include/config.php:

Code:

shell> vi /var/www/html/include/config.php

$rdatabase_type     = 'mysql';
$rdatabase_default  = 'cacti';
$rdatabase_hostname = '{main_IP}';
$rdatabase_username = 'cactiuser';
$rdatabase_password = '{main_PW}';
$rdatabase_port     = '3306';
$rdatabase_ssl      = false;

#$poller_id = 1;
$poller_id = 2;

On Main Cacti

Test MySQL from main to spare:

Code:

shell> mysql -u cactiuser --password={spare_PW} -h {spare_IP}

Browser interface

Data Collection >> Data Collectors

A collector may have no name and therefore no link to edit it. In that case it can still be reached at:

http://{main_IP}/pollers.php?action=edit&id={poller_id}
Last edited by Piratos on Wed Jun 21, 2017 4:11 am, edited 2 times in total.

tertius
Cacti User
Posts: 71
Joined: Wed Mar 01, 2017 2:34 pm

Re: [INFO] Remote Poller Setup (Cacti 1.1.6)

#2 Post by tertius » Tue May 23, 2017 8:58 am

Small typo: In the box after "Set up /usr/local/spine/etc/spine.conf:" you wrote "shell> vi /etc/my.cnf.d/server.cnf", but it should be "shell> vi /usr/local/spine/etc/spine.conf".
It happens twice.

Piratos
Posts: 26
Joined: Thu May 03, 2007 6:22 am

Re: [INFO] Remote Poller Setup (Cacti 1.1.6)

#3 Post by Piratos » Tue May 23, 2017 10:50 am

Thanks tertius. I have edited the post.

Piratos
Posts: 26
Joined: Thu May 03, 2007 6:22 am

Re: [INFO] Remote Poller Setup (Cacti 1.1.6)

#4 Post by Piratos » Wed Jun 21, 2017 4:10 am

I made two typing errors and wrote "uncomment" while I meant "comment".

Corrected.

Piratos
Posts: 26
Joined: Thu May 03, 2007 6:22 am

Re: [INFO] Remote Poller Setup (Cacti 1.1.6)

#5 Post by Piratos » Thu Jun 22, 2017 5:13 am

Dear stormonts,

Please note that you have to put a # in front of bind-address=127.0.0.1 so the bind-address setting is ignored:

Code:

#bind-address=127.0.0.1
To be honest, I gave up on the new Cacti 1.x version (for now). I was hoping the Remote Poller setup would give me two redundant Cacti systems with two cloned databases, sharing the poller process between them: in normal operation each poller would handle half of the polling requests, and if one Cacti system went down, the other would take over all of them. I do not believe that is the case here.

Currently I run a setup with two Cacti 0.8.8a systems, where one system has the master MySQL database and the other runs a slave MySQL database. Each system runs all poller tasks. I was hoping that the Remote Poller setup would do something similar, but split the poller process between both systems (so that not all devices are polled by both Cacti systems).

cigamit
Developer
Posts: 2780
Joined: Thu Apr 07, 2005 3:29 pm
Location: B/CS Texas
Contact:

Re: [INFO] Remote Poller Setup (Cacti 1.1.6)

#6 Post by cigamit » Thu Jun 22, 2017 11:12 pm

The current Remote Poller isn't built for HA purposes, but for scaling purposes. While we may incorporate HA functionality in the future, it's not currently high on my priority list.

Just thinking about it for a minute: you could easily build an HA setup on top of the current Remote Poller by adding a few things. You would have to rsync the RRDs over to the 2nd host (or use a shared storage backend for them), and then set up a small plugin that checks the heartbeat of the Main Poller; if it isn't responding, you swap the config.php to make the 2nd host the Main Poller (or instead just run a single DB query that assigns all the devices to the 2nd Poller).
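
That heartbeat idea can be sketched as a tiny watchdog script run from cron on the spare. Purely illustrative: the address, the port, and the promote action below are placeholders, not anything Cacti ships.

```shell
#!/bin/bash
# Hypothetical watchdog for the spare poller. If the main poller stops
# answering, take over. All names below are placeholders to adapt.

MAIN_IP="${MAIN_IP:-192.0.2.10}"   # replace with your {main_IP}

check_main() {
    # Heartbeat: can we open a TCP connection to the main poller's web port?
    timeout 3 bash -c "exec 3<>/dev/tcp/$MAIN_IP/80" 2>/dev/null
}

promote_spare() {
    # Placeholder for the real failover action, e.g. swapping config.php
    # or a single DB query that reassigns all devices to this poller.
    echo "main poller down - promoting spare"
}

if check_main; then
    echo "main poller alive - nothing to do"
else
    promote_spare
fi
```

A real version would want a failure threshold (several missed heartbeats before promoting) to avoid flapping on a brief network blip.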
