Thread: [RESOLVED] Realm configuration problem

  1. #1
    Junior Member
    Join Date
    May 2014
    Location
    Bordeaux - France
    Posts
    17

    [RESOLVED] Realm configuration problem

    Hi,

    I would like to use realms to monitor different clients. I configured the realms and it worked for a few seconds (I could see them in the WebUI), but now I can't see anything.

    The checks seem to work; my slave scheduler/poller receives commands.

    How can I get my realms back in my WebUI?
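    (The realm definitions themselves are not pasted here; a minimal realms.cfg matching the realm names used in the daemon definitions below would look roughly like this, with the member layout assumed:)

    define realm {
    realm_name All ; top-level realm
    realm_members sys ; distant realm served by the slave daemons
    default 1 ; hosts without an explicit realm land here
    }

    define realm {
    realm_name sys
    }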

    Arbiter conf:
    root@shinken-master /etc/shinken # cat arbiters/arbiter-master.cfg
    #===============================================================================
    # ARBITER
    #===============================================================================
    # Description: The Arbiter is responsible for:
    # - Loading, manipulating and dispatching the configuration
    # - Validating the health of all other Shinken daemons
    # - Issuing global directives to Shinken daemons (kill, activate-spare, etc.)
    # http://www.shinken-monitoring.org/wi...bjects/arbiter
    #===============================================================================
    # IMPORTANT: If you use several arbiters you MUST set the host_name on each
    # server to its real DNS name ('hostname' command).
    #===============================================================================
    define arbiter {
    arbiter_name arbiter-master
    #host_name node1 ; CHANGE THIS if you have several Arbiters
    address 192.168.11.181 ; DNS name or IP
    port 7770
    spare 0 ; 1 = is a spare, 0 = is not a spare

    ## Interesting modules:
    # - CommandFile = Open the named pipe nagios.cmd
    # - Mongodb = Load hosts from a mongodb database
    # - PickleRetentionArbiter = Save data before exiting
    # - NSCA = NSCA server
    # - VMWare_auto_linking = Look up a vSphere server for dependencies
    # - GLPI = Import hosts from GLPI
    # - import-glpi = Import configuration from GLPI (needs the monitoring plugin for GLPI on the server side)
    # - TSCA = TSCA server
    # - MySQLImport = Load configuration from a MySQL database
    # - WS_Arbiter = WebService for pushing results to the arbiter
    # - Collectd = Receive collectd perfdata
    # - SnmpBooster = Snmp bulk polling module, configuration linker
    # - Landscape = Import hosts from Landscape (Ubuntu/Canonical management tool)
    # - AWS = Import hosts from Amazon AWS (here EC2)
    # - IpTag = Tag a host based on its IP range
    # - FileTag = Tag a host if it is listed in a flat file
    # - CSVTag = Tag a host from the content of a CSV file

    modules CommandFile
    #modules CommandFile, Mongodb, NSCA, VMWare_auto_linking, WS_Arbiter, Collectd, Landscape, SnmpBooster, AWS

    # Enable https or not
    use_ssl 0
    # Enable certificate/hostname checks; this avoids man-in-the-middle attacks
    hard_ssl_name_check 0

    ## Uncomment these lines in a HA architecture so the master and slaves know
    ## how long they may wait for each other.
    #timeout 3 ; Ping timeout
    #data_timeout 120 ; Data send timeout
    #max_check_attempts 3 ; If ping fails N or more, then the node is dead
    #check_interval 60 ; Ping node every N seconds
    }

    the scheduler conf:
    root@shinken-master /etc/shinken # cat schedulers/scheduler-master.cfg
    #===============================================================================
    # SCHEDULER (S1_Scheduler)
    #===============================================================================
    # The scheduler is a "Host manager". It gets the hosts and their services,
    # schedules the checks and transmits them to the pollers.
    # Description: The scheduler is responsible for:
    # - Creating the dependency tree
    # - Scheduling checks
    # - Calculating states
    # - Requesting actions from a reactionner
    # - Buffering and forwarding results to its associated broker
    # http://www.shinken-monitoring.org/wi...ects/scheduler
    #===============================================================================
    define scheduler {
    scheduler_name scheduler-master ; Just the name
    address 192.168.11.181 ; IP or DNS address of the daemon
    port 7768 ; TCP port of the daemon
    ## Optional
    spare 0 ; 1 = is a spare, 0 = is not a spare
    weight 1 ; Some schedulers can manage more hosts than others
    timeout 3 ; Ping timeout
    data_timeout 120 ; Data send timeout
    max_check_attempts 3 ; If ping fails N or more, then the node is dead
    check_interval 60 ; Ping node every N seconds

    ## Interesting modules that can be used:
    # - PickleRetention = Save data before exiting in flat-file
    # - MemcacheRetention = Same, but in a MemCache server
    # - RedisRetention = Same, but in a Redis server
    # - MongodbRetention = Same, but in a MongoDB server
    # - NagiosRetention = Read retention info from a Nagios retention file
    # (read-only; it does not save)
    # - SnmpBooster = Snmp bulk polling module
    #modules PickleRetention
    modules

    ## Advanced Features
    # Realm is for multi-datacenters
    realm All

    # Skip initial broks creation. Boot fast, but some broker modules won't
    # work with it!
    skip_initial_broks 0

    # In NATted environments, declare each satellite's ip[:port] as seen by
    # *this* scheduler (if the port is not set, the port declared by the
    # satellite itself is used)
    #satellitemap poller-1=1.2.3.4:1772, reactionner-1=1.2.3.5:1773, ...

    # Enable https or not
    use_ssl 0
    # Enable certificate/hostname checks; this avoids man-in-the-middle attacks
    hard_ssl_name_check 0
    }


    define scheduler{

    scheduler_name scheduler-slave
    address 192.168.11.120
    port 7768
    realm sys
    spare 0
    }
    the poller conf:

    root@shinken-master /etc/shinken # cat pollers/poller-master.cfg
    #===============================================================================
    # POLLER (S1_Poller)
    #===============================================================================
    # Description: The poller is responsible for:
    # - Active data acquisition
    # - Local passive data acquisition
    # http://www.shinken-monitoring.org/wi...objects/poller
    #===============================================================================
    define poller {
    poller_name poller-master
    address 192.168.11.181
    port 7771

    ## Optional
    spare 0 ; 1 = is a spare, 0 = is not a spare
    manage_sub_realms 0 ; Does it take jobs from schedulers of sub-Realms?
    min_workers 0 ; Starts with N processes (0 = 1 per CPU)
    max_workers 0 ; No more than N processes (0 = 1 per CPU)
    processes_by_worker 256 ; Each worker manages N checks
    polling_interval 1 ; Get jobs from schedulers each N seconds
    timeout 3 ; Ping timeout
    data_timeout 120 ; Data send timeout
    max_check_attempts 3 ; If ping fails N or more, then the node is dead
    check_interval 60 ; Ping node every N seconds

    ## Interesting modules that can be used:
    # - NrpeBooster = Replaces the check_nrpe binary, which improves
    # performance when there are lots of NRPE calls.
    # - CommandFile = Allow the poller to read a nagios.cmd named pipe.
    # This permits the use of distributed check_mk checks
    # should you desire it.
    # - SnmpBooster = Snmp bulk polling module
    modules

    ## Advanced Features
    #passive 0 ; For DMZ monitoring, set to 1 so the connections
    ; will be from scheduler -> poller.

    # Poller tags are the tags this poller will manage. Use None as the tag
    # name to manage untagged checks.
    #poller_tags None

    # Enable https or not
    use_ssl 0
    # Enable certificate/hostname checks; this avoids man-in-the-middle attacks
    hard_ssl_name_check 0


    realm All
    }
    #Pollers launch checks
    define poller{
    poller_name poller-slave
    address 192.168.11.120
    port 7771
    realm sys
    }
    and the broker conf:

    root@shinken-master /etc/shinken # cat brokers/broker-master.cfg
    #===============================================================================
    # BROKER (S1_Broker)
    #===============================================================================
    # Description: The broker is responsible for:
    # - Exporting centralized logs of all Shinken daemon processes
    # - Exporting status data
    # - Exporting performance data
    # - Exposing Shinken APIs:
    # - Status data
    # - Performance data
    # - Configuration data
    # - Command interface
    # http://www.shinken-monitoring.org/wi...objects/broker
    #===============================================================================
    define broker {
    broker_name broker-master
    address 192.168.11.181
    port 7772
    spare 0

    ## Optional
    manage_arbiters 1 ; Take data from Arbiter. There should be only one
    ; broker for the arbiter.
    manage_sub_realms 1 ; Does it take jobs from schedulers of sub-Realms?
    timeout 3 ; Ping timeout
    data_timeout 120 ; Data send timeout
    max_check_attempts 3 ; If ping fails N or more, then the node is dead
    check_interval 60 ; Ping node every N seconds

    ## Modules
    # Default: None
    # Interesting modules that can be used:
    # - simple-log = put all logs into one file
    # - livestatus = livestatus listener
    # - ToNdodb_Mysql = NDO DB support
    # - npcdmod = Use the PNP addon
    # - graphite = Use a Graphite time series DB for perfdata
    # - webui = Shinken Web interface
    # - glpidb = Save data in GLPI MySQL database
    modules webui

    # Enable https or not
    use_ssl 0
    # Enable certificate/hostname checks; this avoids man-in-the-middle attacks
    hard_ssl_name_check 0

    ## Advanced
    realm All
    }

    define broker {
    broker_name broker-slave
    address 192.168.11.120
    port 7772
    spare 0
    realm sys
    }

    On my shinken-slave I have this in the log:

    root@shinken-slave /etc/shinken/hosts # cat /var/log/shinken/schedulerd.log
    2014-06-04 15:42:49,049 [1401889369] Warning : Received a SIGNAL 15
    2014-06-04 15:48:21,779 [1401889701] Warning : Printing stored debug messages prior to our daemonization
    2014-06-04 15:49:52,912 [1401889792] HOST ALERT: shinken-slave;DOWN;SOFT;1;[Errno 2] No such file or directory
    2014-06-04 15:54:53,283 [1401890093] HOST ALERT: shinken-slave;DOWN;HARD;2;[Errno 2] No such file or directory
    2014-06-04 15:54:53,285 [1401890093] HOST NOTIFICATION: admin;shinken-slave;DOWN;notify-host-by-email;[Errno 2] No such file or directory
    2014-06-04 16:00:47,735 [1401890447] SERVICE ALERT: shinken-slave;SSH Connexion;CRITICAL;SOFT;1;/bin/sh: 1: /var/lib/shinken/libexec/check_ssh_connexion.py: not found
    2014-06-04 16:00:49,739 [1401890449] HOST ALERT: shinken-slave;DOWN;SOFT;1;[Errno 2] No such file or directory
    2014-06-04 16:01:10,768 [1401890470] SERVICE ALERT: shinken-slave;CPU Stats;CRITICAL;SOFT;1;/bin/sh: 1: /var/lib/shinken/libexec/check_cpu_stats_by_ssh.py: not found
    2014-06-04 16:01:10,768 [1401890470] SERVICE ALERT: shinken-slave;Reboot;CRITICAL;SOFT;1;/bin/sh: 1: /var/lib/shinken/libexec/check_uptime_by_ssh.py: not found
    2014-06-04 16:01:12,773 [1401890472] HOST ALERT: shinken-slave;DOWN;SOFT;1;[Errno 2] No such file or directory
    2014-06-04 16:01:12,773 [1401890472] HOST ALERT: shinken-slave;DOWN;SOFT;1;[Errno 2] No such file or directory
    2014-06-04 16:02:10,023 [1401890530] Warning : Received a SIGNAL 15
    2014-06-04 16:02:10,648 [1401890530] Warning : Printing stored debug messages prior to our daemonization
    2014-06-04 16:03:28,330 [1401890608] Warning : Received a SIGNAL 15
    2014-06-04 16:03:55,245 [1401890635] Warning : Printing stored debug messages prior to our daemonization
    2014-06-04 16:04:26,295 [1401890666] HOST ALERT: shinken-slave;DOWN;SOFT;1;[Errno 2] No such file or directory
    2014-06-04 16:09:26,672 [1401890966] HOST ALERT: shinken-slave;DOWN;HARD;2;[Errno 2] No such file or directory
    2014-06-04 16:09:26,674 [1401890966] HOST NOTIFICATION: admin;shinken-slave;DOWN;notify-host-by-email;[Errno 2] No such file or directory
    2014-06-04 16:13:54,015 [1401891234] HOST ALERT: shinken-slave;DOWN;SOFT;1;[Errno 2] No such file or directory
    It looks like it works :/
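    (Side note: the "not found" errors above come from plugin scripts missing on the slave rather than from the realm wiring itself. Assuming the libexec path shown in the log, a quick check on the slave would be:)

    root@shinken-slave ~ # ls -l /var/lib/shinken/libexec/ | grep -i ssh
    # empty output means the check_*_by_ssh.py plugins still need to be installed on this poller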

    Thank you for your time and your help

  2. #2
    Junior Member
    Join Date
    May 2014
    Location
    Bordeaux - France
    Posts
    17

    Re: Realm configuration problem

    I removed the broker-slave definition from my broker-master.cfg and now it works fine.

    Just a question: is my configuration correct for using realms?

  3. #3
    Shinken project leader
    Join Date
    May 2011
    Location
    Bordeaux (France)
    Posts
    2,131

    Re: Realm configuration problem

    You want to use a multi-level broker setup, and for this you must set broker_complete_links=1 on all your realms.
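    For example (a sketch, to be adapted to your own realm names):

    define realm {
    realm_name All
    realm_members sys
    default 1
    broker_complete_links 1 ; required for a multi-level broker setup
    }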
    No direct support by personal message. Please open a thread so everyone can see the solution

  4. #4
    Junior Member
    Join Date
    May 2014
    Location
    Bordeaux - France
    Posts
    17

    Re: Realm configuration problem

    Thank you for your help.

    I have another small question

    I would like to have only one WebUI (on the Shinken master). Today, I have just the scheduler and the poller on my shinken-slave, and only one WebUI.
    It works fine.

    But I would like to know: is it possible to have a shinken-slave that is totally independent, in case my master goes down?
    Because currently, all my host configs are on the master server.

  5. #5
    Shinken project leader
    Join Date
    May 2011
    Location
    Bordeaux (France)
    Posts
    2,131

    Re: Realm configuration problem

    Yes, just use a spare, but it will take the lead if the master goes down.
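    For example, a spare arbiter on the slave box would look roughly like this (a sketch; as noted in arbiter-master.cfg, host_name must match the real hostname of each server):

    define arbiter {
    arbiter_name arbiter-slave
    host_name shinken-slave ; output of the 'hostname' command on that server
    address 192.168.11.120
    port 7770
    spare 1 ; takes the lead only if the master dies
    }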
    No direct support by personal message. Please open a thread so everyone can see the solution

  6. #6
    Junior Member
    Join Date
    May 2014
    Location
    Bordeaux - France
    Posts
    17

    Re: Realm configuration problem

    Hmm, OK, thank you for the information.

    But what is the best way when I have, for example, 10 distant sites:

    - Use one master with some satellites (poller/scheduler) on the different sites? So all my host configs are stored on my master.
    - Or use one master with 10 spares on the different sites?

    I would like to monitor all my distant sites from one place, but I would also like independent Shinken instances in case of failure. I hope you can help me; I tried to search on the web, but I didn't find my answer ^^

  7. #7
    Shinken project leader
    Join Date
    May 2011
    Location
    Bordeaux (France)
    Posts
    2,131

    Re: Realm configuration problem

    If you want to handle a lost connection, you need to put at least a realm in each distant site, i.e. a scheduler + poller. Then, as there is only one active arbiter, you won't be able to manage the loss of a distant scheduler while the link is down. So a central spare is enough.
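    Concretely, with several distant sites the realm tree would look along these lines (site names are hypothetical):

    define realm {
    realm_name All
    realm_members site1, site2 ; one member realm per distant site
    default 1
    }

    define realm {
    realm_name site1 ; gets its own on-site scheduler + poller declared with 'realm site1'
    }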
    No direct support by personal message. Please open a thread so everyone can see the solution

  8. #8
    Junior Member
    Join Date
    Aug 2014
    Posts
    19

    Re: Realm configuration problem

    Hi Strom,

    I would like to set up a distributed architecture with realms like yours, but I have some problems; can you help me please?

    Like you, I configured a main realm (ex: World) and a member realm: Test.

    I also configured a scheduler-slave and a poller-slave on my central Shinken, changing realm All to realm Test.
    I added a host config including realm Test, and I've done the same thing with the service config (see the sketch below).
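    (A sketch of the kind of host definition meant here; the host name and address are hypothetical:)

    define host {
    use generic-host
    host_name srv-test-01 ; hypothetical
    address 10.0.0.10 ; hypothetical
    realm Test ; binds the host to the Test realm's scheduler and poller
    }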

    When I restarted the Shinken service, I saw nothing in my scheduler-slave log; I see all the checks on my central scheduler, and I don't know why.

    It seems that all checks are done from the central daemons and not from the scheduler-slave and poller-slave.

    Thanks in advance for your help


