Setting up a Separate Monitoring Cluster

monitoring

(Geeshan S) #1

Hi All,

I'm trying to set up a separate cluster (kibanacluster) to monitor my primary Elasticsearch cluster (marveltest). Below are the ES, Marvel and Kibana versions I'm using. The ES version is fixed for the moment; I can upgrade or downgrade the other components if needed.

kibana-4.4.1
elasticsearch-2.2.1
marvel-agent-2.2.1

The monitoring cluster and Kibana are both running on the host 192.168.2.124, and the primary cluster is running on a separate host, 192.168.2.116.
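Kibana itself points at the local monitoring cluster; the relevant kibana.yml line looks roughly like this (assuming the monitoring cluster listens on the default HTTP port 9200):

# kibana.yml on 192.168.2.124 (sketch -- assumes default HTTP port 9200)
elasticsearch.url: "http://192.168.2.124:9200"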

192.168.2.116: elasticsearch.yml has the below entries

marvel.agent.exporter.es.hosts: ["192.168.2.124"]

marvel.enabled: true
marvel.agent.exporters:
  id1:
    type: http
    host: ["http://192.168.2.124:9200"]

Looking at the DEBUG logs on the monitoring cluster, I can see data coming in from the primary cluster, but it is getting "filtered" because the cluster name is different.

......
[2016-07-04 16:33:25,144][DEBUG][transport.netty ] [nodek] connected to node [{#zen_unicast_2#}{192.168.2.124}{192.168.2.124:9300}]
[2016-07-04 16:33:25,144][DEBUG][transport.netty ] [nodek] connected to node [{#zen_unicast_1#}{192.168.2.116}{192.168.2.116:9300}]
[2016-07-04 16:33:25,183][DEBUG][discovery.zen.ping.unicast] [nodek] [1] filtering out response from {node1}{Rmgg0Mw1TSmIpytqfnFgFQ}{192.168.2.116}{192.168.2.116:9300}, not same cluster_name [marveltest]
[2016-07-04 16:33:26,533][DEBUG][discovery.zen.ping.unicast] [nodek] [1] filtering out response from {node1}{Rmgg0Mw1TSmIpytqfnFgFQ}{192.168.2.116}{192.168.2.116:9300}, not same cluster_name [marveltest]
[2016-07-04 16:33:28,039][DEBUG][discovery.zen.ping.unicast] [nodek] [1] filtering out response from {node1}{Rmgg0Mw1TSmIpytqfnFgFQ}{192.168.2.116}{192.168.2.116:9300}, not same cluster_name [marveltest]
[2016-07-04 16:33:28,040][DEBUG][transport.netty ] [nodek] disconnecting from [{#zen_unicast_2#}{192.168.2.124}{192.168.2.124:9300}] due to explicit disconnect call
[2016-07-04 16:33:28,040][DEBUG][discovery.zen ] [nodek] filtered ping responses: (filter_client[true], filter_data[false])
--> ping_response{node [{nodek}{vQ-Iq8dKSz26AJUX77Ncfw}{192.168.2.124}{192.168.2.124:9300}], id[42], master [{nodek}{vQ-Iq8dKSz26AJUX77Ncfw}{192.168.2.124}{192.168.2.124:9300}], hasJoinedOnce [true], cluster_name[kibanacluster]}
[2016-07-04 16:33:28,053][DEBUG][transport.netty ] [nodek] disconnecting from [{#zen_unicast_1#}{192.168.2.116}{192.168.2.116:9300}] due to explicit disconnect call
[2016-07-04 16:33:28,057][DEBUG][transport.netty ] [nodek] connected to node [{nodek}{vQ-Iq8dKSz26AJUX77Ncfw}{192.168.2.124}{192.168.2.124:9300}]
[2016-07-04 16:33:28,117][DEBUG][discovery.zen.publish ] [nodek] received full cluster state version 32 with size 5589
.....

Thank you,
Geeshan


(Mark Walkom) #2

Please don't post pictures of text; they are difficult to read and some people may not even be able to see them.

Did you follow the docs here: https://www.elastic.co/guide/en/marvel/2.2/installing-marvel.html#monitoring-cluster ?


(Geeshan S) #3

Sorry about that, I have edited the post.
Yes, I followed the above link when setting it up.
Could this be a licence issue, since there are two clusters involved?


(Mark Walkom) #4

It's not a license issue.

Can you show the elasticsearch.yml file of each cluster (with comments and empty lines removed)?


(Geeshan S) #5

Hi Mark,

Below are the elasticsearch.yml files.

2.116 (Primary cluster)
cluster.name: marveltest
node.name: node1
bootstrap.mlockall: true
network.host: 192.168.2.116
discovery.zen.ping.unicast.hosts: ["192.168.2.124", "192.168.2.116"]
gateway.recover_after_nodes: 1
gateway.expected_nodes: 1
gateway.recover_after_time: 1m
path.repo: ["/usr/local/surf/ES_BACKUP"]
script.inline: true
script.indexed: true

2.124 (Monitoring cluster)

cluster.name: kibanacluster
node.name: nodek
node.master: false
node.data: false
network.host: 192.168.2.124
discovery.zen.ping.unicast.hosts: ["192.168.2.116", "192.168.2.124"]


(Mark Walkom) #6

I'd suggest you read https://www.elastic.co/guide/en/marvel/current/installing-marvel.html#monitoring-cluster, as I mentioned.

However, you shouldn't point each cluster's discovery at the other cluster's node. discovery.zen.ping.unicast.hosts should only list nodes belonging to the same cluster, which is why you're seeing those "not same cluster_name [marveltest]" messages; the monitoring data is shipped across by the HTTP exporter, not by discovery.
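Roughly, the relevant lines would look like this (a sketch based on the configs above, showing only the discovery and exporter settings):

2.116 (Primary cluster)
cluster.name: marveltest
# only this cluster's own nodes go in discovery
discovery.zen.ping.unicast.hosts: ["192.168.2.116"]
# monitoring data goes to the other cluster via the HTTP exporter
marvel.agent.exporters:
  id1:
    type: http
    host: ["http://192.168.2.124:9200"]

2.124 (Monitoring cluster)
cluster.name: kibanacluster
# only this cluster's own nodes go in discovery
discovery.zen.ping.unicast.hosts: ["192.168.2.124"]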

