One Kibana for multiple servers

Hi,

I installed ELK on server1 and Elasticsearch/Logstash on server2.
Now I want to visualise the data from server2 in Kibana on server1.

I edited elasticsearch.yml (server1):
cluster.name: "elkcluster"
node.master: false
node.data: false

elasticsearch.yml (server2):
cluster.name: "elkcluster"

But I don't get logs from server2.

Can you help me?

I would suggest confirming that your ingest pipeline is working correctly by searching directly against the index on server2. If you can see the data there, then try the same search against server1's Elasticsearch.
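For example, something like this (hostnames are placeholders, and this assumes Logstash is writing to the default logstash-* indices on the default HTTP port 9200):

curl -s 'http://server2:9200/_cat/indices?v'
curl -s 'http://server2:9200/logstash-*/_search?size=1&pretty'
curl -s 'http://server1:9200/logstash-*/_search?size=1&pretty'

If the first two calls return documents but the last one comes back empty or errors out, the data is only on server2 and the two nodes have not actually joined the same cluster.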

First I got an error:
not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again

Second:
I can't see the other node in Kibana, and I can't create an index pattern using the cluster or node name.

Can you share your elasticsearch.yml file (being sure to redact anything sensitive in it such as passwords)? It sounds like you have a cluster without a master node.
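In the meantime you can check what each node thinks the cluster looks like, for example (run on each server, assuming the default HTTP port 9200):

curl -s 'http://localhost:9200/_cat/nodes?v'
curl -s 'http://localhost:9200/_cluster/health?pretty'

If those calls fail with a master_not_discovered error, or only ever list the local node, the two instances have not formed a cluster.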

master elasticsearch.yml:

cluster.name: "elkcluster"
node.name: "master"
node.master: false
node.data: false
path:
  logs: /data/elk/log
  data: /data/elk/data
http.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"

node2 elasticsearch.yml:

cluster.name: "elkcluster"
node.name: "node2"
path:
  logs: /data/elk/log
  data: /data/elk/data
http.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"

I think the issue is missing discovery settings, so the two nodes do not wind up seeing each other. Look at these docs for details: https://www.elastic.co/guide/en/elasticsearch/reference/current/discovery-settings.html

Like this?

master elasticsearch.yml:

cluster.name: "elkcluster"
node.name: "master"
node.master: false
node.data: false
discovery.zen.ping.unicast.hosts:
- node2.FQDN
path:
  logs: /data/elk/log
  data: /data/elk/data
http.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"

node2 elasticsearch.yml:

cluster.name: "elkcluster"
node.name: "node2"
discovery.zen.ping.unicast.hosts:
- master.FQDN
path:
  logs: /data/elk/log
  data: /data/elk/data
http.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"

Yeah, that looks right to me. I might also verify that master and node2 can see each other on the network (try pinging the one from the other).
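For example, from each machine (replace node2.FQDN with the other node's real FQDN or IP; 9300 is the default transport port the nodes use to talk to each other):

ping -c 3 node2.FQDN
curl -s 'http://node2.FQDN:9200/'
nc -vz node2.FQDN 9300

ping only proves basic reachability, so the curl and nc checks (if your netcat supports -z) are what tell you whether the HTTP and transport ports can actually be reached.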

They can ping each other, but it still doesn't work.

master log:

publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[...]

node2 log:

[2018-03-08T17:38:22,652][INFO ][o.e.t.TransportService ] [node2] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2018-03-08T17:38:25,934][INFO ][o.e.c.s.ClusterService ] [node2] new_master {node2}{CbKICAGjRmihWSYoSdQTgg}{NJVRju7ARN2n1VZCdPoH0w}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-03-08T17:38:26,120][INFO ][o.e.h.n.Netty4HttpServerTransport] [node2] publish_address {172.19.0.2:9200}, bound_addresses {[::]:9200}
[2018-03-08T17:38:26,121][INFO ][o.e.n.Node ] [node2] started
[2018-03-08T17:38:27,678][INFO ][o.e.g.GatewayService ] [node2] recovered [3] indices into cluster_state
[2018-03-08T17:38:29,929][INFO ][o.e.c.r.a.AllocationService] [node2] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[logstash-2018.03.08][0]] ...]).

I think I see what the issue is. If you do not specify network.host, ES binds to localhost (127.0.0.1), which means that nothing outside the machine can connect to it. See these docs for how to configure that setting: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html
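For example, in each node's elasticsearch.yml, something along these lines (just a sketch, the exact value depends on your network; _site_ is a special value that binds to the machine's site-local address):

network.host: _site_
# or an explicit address, e.g.:
# network.host: 192.168.1.10

After changing it, restart Elasticsearch and check the publish_address line in the log: it should show the machine's real IP instead of 127.0.0.1.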

I reinstalled the master and now I have other problems:

master elasticsearch.yml
cluster.name: "elkcluster"
node.name: "master"
# had to comment these out because they caused an error in the Kibana dashboard
#node.master: false
#node.data: false
network.host: localhost
http.port: 9200
discovery.zen.ping.unicast.hosts:
  - node2.FQDN

node2 elasticsearch.yml
cluster.name: "elkcluster"
node.name: "node2"
#network.host: master.FQDN
transport.tcp.port: 9200
discovery.zen.ping.unicast.hosts:
- master.FQDN

node2 logs:
timed out after [5s] resolving host [master.FQDN]
[2018-03-13T19:42:28,248][INFO ][o.e.c.s.ClusterService ] [node2] new_master {node2}

Your master configuration specifies network.host as localhost. This means no other machine can connect to that ES instance, as it binds to the loopback address (127.0.0.1). Also, the timeout resolving the hostname suggests you are using a name that does not exist in DNS. Are you literally using "master.FQDN" in node2's config? That will not resolve. It should be something like "master.yourdomain.com".
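As a rough sketch, the master's config would need something like this (hostnames are placeholders, use your real FQDNs or IPs):

cluster.name: "elkcluster"
node.name: "master"
network.host: _site_   # or the machine's routable IP, not localhost
http.port: 9200
discovery.zen.ping.unicast.hosts:
  - node2.yourdomain.com   # the real resolvable name or IP of node2

and node2's discovery.zen.ping.unicast.hosts should point at the master's real name or IP in the same way. You can confirm the name resolves from node2 with something like getent hosts master.yourdomain.com (or just ping it).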
