Ged
December 23, 2020, 3:26pm
1
Hi Team,
I want to share my ELK stack cluster configuration below. Could you review it, share your thoughts, and answer a few questions?
Elasticsearch nodes configuration for ELK SERVER A1 and ELK SERVER B1:
node.master: true
node.data: true
node.ingest: false
node.ml: false
xpack.ml.enabled: false
cluster.remote.connect: false
Elasticsearch nodes configuration for ELK SERVER A2 and ELK SERVER B2:
node.master: true
node.data: false
node.ingest: false
node.ml: false
xpack.ml.enabled: false
cluster.remote.connect: false
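For completeness, a four-node cluster like this also needs discovery settings so the nodes can find each other and form a cluster. A minimal 7.x sketch, identical on all four nodes; the hostnames/node names below are placeholders, not taken from your post:

```yaml
# elasticsearch.yml discovery settings (7.x) — hostnames are assumptions.
discovery.seed_hosts: ["elk-server-a1", "elk-server-a2", "elk-server-b1", "elk-server-b2"]
# Only required when bootstrapping a brand-new cluster; remove afterwards.
cluster.initial_master_nodes: ["elk-server-a1", "elk-server-a2", "elk-server-b1", "elk-server-b2"]
```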
Data flow is:
App servers in both datacenters push data via UDP to Logstashes (using UDP input plugin):
those in DC A to Logstash on ELK SERVER A2
those in DC B to Logstash on ELK SERVER B2
Logstashes forward data to Elasticsearch data nodes:
Logstash on ELK SERVER A2 to Elasticsearch data node on ELK SERVER A1
Logstash on ELK SERVER B2 to Elasticsearch data node on ELK SERVER B1
Data is replicated between Elasticsearch data nodes on ELK SERVER A1 and ELK SERVER B1
Kibana instances are connected as follows:
Kibana on ELK SERVER A1 to the Elasticsearch data node on ELK SERVER A1
Kibana on ELK SERVER B1 to the Elasticsearch data node on ELK SERVER B1
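The Logstash side of this flow could look roughly like the pipeline below — a sketch only; the UDP port and hostname are assumptions, not from your setup:

```conf
# Logstash pipeline on ELK SERVER A2 (port and host are placeholders).
input {
  udp {
    port => 5514                        # port the app servers send to
  }
}
output {
  elasticsearch {
    hosts => ["http://elk-server-a1:9200"]   # data node in the same DC
  }
}
```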
My questions:
Is it OK to push data from Logstash to the Elasticsearch data nodes? If not, what is the best approach?
Is it OK for both Kibana instances to be connected as described above?
Or is my whole configuration not recommended, and should I build the ELK stack in another way?
Thank you in advance !
Ged
Ged:
Logstashes forward data to Elasticsearch data nodes:
Logstash on ELK SERVER A2 to Elasticsearch data node on ELK SERVER A1
Logstash on ELK SERVER B2 to Elasticsearch data node on ELK SERVER B1
Both Logstash instances should ideally have both Elasticsearch data nodes configured so you do not lose data if one is unavailable.
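In Logstash terms, that suggestion would look something like the output below (hostnames are placeholders):

```conf
output {
  elasticsearch {
    # Listing both data nodes lets Logstash keep delivering events
    # if one of them becomes unavailable.
    hosts => ["http://elk-server-a1:9200", "http://elk-server-b1:9200"]
  }
}
```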
Yes, that is the best option in this scenario.
It would be good for Kibana to also be able to connect to any data node for failover.
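In kibana.yml (7.x) that could look like the line below, assuming hypothetical hostnames:

```yaml
# kibana.yml: list all data nodes so Kibana can fail over between them.
elasticsearch.hosts: ["http://elk-server-a1:9200", "http://elk-server-b1:9200"]
```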
It looks like you have tried to design for high availability, and with the outlined configuration you will be able to handle a single node going down without losing access to the cluster. It is however worth noting that if you want to be able to continue operating when the two data centres get disconnected, or when one fails completely, you will need a third data centre. It is impossible to deploy Elasticsearch in a highly available way (with respect to data centre failure) across just two data centres.
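A common pattern for that third location is a small tie-breaker node that can vote in master elections but never becomes master and holds no data — a dedicated voting-only node (available from Elasticsearch 7.3). A sketch in the same 7.x settings style used above:

```yaml
# elasticsearch.yml for a tie-breaker node in a third data centre (sketch).
# voting_only requires node.master: true; the node votes but is never elected.
node.master: true
node.voting_only: true
node.data: false
node.ingest: false
node.ml: false
```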
Ged
January 8, 2021, 11:07am
3
Many thanks for your advice, Christian!
I have one more question:
When connecting Logstash/Kibana to two Elasticsearch nodes, how are requests balanced? Is it round-robin? Or is it configurable, so that a primary/failover mode is possible? I mean sending requests to a primary node while it is available, and only sending to the failover node when the primary is down.
Thank you !
Ged
system
Closed
February 5, 2021, 11:07am
4
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.