I am setting up an Elasticsearch, Fluentd, and Kibana stack. Initially we had only a single ES node.
Now we're setting up an Elasticsearch cluster with two nodes to provide HA for Kibana. But Kibana shows cluster health RED when another node is elected as master (although the failover itself happens properly).
Following this topic, I tried to set up the cluster with a client node. But whenever Elasticsearch on the local machine is stopped, the dashboard stops and Kibana shows "unable to connect to Elasticsearch at http://localhost:9200".
How do I configure Kibana to remain functional even when the ES instance on the machine where Kibana is running goes down? Kibana isn't picking up the new master from the cluster once failover has happened.
I think Kibana's elasticsearch_url option is singular, i.e. it accepts exactly one URL. As you've noticed, this creates a single point of failure. To mitigate that you could, for example, put HAproxy in front of two or more ES instances and point Kibana to the HAproxy hostname/port.
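A minimal sketch of that idea, assuming two ES nodes with hypothetical hostnames es-node1 and es-node2, and HAproxy running on the Kibana host (adjust names, ports, and timeouts to your environment):

```
# haproxy.cfg — fragment only; health checks remove a dead ES node from rotation
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend es_front
    bind *:9200
    default_backend es_back

backend es_back
    option httpchk GET /
    server es1 es-node1:9200 check
    server es2 es-node2:9200 check
```

Then in kibana.yml, point Kibana at the proxy instead of a single node:

```
elasticsearch_url: "http://localhost:9200"
```

With this arrangement Kibana always talks to the same address, and HAproxy handles routing around whichever node is down.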
The thing is, if you have a single data node and you shut it down, then the client node cannot do anything as it has no data to serve.
So you need two, or ideally three, data nodes.
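As a sketch, the node roles might be configured like this in elasticsearch.yml (this assumes the pre-5.x zen discovery settings implied by the thread; the quorum value follows from having three master-eligible nodes):

```
# elasticsearch.yml on each of the three data nodes
node.master: true
node.data: true
# quorum = (3 master-eligible nodes / 2) + 1 = 2, to avoid split-brain
discovery.zen.minimum_master_nodes: 2

# elasticsearch.yml on the client node co-located with Kibana
node.master: false
node.data: false
```

The client node holds no data and is never master, so Kibana can keep pointing at it locally while it routes requests to whichever data nodes are alive.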