I am setting up an Elasticsearch, Fluentd and Kibana stack. Initially we had only a single ES node.
Now we're setting up an Elasticsearch cluster with two nodes to provide HA for Kibana. But Kibana shows cluster health RED (although the failover itself happens properly) when another node is elected as master.
Error: Unable to connect to Elasticsearch at http://localhost:9200.
Following this topic, I tried to set up the cluster with a client node. But whenever Elasticsearch on the local machine is stopped, the dashboard stops working and Kibana shows "Unable to connect to Elasticsearch at http://localhost:9200".
How do I configure Kibana to stay functional even when the ES instance on the machine where Kibana runs goes down? Kibana isn't picking up the new master from the cluster once failover has happened.
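For reference, this is the relevant part of our kibana.yml (a sketch; the hostname is just the default):

```yaml
# kibana.yml -- Kibana only accepts a single URL here, so when the
# local ES instance dies, Kibana has nothing to fail over to.
elasticsearch_url: "http://localhost:9200"
```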
So you have an existing node that has the data, and then you set up just a client node and point KB at that?
I think Kibana's elasticsearch_url option is singular, i.e. it needs to be exactly one URL. As you've noticed, this creates a single point of failure. To mitigate that you could e.g. put HAProxy in front of two or more ES instances and point Kibana at the HAProxy hostname/port.
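A minimal sketch of that HAProxy setup, assuming two ES nodes at the hypothetical hostnames es-node1 and es-node2 (adjust names, ports and timeouts to your environment):

```
# haproxy.cfg -- sketch, not a tuned production config
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend elasticsearch_front
    bind *:9200
    default_backend elasticsearch_back

backend elasticsearch_back
    # the HTTP health check drops a node from rotation when it stops responding
    option httpchk GET /
    server es1 es-node1:9200 check
    server es2 es-node2:9200 check
```

Kibana then points at the proxy instead of any single node, e.g. `elasticsearch_url: "http://haproxy-host:9200"` (haproxy-host being wherever you run HAProxy), and HAProxy handles routing around a dead node.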
The thing is, if you have a single data node and you shut it down, then the client node cannot do anything as it has no data to serve.
So you need to have two, or ideally 3, data nodes.
We have a two-node cluster. The node I tried to shut down was the client node. It should have served data from the other node in this case.
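For context, this is roughly how the client (coordinating-only) node is configured, a sketch using the classic elasticsearch.yml settings; cluster and host names here are examples, not our real ones:

```yaml
# elasticsearch.yml on the client node (sketch)
cluster.name: my-cluster                  # example cluster name
node.master: false                        # never eligible to become master
node.data: false                          # holds no shards; only routes requests
discovery.zen.ping.unicast.hosts: ["es-node1", "es-node2"]  # example data nodes
```

The idea is that such a node joins the cluster, holds no data itself, and forwards searches to whichever data nodes are alive.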
Then you should refer to @magnusbaeck's post.
Could you please give me the exact link to the post related to my topic?
It looks like there are multiple posts at that link.
Could a feature request be added to allow passing a list of hosts to elasticsearch_url, so that Kibana can automatically detect the new master in case of failover?