I'm working with the latest version (6.7) of the Elastic Stack in a production environment.
I configured my Kibana instance to point to all of the Elasticsearch nodes in the cluster with the "elasticsearch.hosts" setting, in order to set up fault tolerance. But when one of the Elasticsearch instances goes down, I get an "Internal Server Error" and Kibana can't connect to the cluster. If I remove the shut-down node from the elasticsearch.hosts configuration, Kibana works fine. Am I doing something wrong?
Can you provide the yml config snippet you are using to configure this? (Feel free to redact the actual hostnames; I just need to see the syntax.)
Thank you for the answer.
This is my kibana.yml:
server.host: "logmanagera"
server.port: 5601
elasticsearch.hosts: ["http://logmanagera:9200", "http://logmanagerb:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "kibana"
When "logmanagerb" stops, Kibana can't reach the cluster. But if I remove "logmanagerb" from the config and restart Kibana, everything works fine.
It seems like you might have found a bug. I cannot reproduce this locally, but I do see that Kibana sends requests to all configured hosts even when the first one in the list is alive and working, so it might be related to that. Can you please open an issue at https://github.com/elastic/kibana/issues?
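As a side note, it can help to rule out basic connectivity problems before filing: each host listed in elasticsearch.hosts can be checked individually with curl, independently of Kibana. A minimal sketch, assuming kibana.yml sits in the current directory and using the hostnames from this thread:

```shell
# Pull the URLs out of the elasticsearch.hosts line of kibana.yml
# and probe each node on its own. A node that is down should show
# up as UNREACHABLE here, matching what Kibana sees.
hosts=$(grep 'elasticsearch.hosts' kibana.yml | grep -o 'http[^",]*')
for h in $hosts; do
  if curl -fsS --max-time 5 "$h" > /dev/null; then
    echo "$h OK"
  else
    echo "$h UNREACHABLE"
  fi
done
```

If every node responds OK on its own but Kibana still fails as soon as one host in the list is stopped, that points back at how Kibana handles the host list rather than at the cluster itself.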
Issue opened. Thanks a lot.