Kibana URL uses a single host to serve all requests

Hi All,

We have 3 nodes in the cluster with 1 primary and 2 replicas.
For DR activities we perform the steps below:

  1. We set the number of replicas to 1 and then exclude the 1st node (in one DC) from the cluster, while the other 2 nodes in the 2nd DC remain in the cluster (rough sketch of the calls after this list).
  2. We shut down the 1st node and perform validation from the Kibana console on the DC2 nodes.
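
Roughly, the calls behind step 1 look like the following (the index name my-index and node name node-1 are only placeholders for illustration, not our real names):

PUT my-index/_settings
{
  "index.number_of_replicas": 1
}

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._name": "node-1"
  }
}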

Now the issue we are facing is this:
When we run a simple GET /_cluster/health from the Kibana console on the other 2 nodes in the cluster, it throws the error below:

{
"statusCode": 500,
"error": "Internal Server Error",
"message": "An internal server error occurred"
}

Error in the server logs:
{"type":"error","@timestamp":"2024-02-11T07:10:48Z","tags":,"pid":22807,"level":"error","error":{"message":"Cannot throw non-error object","name":"Error","stack":"Error: Cannot throw non-error object\n at module.exports.internals.Manager.execute (/*/napsysrecs/kibana-7.5.0-linux-x86_64/node_modules/hapi/lib/toolkit.js:42:33)\n at process._tickCallback (internal/process/next_tick.js:68:7)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":"?path=%2F_cluster%2Fhealth&method=GET","query":{"path":"/_cluster/health","method":"GET"},"pathname":"/api/console/proxy","path":"/api/console/proxy?path=%2F_cluster%2Fhealth&method=GET","href":"/api/console/proxy?path=%2F_cluster%2Fhealth&method=GET"},"message":"Cannot throw non-error object"}

Shouldn't the other 2 nodes serve the console requests when 1 node is down, with new requests going to the 2nd DC?

The curl URL on the 2nd DC is still pointing to the 1st DC host, which is down.
If I copy the command as cURL from the console, it still shows the 1st DC host.
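For illustration (server1 being the DC1 host from our elasticsearch.hosts list), the copied command looks something like:

curl -XGET "http://server1:9200/_cluster/health"
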
So we are a bit clueless at the moment. Kindly help with how to use Kibana in such a scenario :frowning_face:

Do you have elasticsearch.hosts in your Kibana config pointing to all three nodes?

Yes, the Kibana config file has all 3 servers in it:
elasticsearch.hosts: ["server1:9200","server2:9200","server3:9200"]

I have been advised to try the settings below in the Kibana config; a rough kibana.yml sketch follows them:

elasticsearch.sniffInterval: Time in milliseconds between requests to check Elasticsearch for an updated list of nodes. Default: false
elasticsearch.sniffOnStart: Attempt to find other Elasticsearch nodes on startup. Default: false
elasticsearch.sniffOnConnectionFault: Update the list of Elasticsearch nodes immediately following a connection fault. Default: false
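
A minimal sketch of how that might look in kibana.yml (the hosts mirror our setup, and the interval value is just an example, not a recommendation):

elasticsearch.hosts: ["http://server1:9200", "http://server2:9200", "http://server3:9200"]
elasticsearch.sniffOnStart: true
elasticsearch.sniffOnConnectionFault: true
elasticsearch.sniffInterval: 60000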

Apologies for the delayed response. It looks like your Kibana is running on 7.5. I searched around for a similar error and found this. Based on that issue, I believe this was a bug that was fixed in 7.6. Upgrading to 7.6 (or, even better, 7.17) should resolve this error.
