Round-robin Kibana names

I have three Kibana servers:
kib01, kib02, kib03 (10.x.x.1, 10.x.x.2, 10.x.x.3)

I set up DNS for them:
kibglobal -> 10.x.x.1
kibglobal -> 10.x.x.2
kibglobal -> 10.x.x.3
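For reference, this kind of round-robin name can be sketched as multiple A records for the same label in a BIND-style zone file (the IPs below are placeholders standing in for the real 10.x.x.* addresses):

```
; hypothetical zone fragment -- round-robin A records for Kibana
kib01     IN A 10.0.0.1
kib02     IN A 10.0.0.2
kib03     IN A 10.0.0.3
; same name, three A records: resolvers rotate the order on each query
kibglobal IN A 10.0.0.1
kibglobal IN A 10.0.0.2
kibglobal IN A 10.0.0.3
```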

Now when I run nslookup on the name, I get all three IPs back, in a different order each time.

The problem is that when I open
http://kibglobal:5601 it is very, very slow,
but if I open http://kib01:5601 it works as expected.

Am I on the right track with this kind of setup?

I have the following in each host's kibana.yml as well:
elasticsearch.hosts: ["http://elkm01:9200","http://elkm02:9200","http://elkm03:9200"]

It also logs me out every few seconds. Is there a setting that keeps me bound to my session for longer?

I am using 7.1.1

These are the messages in kibana.log:

essage":"GET /logout?next=%2Fapp%2Fkibana%23%2Fdashboards%3F_g%3D()&msg=SESSION_EXPIRED 200 21ms - 9.0B"}

msg=SESSION_EXPIRED"},"res":{"statusCode":304,"responseTime":1,"contentLength":9},"message":"GET /node_modules/@elastic/eui/dist/eui_theme_light.css 304 1ms - 9.0B"}

Another observation:
I have three different clusters, each with three nodes.
This problem occurs on only one of the clusters.
I can't figure out what is wrong. I guess no one else is running a setup like this.

@jbudz - any idea here ?

Were you able to verify that none of the Kibana instances are slow when connecting directly (not sure if the comment was referring to the clusters or the Kibanas)? What are you using for your DNS setup?

Regarding the logouts, you'll want `xpack.security.encryptionKey` to be the same across all Kibana instances. If it's not set, a random one is generated at startup. Kibana stores session information in an encrypted cookie, and you'll get logged out when a request lands on a server with a different key.

Yes, all the individual host links work fine, and they don't time out or log me out.

Sorry, I didn't explain in enough detail.
I have elk01, elk02, elk03 (master/data nodes for Elasticsearch, with all three running Kibana as well),
version 7.1.1.

In DNS they are set up as:
elk01 <--> ip1 (forward and reverse records set)
elk02 <--> ip2
elk03 <--> ip3

I then set up one name that points to all three IPs:
elk -> ip1
elk -> ip2
elk -> ip3

The goal is that a user can type elk:5601 and be able to hit any of the servers.

No, I don't have that set in kibana.yml.

I have an identical setup in four other data centers, and it works in three of them without any problem. I have checked and rechecked my setup and compared the configs against the working ones; they all look the same.

@jbudz any insight on this? Just bumping the thread.

You need to set `xpack.security.encryptionKey`.
I too have three Kibanas behind a load balancer; in your case it's DNS round-robin, but the end result is the same: requests can move between the Kibana hosts when you have more than one.
This setting is required when you have more than one Kibana, or else they can't all decrypt each other's cookies, which breaks the frontend when you randomly land on another Kibana instance.
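A minimal kibana.yml fragment for this, assuming the same value is copied to every instance (the value below is a placeholder; use your own random string of at least 32 characters):

```yaml
# kibana.yml -- must be identical on all Kibana instances
# placeholder value; generate your own random 32+ character string
xpack.security.encryptionKey: "something_at_least_32_characters_long___"
```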


What is that load balancer? Is it some kind of app?

My ES cluster runs on AWS ECS (Docker containers), and so does my Kibana.
I have three Kibana Docker containers running as a "Service" with desired count = 3 in AWS ECS, fronted by an AWS ELB that load-balances across all three Kibana containers.

Great, it works with that key.

Even when I shut down Kibana on one server, it fails over to a second one, and I get a response back without delay.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.