Monitoring multiple Kibana instances

Hi everyone,

I have a production cluster (version 5.6.4) and a separate monitoring cluster (version 6.2). I noticed that when I go into the Monitoring section and select the production cluster, the Kibana section shows only one instance even though I have several of them.
I deploy Kibana with Docker, publishing a different port for each instance while keeping the internal port the standard one. Also, each Kibana uses a different index (e.g. .kibana-name).
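To make the setup concrete, it looks roughly like this (container names, host ports, mount paths, and the my-kibana image name are only illustrative; each instance's kibana.yml sets a different kibana.index):

$ docker run -d --name kibana-alpha -p 5601:5601 \
    -v /srv/kibana-alpha/kibana.yml:/usr/share/kibana/config/kibana.yml \
    my-kibana:5.6.4
$ docker run -d --name kibana-beta -p 5602:5601 \
    -v /srv/kibana-beta/kibana.yml:/usr/share/kibana/config/kibana.yml \
    my-kibana:5.6.4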
When I take down the instance that is shown, another one of the running Kibana instances shows up (only one). If I repeat this, a third one appears. The instances seem to be picked in alphabetical order.

My guess is that the monitoring code uses a dictionary keyed by (IP:PORT), which in my case is the same for every instance.

Is there a setting to fix this, or is it a bug in Monitoring?

Best Regards,

You're close. The Kibana listing in the Monitoring UI uses the Kibana server UUID as the term that it aggregates on. When multiple instances show as one, it's usually because the Kibana package was copied to other hosts after the data/uuid file was generated, and all of the instances are using the same UUID.

To remedy the issue, just delete the data/uuid file for each instance. Historical data for these instances will be gone, but new data will start being collected correctly after you restart the Kibana servers.

Here's an example of what a uuid file looks like on one of my Kibana installations:

$ cat data/uuid
e09c3c56-7e82-4ac9-9dbe-b270881396e3
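
Removing it is just a matter of deleting that file while the instance is stopped; a fresh one is written on the next startup. For example (the install path will differ per setup):

$ rm /path/to/kibana/data/uuid    # stale, copied UUID
$ # restart Kibana however you normally run it; a new data/uuid is generated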

Thanks Tim for the explanation, this clears things up. However, the bigger issue remains.

I deploy multiple Kibana instances to the same host using Docker, so as soon as I redeploy, this issue will come back.
Do I have to regenerate the UUID every time I do a deployment?

I do not understand why the UUID is the same in each container. Is it computed from the (IP:PORT) tuple? If so, wouldn't it be better to also take the name into account?

Cheers,

Are you building the Docker image yourself or using ours from docker.elastic.co? The container should not contain the data/uuid file, and a new one will be generated when the container starts up.
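
You can check that against the stock image directly, for example (this assumes the /usr/share/kibana install path used by the official images):

$ docker run --rm --entrypoint ls docker.elastic.co/kibana/kibana:5.6.4 -la /usr/share/kibana/data
$ # an empty (or absent) data directory means a fresh uuid file is created
$ # there the first time Kibana starts in that container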

Hi,

Actually, I am building my image based on the official one (FROM docker.elastic.co/kibana/kibana:5.6.4). During the build I run kibana 2>&1 | grep -m 1 "Optimization of .* complete" so that the image ships already optimized, and I guess that build step is why the UUID is the same for every instance.
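
Concretely, the relevant part of my Dockerfile is roughly this (simplified):

FROM docker.elastic.co/kibana/kibana:5.6.4
# Run Kibana once at build time so the optimize bundles are baked into the
# image; grep -m 1 exits after the completion message, which ends the run.
RUN kibana 2>&1 | grep -m 1 "Optimization of .* complete"
# Side effect: this build-time run also writes data/uuid into the image,
# so every container started from it reuses that same UUID.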

Thanks for the help.

Just for completeness, adding RUN rm data/uuid at the end of my Dockerfile solved the problem. With this, every time a new container is created, a new UUID is generated when Kibana starts.
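
For reference, the tail of the Dockerfile now looks like this (simplified):

FROM docker.elastic.co/kibana/kibana:5.6.4
# Pre-optimize at build time as before
RUN kibana 2>&1 | grep -m 1 "Optimization of .* complete"
# Remove the UUID left behind by the build-time run so each new container
# generates its own on first startup
RUN rm data/uuid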

