Hi - we have three clusters in production, spread across three different data centers.
Each cluster setup:
One instance of Kibana -> 2 client nodes for load balancing -> data nodes
Since dashboards and visualizations are local to the cluster where they are created, we expose only one instance of Kibana (DC1) to end users. Cross-cluster search is enabled, so users can view results from the other clusters through this single Kibana instance as well.
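For reference, our remote clusters are registered roughly like this (a minimal sketch with hypothetical host names; the same settings can also be applied dynamically via the `_cluster/settings` API):

```yaml
# elasticsearch.yml on the DC1 cluster -- hypothetical hosts/ports
cluster:
  remote:
    dc2:
      seeds: ["dc2-node1.example.com:9300"]
    dc3:
      seeds: ["dc3-node1.example.com:9300"]
```

Dashboards then reach remote data through index patterns such as `dc2:logs-*`.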
Lately we have been noticing that Kibana is too slow, so we are looking for options to bring in the other two Kibana instances and load-balance requests across all three, while keeping the dashboards/visualizations in sync.
Reaching out here to see if anyone has ideas on how to implement this requirement.
Appreciate your time,
Yes, I did look into this, but it covers only the case where multiple Kibana instances connect to the same Elasticsearch cluster. I was referring more to this approach:
Users -> LB URL -> three different Kibana instances, each connecting to its own ES cluster (all part of the LB pool) -> client nodes unique to each cluster -> data nodes unique to each cluster
This way, even if the Kibana instance or client nodes in one DC go down, users should not see any downtime, and load is balanced across all the defined DCs by utilizing the client nodes in each DC.
I guess we could simply configure the LB pool with all three Kibana URLs, but how do we keep the dashboards/visualizations in sync?
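For context, the LB pool I have in mind would look something like this nginx sketch (hypothetical host names; any load balancer with health checks would work the same way):

```nginx
# Round-robin pool over the three Kibana instances (hypothetical hosts)
upstream kibana_pool {
    server kibana-dc1.example.com:5601;
    server kibana-dc2.example.com:5601;
    server kibana-dc3.example.com:5601;
}

server {
    listen 80;
    location / {
        proxy_pass http://kibana_pool;
    }
}
```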
Have you tried using cross-cluster search? To keep all Kibana instances in sync, they should use the same Elasticsearch cluster. Then you can access data from the remote clusters with cross-cluster search.
Yes, we have been using cross-cluster search for quite some time now. The problems we run into are mostly slow performance and node crashes. This could be because many concurrent users (say, 25-30) hit the same Kibana instance and refresh dashboards every 5-10 seconds. Since all those requests flow through the client nodes of the cluster that Kibana is connected to, those nodes get pounded, which causes the slowness/crashes.
Current: Users -> DC1 Kibana -> DC1 client nodes (2) -> gets data from cross-clustered indices
Desired: Users -> DC1/2/3 Kibana -> DC1/2/3 client nodes -> get data from the same cross-clustered indices
This way, we can avoid downtime even if the Kibana instance or client nodes in one DC go down.
I guess we could configure the LB pool with all the Kibana instances in it, so users could hit any available Kibana instance (round robin). But then how can we keep the dashboards/visualizations in sync?
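To illustrate the "same cross-clustered indices" part: each Kibana instance would issue searches like this (Kibana Dev Tools syntax, assuming remotes registered as dc2/dc3 and a hypothetical index pattern logs-*):

```
GET /logs-*,dc2:logs-*,dc3:logs-*/_search
{
  "query": { "match_all": {} }
}
```

The local indices and both remote clusters are queried in one request, regardless of which DC's client nodes handled it.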
Kibana always connects to a single cluster and stores its dashboards/visualizations on that cluster. So you could horizontally scale Kibana in DC1 by having:
Users -> DC1 Kibana 1 -> DC1 ...
Users -> DC1 Kibana 2 -> DC1 ...
Users -> DC1 Kibana 3 -> DC1 ...
Or you could point all three Kibana instances to the same cluster:
Users -> DC1 Kibana -> DC1 ...
Users -> DC2 Kibana -> DC1 ...
Users -> DC3 Kibana -> DC1 ...
In this case you would need to export all saved objects from the DC2/DC3 Kibana instances before changing their configuration, and then import those saved objects into the DC1 Kibana. If you want to keep the saved objects separate, you could set up Spaces for that.
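The export/import step can be done with Kibana's saved objects API, roughly like this (a sketch with hypothetical host names; exact options vary by Kibana version, so check the docs for yours):

```shell
# Export dashboards/visualizations from the DC2 Kibana (hypothetical host)
curl -X POST "http://kibana-dc2.example.com:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"type": ["dashboard", "visualization", "index-pattern"], "includeReferencesDeep": true}' \
  -o dc2-objects.ndjson

# Import them into the DC1 Kibana
curl -X POST "http://kibana-dc1.example.com:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  -F "file=@dc2-objects.ndjson"
```

Appending `?overwrite=true` to the import URL replaces any existing objects with the same IDs.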