Hi,
we are running an EFK stack on top of k8s clusters in AWS. Both Elasticsearch and Kibana are deployed via the Kubernetes operator.
The installation contains several Elasticsearch pods but only one Kibana pod. Kibana is exposed externally via a k8s ingress, which sits behind an AWS NLB that does the TLS termination.
Since we are using EC2 spot instances, the node running Kibana gets reclaimed from time to time, and sometimes the new Kibana pod is not ready yet while the old one has already been terminated, so Kibana becomes unavailable for a while.
That's not critical, but it creates a bad user experience for people working with Kibana, and we want to fix it. A possible solution would be to increase the number of Kibana pods from 1 to 2, roughly as sketched below...
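For reference, this is roughly the change I have in mind on the Kibana resource (names and values here are placeholders, not our actual manifest):

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana            # placeholder resource name
spec:
  version: 7.16.1
  count: 2                # was 1; run two pods behind the same service/ingress
  elasticsearchRef:
    name: elasticsearch   # placeholder, refers to our Elasticsearch resource
```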
While looking into this, I found the documentation on how to run Kibana in production.
It indicates that to spread the load across multiple Kibana instances we need to adjust the configuration so that some settings are unique per instance and some are identical across all instances. I think we will figure out how to apply that configuration through the Kibana operator, but some of the settings do not make sense to me in a k8s context; for example, what is the reason to use a unique server.port
per instance? So I guess this part of the documentation is not really aimed at k8s and is intended more for installations where several Kibana instances share the same host... My current understanding is sketched below.
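Continuing the spec from the snippet above, this is how I understand the "same vs unique" settings would map onto the ECK config (keys are placeholders; I believe the operator can also generate shared encryption keys for pods of the same Kibana resource, so please correct me if I'm wrong):

```yaml
spec:
  count: 2
  config:
    # settings the docs say must be identical on every instance behind the LB,
    # so sessions and saved objects keep working regardless of which pod answers
    xpack.security.encryptionKey: "<32+ character key>"
    xpack.encryptedSavedObjects.encryptionKey: "<32+ character key>"
    xpack.reporting.encryptionKey: "<32+ character key>"
    # the "unique per instance" settings (server.uuid, server.name, path.data,
    # pid.file, server.port) seem to exist to avoid collisions when several
    # instances share one host; each pod has its own filesystem and IP,
    # so I assume they can stay at their defaults in k8s
```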
Does anyone have experience setting up multiple Kibana pods for HA like this? Have you had any issues with that approach?
k8s version: 1.21
Elasticsearch/Kibana: 7.16.1
Many thanks in advance