Elasticsearch: "Load Balance One Cluster Node"

Hello everyone, hope you are all fine :slight_smile:

A few questions today!

Context:

Someone in my company is using Shiny (R) on top of our ELK Stack clusters. The thing is, he needs one of our 3 frontend machines to be at 100% SLA (always alive, if you prefer), otherwise his application "plugged" into the cluster will be down. And we MUST prevent this.

So my question is: is there any way to set things up so that this SPECIFIC machine (say machineA) is covered by machineB and machineC in case it goes down?

We have already set up a load balancer (F5) for these three frontends (they run Kibana in addition to an Elasticsearch client node).

This may not be very clear, I agree.
I would appreciate any feedback and advice.

Cordially, Ben.

It's normally something you try to solve on the client side by giving it 3 nodes to talk to.

All our clients support that.
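For illustration only (not from the thread): a minimal sketch of the client-side approach using the official Python elasticsearch client, where the application is given all three frontends instead of just one. The hostnames machineA/B/C and the retry settings are placeholders/assumptions.

```python
# Sketch: point the client at all three coordinating nodes instead of one.
# If machineA is down, requests can be retried against machineB / machineC.
# Hostnames are placeholders for your actual frontends.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    [
        "http://machineA:9200",
        "http://machineB:9200",
        "http://machineC:9200",
    ],
    retry_on_timeout=True,  # assumed tuning: retry on another node on timeout
    max_retries=3,
)

print(es.info())  # answered by whichever configured node is reachable
```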

If you have a load balancer in front of the Elasticsearch REST endpoint (9200), then you can also use it.
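As a rough illustration of what the load balancer's health monitor would probe on each frontend (this is a sketch, not F5 configuration): a plain-Python check against the standard `/_cluster/health` endpoint on port 9200. Node names and thresholds are assumptions.

```python
# Sketch of an HTTP health probe a load balancer could run per frontend.
# /_cluster/health is a standard Elasticsearch REST endpoint; hostnames
# below are placeholders.
import json
import urllib.request

NODES = ["machineA", "machineB", "machineC"]  # placeholder hostnames

def node_is_healthy(host: str, port: int = 9200, timeout: float = 2.0) -> bool:
    """Return True if the node answers and the cluster is green or yellow."""
    url = f"http://{host}:{port}/_cluster/health"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            health = json.load(resp)
        return health.get("status") in ("green", "yellow")
    except OSError:
        return False

for node in NODES:
    print(node, "OK" if node_is_healthy(node) else "DOWN")
```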

Hi David,

All nodes are currently configured for load balancing & HA with the Elasticsearch parameters (discovery, etc.).

We found the solution to our issue. As you mentioned in your reply, we will do the load balancing on port 9200.

With that, we will add a DNS entry, like client-elk.domain.tld, and put all the needed nodes behind it on port 9200.

Then, all applications that need to be "plugged" into the cluster will use client-elk.domain.tld:9200, and it will be "always" available, as we need.
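For completeness, a sketch (not from the thread) of how an application would then connect through the load-balanced DNS name instead of a single frontend; client-elk.domain.tld is the entry described above, and the F5/DNS layer decides which live node answers.

```python
# Sketch: connect through the load-balanced name rather than one machine.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://client-elk.domain.tld:9200")

# Works as long as at least one frontend behind the name is up.
print(es.cluster.health())
```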

Thank you for your reply and good advice.
