This works fine, but I still don't really understand the setup. For example, when I want to connect from my Logstash to my ES, which node do I have to connect to? (elasticsearch1 and elasticsearch2, or only elasticsearch1? I make the connection internally in Docker.)
Only ES1 has exposed its ports, so when I perform API commands it's always against ES node 1. Is ES node 2 receiving all the data, i.e. is it a pure replica of ES1? And when ES1 goes down, will ES2 take over?
(Do I still connect to node 1 and get routed immediately to node 2, or how does this work?)
I really need a full explanation of the two ES nodes in Docker: how they work together and how I'm supposed to communicate with this cluster.
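For reference, my Logstash output currently points at a single node, something like this (the hostname `elasticsearch1` is my Docker service name, resolved over the internal Docker network, so no published port is needed for this):

```conf
output {
  elasticsearch {
    # Docker's internal DNS resolves the service name "elasticsearch1";
    # Logstash only ever talks to this one node.
    hosts => ["http://elasticsearch1:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```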
Thanks for the answers. Okay, I can add a second ES name in Logstash, and I can add a third node. But I'm still not sure how to work with the exposed ports. When I want to add a user, I want to do that once (so on one ES node) and have it created across the entire cluster. So I want to expose only one port of one of the ES instances. Is this fair enough?
Keep two Elasticsearch nodes internal and one exposed (which is of course also internal, so from Logstash I actually have three internal nodes to connect to). On that one node I can run my curl commands to create roles/users etc., which are then also known by the other two instances.
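To make that concrete, here is a sketch of the compose file I have in mind (the service names, cluster name, and image tag are placeholders for my real values; only elasticsearch1 publishes a port to the host):

```yaml
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=elasticsearch1
      - cluster.name=es-cluster
      - discovery.seed_hosts=elasticsearch2,elasticsearch3
      - cluster.initial_master_nodes=elasticsearch1,elasticsearch2,elasticsearch3
    ports:
      - "9200:9200"   # the only node reachable from the host
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=elasticsearch2
      - cluster.name=es-cluster
      - discovery.seed_hosts=elasticsearch1,elasticsearch3
      - cluster.initial_master_nodes=elasticsearch1,elasticsearch2,elasticsearch3
    # no ports: section, so this node stays internal to the Docker network
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=elasticsearch3
      - cluster.name=es-cluster
      - discovery.seed_hosts=elasticsearch1,elasticsearch2
      - cluster.initial_master_nodes=elasticsearch1,elasticsearch2,elasticsearch3
    # internal only, same as elasticsearch2
```

If I understand the docs right, a native user created via `curl -X POST "localhost:9200/_security/user/<name>" ...` against that one published port is stored in a cluster-wide security index, so the other two nodes see it as well.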
Okay, so it's also enough to specify just one of the Elasticsearch containers in my Logstash config, and the data will be distributed across the cluster as long as that node remains alive? (And specifying more ES hosts in Logstash makes it highly available?) Thanks.
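If I understand correctly, for the HA part I would then just list all three of my Docker service names in the Logstash output, something like:

```conf
output {
  elasticsearch {
    # Logstash load-balances across the listed hosts and skips
    # unreachable ones, so indexing keeps working if one container dies.
    hosts => ["http://elasticsearch1:9200",
              "http://elasticsearch2:9200",
              "http://elasticsearch3:9200"]
  }
}
```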