We recently started running into network capacity limitations and are trying to figure out whether our deployment is set up the way it is intended to be:
- We have 4 Elasticsearch data nodes.
- All of our web servers are 64 GB machines with 2 LAN interfaces: one connected to the public network and one to the internal network switch.
- Each web server also runs an Elasticsearch client node locally.
Is it correct to run a client node on each web server? My assumption is that the benefit is relying on Elasticsearch's internal mechanisms to route requests to the proper data node based on which nodes are up or down.
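For context, the client nodes on the web servers are configured as coordinating-only nodes, roughly like this (a sketch, not our exact config; the cluster name and hostnames are placeholders, and newer Elasticsearch versions use `node.roles: []` instead of the individual flags):

```yaml
# elasticsearch.yml on each web server ("client" / coordinating-only node)
# Sketch using 2.x–6.x style settings; 7.x+ replaces these with `node.roles: []`.
node.master: false   # never eligible to become master
node.data: false     # holds no shards
node.ingest: false   # runs no ingest pipelines
cluster.name: our-cluster                 # placeholder
network.host: _local_,_site_              # bind loopback plus the internal interface
discovery.zen.ping.unicast.hosts:         # placeholder data-node hostnames
  - data-node-1
  - data-node-2
  - data-node-3
  - data-node-4
```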
We have a few very heavy aggregations on the public-facing website. Do aggregation requests pull the data into the client node first, before the actual aggregation operation runs, or do the aggregations happen on the data nodes themselves before the results are handed off to the client node? My understanding is that the client node is the one performing the aggregation, which would overload our internal network whenever heavy agg operations run. Is that correct?
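For reference, the heavy aggregations are roughly of this shape (Dev Tools syntax; the index and field names below are made-up placeholders, not our real mappings):

```
GET /page_views/_search
{
  "size": 0,
  "aggs": {
    "by_category": {
      "terms": { "field": "category", "size": 50 },
      "aggs": {
        "avg_duration": { "avg": { "field": "duration_ms" } }
      }
    }
  }
}
```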
Thank you for any input