CPU affinity when running multiple nodes on a single host

Hi folks,

If running multiple Elasticsearch instances on a well-resourced server (256GB RAM, 48 cores), and assuming the 'processors' count in elasticsearch.yml is set to a sensible value (total cores / number of ES instances), is there any benefit to going further and also pinning the ES processes to specific CPUs (using the CPUAffinity option in the systemd unit file, for example), or is it better to let the OS manage thread scheduling?

E.g. on a host running 3 instances, the processors count would be set to 16 for each instance in the Elasticsearch config, so each process sizes its thread pools for a subset of the CPU cores, but the processes can still end up sharing cores. With CPU pinning, instance 1 would be restricted to cores 0-15, instance 2 to cores 16-31, instance 3 to cores 32-47, and so on; something like the sketch below.
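For illustration, this is roughly what I mean for instance 1 (the unit name, drop-in path and core ranges here are just placeholders for the example, not a recommendation):

    # elasticsearch.yml for instance 1: size thread pools as if only
    # 16 of the 48 cores are available (this doesn't restrict WHICH cores get used)
    node.processors: 16

    # Optional pinning via a systemd drop-in, e.g.
    # /etc/systemd/system/elasticsearch-instance1.service.d/override.conf
    [Service]
    # restrict this instance to cores 0-15; instance 2 would get 16-31,
    # instance 3 would get 32-47. Ranges need a reasonably recent systemd;
    # older versions want the cores listed out individually.
    CPUAffinity=0-15

After adding the drop-in you'd need a systemctl daemon-reload and a restart of the instance, and taskset -cp <pid> is one way to check the affinity mask actually took effect.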

Any advice in this area would be appreciated, cheers.

I've not heard of any benchmarks in this area, and even if I had I suspect the answer would still depend on your specific setup and workload. Probably best to test this yourself.

No problem, thanks for the response. The official docs alluded to CPU pinning (see quote from the thread pool module doc below), but I haven't seen any further reference to specific implementation details, so I was wondering if I was missing any best practices in this area by not explicitly pinning instances to specific cores.

  1. If you are running multiple instances of Elasticsearch on the same host but want Elasticsearch to size its thread pools as if it only has a fraction of the CPU, you should override the node.processors setting to the desired fraction, for example, if you’re running two instances of Elasticsearch on a 16-core machine, set node.processors to 8. Note that this is an expert-level use case and there’s a lot more involved than just setting the node.processors setting as there are other considerations like changing the number of garbage collector threads, pinning processes to cores, and so on.
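To make that concrete, my reading of the "other considerations" for the quoted two-instance / 16-core example would be something like the below; the JVM flags and values are my own assumptions about what "changing the number of garbage collector threads" might mean, not something the docs spell out:

    # elasticsearch.yml (per instance): size thread pools for half the machine
    node.processors: 8

    # jvm.options (per instance): by default the JVM sizes GC threads and other
    # internal pools from all 16 visible cores, so cap what it sees as well
    # (available on recent JVMs)
    -XX:ActiveProcessorCount=8
    # or set the GC thread counts directly, e.g.
    # -XX:ParallelGCThreads=8
    # -XX:ConcGCThreads=2

plus the systemd CPUAffinity pinning from my earlier example, if you go down that route.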

Indeed, it's plausible that CPU pinning might have a performance benefit, but it's also plausible that the OS is clever enough to do that for you anyway, and there's a risk that overconstraining the thread scheduler leads to worse performance. "Expert-level" means it's too complicated to give much general guidance in this area and you're expected to investigate this yourself.

Understood, some investigation required! Thanks

