Maximum CPU cores for a data node

Hi there,

I'm just wondering: is there a maximum number of CPU cores for a data node? Memory is effectively limited to about 30-32 GB of heap, right? Does CPU have a similar ceiling, or can we scale it up as high as we want?

One more question: for dedicated master and coordinating nodes, what's the appropriate heap size? Is it still 50% of RAM, or can I push it to 80% and have it still function properly?

Thanks

@yuswanul
Elasticsearch (and the underlying Lucene) can technically use as many CPU cores as the server offers, depending on load; there is no hard cap like the heap limit. Within Elasticsearch, each shard processes a query or indexing operation on one thread (i.e., one CPU core) at a time, so parallelism comes from concurrent queries and multiple shards. With enough concurrent activity, more cores get used; with few active shards or little parallel work, the surplus CPU simply sits idle.

I have used 32-core machines for larger data sets and higher throughput, and they work well.
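If you want to see whether your existing cores are actually being used before scaling up, the cat thread pool API shows active, queued, and rejected threads per node. A minimal sketch, assuming a local cluster at `localhost:9200` with no auth and Python's `requests` library:

```python
import requests

ES = "http://localhost:9200"  # assumption: adjust to your cluster endpoint/auth

# Active vs. queued vs. rejected threads in the search and write pools.
resp = requests.get(
    f"{ES}/_cat/thread_pool/search,write",
    params={"v": "true", "h": "node_name,name,active,queue,rejected"},
)
print(resp.text)
```

Sustained high `active` counts with a growing `queue` (or `rejected` > 0) suggest the node is CPU-bound and would benefit from more cores; mostly idle pools mean extra cores would go unused.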

Regarding heap usage: the typical guidance is to use 50% of RAM, not exceeding ~32 GB, so the JVM keeps compressed object pointers. For dedicated master and coordinating nodes you may go up to 75-80% of available RAM, since they hold little data and depend far less on the OS filesystem cache. But please test this in Stg/QA before making changes in Prod.
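After resizing the heap, you can verify each node's effective heap ceiling and whether compressed object pointers are still in use via the node info API. A minimal sketch under the same assumptions as above (local cluster, no auth, Python `requests`); the compressed-oops field may not be present on very old versions, hence the `.get()`:

```python
import requests

ES = "http://localhost:9200"  # assumption: adjust to your cluster endpoint/auth

# Node info reports the configured heap maximum and whether the JVM is still
# using compressed ordinary object pointers (lost above roughly 31-32 GB heap).
info = requests.get(f"{ES}/_nodes/jvm").json()
for node in info["nodes"].values():
    jvm = node["jvm"]
    heap_max_gb = jvm["mem"]["heap_max_in_bytes"] / 1024**3
    print(f'{node["name"]}: heap_max={heap_max_gb:.1f} GB, '
          f'compressed_oops={jvm.get("using_compressed_ordinary_object_pointers")}')
```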
