Elasticsearch on giant compute nodes

The standard sizing guidance I usually see recommended for Elasticsearch (mainly for query workloads) is a cluster of nodes, each typically backed by SSDs, with around 64 GB of RAM, shard sizes not exceeding 50 GB, and no more than about 4 TB of storage per node.
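To make the shard-size guideline concrete, here is a minimal Python sketch of the arithmetic I have in mind; the 400 GB example index size is hypothetical, not one of our actual indices:

```python
import math

MAX_SHARD_GB = 50  # the commonly cited upper bound per shard

def primary_shards(expected_index_gb: float) -> int:
    """Smallest primary-shard count that keeps each shard at or under MAX_SHARD_GB."""
    return max(1, math.ceil(expected_index_gb / MAX_SHARD_GB))

# Hypothetical example: a 400 GB index would need at least 8 primary shards.
print(primary_shards(400))  # -> 8
```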

However, we have a scenario with supercomputers made up of large nodes. For example, one supercomputer has 39 nodes, each with a powerful CPU (Intel Xeon Cascade Lake, 24 cores, 2.8 GHz), around 768 GB of RAM, and 480 GB of SSD.

Suppose 10 of these nodes are made into a cluster, giving a combined storage of roughly 5 TB. How well would this cluster perform for search queries, particularly in terms of latency? Does this setup make sense, or would it be a bad idea?
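For reference, my back-of-envelope numbers, assuming the usual practice of capping the JVM heap below ~32 GB so compressed object pointers stay enabled (the 31 GB figure is my assumption, not a measured value):

```python
NODES = 10
SSD_GB_PER_NODE = 480
RAM_GB_PER_NODE = 768
HEAP_GB = 31  # assumed: heap kept under ~32 GB to preserve compressed oops

total_ssd_gb = NODES * SSD_GB_PER_NODE         # 4800 GB, i.e. roughly 5 TB
cache_gb_per_node = RAM_GB_PER_NODE - HEAP_GB  # RAM left over for the OS page cache

print(f"raw cluster storage: {total_ssd_gb} GB")
print(f"per-node memory outside the heap: ~{cache_gb_per_node} GB")
```

Note that by these numbers the RAM per node (768 GB) exceeds the SSD per node (480 GB), so in principle each node's entire data set could sit in the filesystem cache.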
