Memory allocation to the Elasticsearch component for a cluster setup with n nodes

Hi,

I am trying to set up a cluster, but I'm confused about whether index size and shard size depend on RAM or on disk size.
Do I need to divide the disk space or the RAM of the cluster between shards?

I do not fully understand the question. First and foremost, you need space on disk to index your data, since the data is always written to disk. However, every shard that lives in your cluster also requires some memory (heap), so shards are not 'free'.
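To make the "not free" part visible, here is a minimal sketch in Python using the requests library, assuming an unsecured local cluster at http://localhost:9200 (adjust the URL and add authentication for your own setup). It lists the on-disk size of every shard and the heap/RAM/disk usage of every node via the _cat APIs.

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured cluster

# Disk footprint per shard: 'store' is the on-disk size of each shard copy.
shards = requests.get(
    f"{ES}/_cat/shards",
    params={"v": "true", "h": "index,shard,prirep,store,node"},
)
print(shards.text)

# Memory pressure per node: heap usage is where many small shards show up first.
nodes = requests.get(
    f"{ES}/_cat/nodes",
    params={"v": "true", "h": "name,heap.percent,ram.percent,disk.used_percent"},
)
print(nodes.text)
```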

The underlying question is more about sizing, I suppose - and that is a complex question because of the generic nature of Elasticsearch. You need to take ingestion rate, query rate, query complexity (search vs. aggregations vs. highlighting), the size of a single document, the mapping configuration, and more factors into account, so there is no simple formula.
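Since there is no formula, the usual advice is to measure. The sketch below is one rough way to do that, not an official procedure: index a representative sample of your documents with your real mapping, read the on-disk size of that sample from the index stats, and extrapolate. The index name sizing-test and the document counts are made-up placeholders.

```python
import requests

ES = "http://localhost:9200"   # assumed local, unsecured cluster
SAMPLE_INDEX = "sizing-test"   # hypothetical index holding your sample data
SAMPLE_DOCS = 100_000          # how many representative documents you indexed
TOTAL_DOCS = 500_000_000       # how many documents you expect in production

# Primary store size of the sample index, in bytes.
stats = requests.get(f"{ES}/{SAMPLE_INDEX}/_stats/store").json()
sample_bytes = stats["indices"][SAMPLE_INDEX]["primaries"]["store"]["size_in_bytes"]

# Linear extrapolation from the sample to the expected document count.
estimated_total = sample_bytes / SAMPLE_DOCS * TOTAL_DOCS
print(f"Sample: {sample_bytes / 1024**2:.1f} MB for {SAMPLE_DOCS} docs")
print(f"Extrapolated primary store size: {estimated_total / 1024**4:.2f} TB")
```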

Thanks for the reply. I'm still not clear on how shard size is related to RAM.
For example, I have 3 nodes with 1 TB of disk space and 16 GB of RAM each, and I have to store 1.5 TB of documents in one index.

1) What should the maximum shard size be, and do I have to take RAM into consideration when allocating shards?
2) What should the index size be?

Take your time and read the whole Designing for Scale chapter in the Definitive Guide. This is a complex topic and cannot be answered in a few sentences.
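If you just want a starting point for the 1.5 TB example: a commonly cited guideline is to keep individual shards in the tens-of-GB range, so you can derive a primary shard count from the total data size and then check whether the replicas still fit on your 3 x 1 TB of disk. The target shard size, replica count, and index name below are assumptions for illustration, not recommendations tuned to your data.

```python
import math
import requests

ES = "http://localhost:9200"    # assumed local, unsecured cluster

total_data_gb = 1.5 * 1024      # ~1.5 TB of primary data, from the question above
target_shard_gb = 40            # assumed target shard size (tens of GB)
replicas = 1                    # assumed one replica per primary shard

primary_shards = math.ceil(total_data_gb / target_shard_gb)
disk_needed_gb = total_data_gb * (1 + replicas)
print(f"{primary_shards} primary shards, "
      f"~{disk_needed_gb / 1024:.1f} TB disk including replicas")

# Creating the index with those settings (index name is just an example):
body = {"settings": {"index": {"number_of_shards": primary_shards,
                               "number_of_replicas": replicas}}}
resp = requests.put(f"{ES}/my-index", json=body)
print(resp.json())
```

Note that with one replica the 1.5 TB of primary data needs roughly 3 TB on disk plus headroom for merges and growth, which is already the entire 3 x 1 TB you have, so disk is likely your first constraint here, not RAM.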
