1TB index. Setup question

I've got a physical node with 384 GB of RAM and a 1 TB index to put on it.

I've read through this https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html and this part looks like my situation:

Are you doing a lot of sorting/aggregations? Are most of your aggregations on numerics, dates, geo_points and not_analyzed strings? You’re in luck! Give Elasticsearch somewhere from 4-32 GB of memory and leave the rest for the OS to cache doc values in memory.

My index is very simple:

  • there is only one analyzed field (with term vectors)
  • there are some integer/date/string (not_analyzed) fields — roughly the mapping sketched below
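
To make that concrete, here is a minimal sketch of the kind of mapping I mean, assuming ES 2.x-style string fields (the index name and field names are made up, not my real schema), created through the REST API with Python's requests:

```python
import requests

# Hypothetical mapping matching the description above: one analyzed field
# carrying term vectors, plus a few not_analyzed / numeric / date fields.
# "my_index" and all field names are placeholders.
mapping = {
    "mappings": {
        "doc": {
            "properties": {
                "body":       {"type": "string", "term_vector": "with_positions_offsets"},
                "status":     {"type": "string", "index": "not_analyzed"},
                "category":   {"type": "string", "index": "not_analyzed"},
                "views":      {"type": "integer"},
                "created_at": {"type": "date"}
            }
        }
    }
}

resp = requests.put("http://localhost:9200/my_index", json=mapping)
print(resp.json())
```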

Most of the activity would be something like "filter by one of the fields, sort by one of the not_analyzed fields".
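
A typical request would look something like this sketch (same made-up field names, sent with Python requests):

```python
import requests

# Hypothetical "filter by one field, sort by a not_analyzed field" search.
# The field names (status, created_at) are placeholders.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"status": "published"}}
            ]
        }
    },
    "sort": [
        {"created_at": {"order": "desc"}}
    ],
    "size": 20
}

resp = requests.post("http://localhost:9200/my_index/_search", json=query)
print(resp.json()["hits"]["total"])
```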

The question is: is it really a good idea to run a single Elasticsearch node with a 32 GB heap for a 1 TB index?
Maybe running two Dockerized Elasticsearch nodes with a 32 GB heap each would be more feasible?
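
As I understand the guide, the point of the ~32 GB ceiling is keeping compressed object pointers enabled and leaving the rest of the RAM to the OS filesystem cache. To sanity-check what heap a node actually ends up with, I'd do something like this (Python requests, localhost:9200 assumed):

```python
import requests

# Report each node's configured max heap, to confirm it stays under ~32 GB.
resp = requests.get("http://localhost:9200/_nodes/jvm")
for node_id, node in resp.json()["nodes"].items():
    heap_gb = node["jvm"]["mem"]["heap_max_in_bytes"] / (1024 ** 3)
    print("%s: %.1f GB max heap" % (node["name"], heap_gb))
```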

If you are going to run multiple nodes, start with 3; otherwise you risk split brain.

That said, try one, then 3, see how it goes. You definitely have the RAM for it.
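
With 3 master-eligible nodes you'd also want minimum_master_nodes set to 2 (the quorum). On pre-7.x versions that setting is dynamic, so a sketch like this (Python requests, against any node) would apply it cluster-wide:

```python
import requests

# discovery.zen.minimum_master_nodes should be the quorum of master-eligible
# nodes: (3 / 2) + 1 = 2 for a 3-node cluster (pre-7.x Elasticsearch).
settings = {
    "persistent": {
        "discovery.zen.minimum_master_nodes": 2
    }
}

resp = requests.put("http://localhost:9200/_cluster/settings", json=settings)
print(resp.json())
```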
