Huge Segments filling up heap

In our cluster, we have 6 nodes, each with a 30GB heap. We have around 30 indices, each with a few million docs. The index structure is very small, with just 10 fields.
Each index is hardly 5GB in size.

Issue: after ingesting all 30 indices, _cat/segments shows segments of 3-5GB each. With a simple search load we hit 98% heap usage and also trip the parent circuit breaker. Basically, the issue seems to be the huge segment size: just loading a few segments into memory hits the 30GB heap limit.

Can anyone suggest what can be tried to reduce heap usage? Do you think we need to reduce the default maximum segment size?
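
For reference, per-node heap usage and parent circuit breaker trips can be checked with the node stats API (a minimal sketch; jvm and breaker are the relevant metric filters here). The response includes heap_used_percent per node and, under breakers.parent, the breaker's estimated size, limit and trip count.

    # node stats, filtered to JVM heap and circuit breaker metrics
    GET _nodes/stats/jvm,breaker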

I don't think that's correct. 5GiB is a normal size for segments, and searching doesn't involve loading the whole segment into memory anyway. There's certainly something wrong, but it's not your segment size that is the problem.

Hi David,
thanks for the info. In my case, out of a 5GB segment, 3GB is reported as loaded into memory according to the _cat/segments data. So hitting 10 such indices fills up the 30GB heap.
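
For reference, the two figures being compared here can be pulled side by side with something like this (a minimal sketch: size is the on-disk segment size, while size.memory is what the cat API reports as segment memory):

    # list segments with on-disk size vs reported segment memory, largest memory first
    GET _cat/segments?v&h=index,shard,segment,size,size.memory&s=size.memory:desc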

Which version of Elasticsearch are you using?

What is the mapping of these indices?

What is the full output of the cluster stats API?
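
For reference, a minimal way to fetch that (human just makes the byte values easier to read):

    # full cluster stats, with human-readable sizes
    GET _cluster/stats?human&pretty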

As David pointed out, segments are not loaded into memory, so what you think you are observing is not correct. There are, however, some types of mappings that can result in high memory usage, which is why I asked for the mappings and cluster stats.
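
As an illustration of the kind of mapping that can do this (a hypothetical example, not the poster's actual mapping): enabling fielddata on a text field forces its values to be built on the heap for sorting and aggregations, which can easily consume gigabytes:

    # hypothetical index and field names, for illustration only
    PUT my-index
    {
      "mappings": {
        "properties": {
          "message": {
            "type": "text",
            "fielddata": true
          }
        }
      }
    }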


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.