We have been running under constant memory pressure on our ES nodes. Upon analysis, it appears that segment memory is consuming more than 50% of the available heap. Is there a way to configure the amount of memory that can be used for caching segment data?
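For context, this is roughly how we are comparing segment memory against heap, a minimal sketch that assumes a node is reachable on localhost:9200 and uses the standard _nodes/stats API (adjust host/port for your setup):

```python
# Sketch: report segment memory as a fraction of max heap per node.
# Assumes Elasticsearch is reachable at localhost:9200.
import json
import urllib.request

STATS_URL = "http://localhost:9200/_nodes/stats/indices,jvm"

with urllib.request.urlopen(STATS_URL) as resp:
    stats = json.load(resp)

for node_id, node in stats["nodes"].items():
    # Lucene segment memory held on heap for this node
    segment_mem = node["indices"]["segments"]["memory_in_bytes"]
    # Configured maximum JVM heap for this node
    heap_max = node["jvm"]["mem"]["heap_max_in_bytes"]
    pct = 100.0 * segment_mem / heap_max
    print("%s: segments %.2f GiB / heap %.2f GiB (%.1f%%)" % (
        node.get("name", node_id),
        segment_mem / 1024.0 ** 3,
        heap_max / 1024.0 ** 3,
        pct,
    ))
```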
We thought about reducing the number of shards, but that would lead to very large shards; our index is about 1 TB. Is there a better practice / strategy to use in such cases?
We are running Elasticsearch 1.7.