Node/cluster consuming too much memory

We have an Elasticsearch cluster configured as follows:

3 master nodes (16 GB RAM, 15 GB heap allocated, 2 TB HDD)
3 data nodes (64 GB RAM, 12 GB heap allocated, 2 TB HDD)
2 coordinating nodes (32 GB RAM, 15 GB heap allocated, 2 TB HDD)

The number of replicas is set to 2, and the data nodes hold 2 TB of data. There are 219 indices overall (657 shards in total), each index with 1 primary shard.
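
For reference, the actual per-node allocation can be confirmed with the `_cat/nodes` API; this is just a sketch, and the column list can be trimmed or extended:

    GET _cat/nodes?v&h=name,node.role,master,heap.max,heap.percent,ram.max

`heap.max` should match the heap sizes above, and `heap.percent` gives a first hint of which node type is under pressure.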

While indexing, I observe heap usage reaching its maximum limit.

I am not sure what is actually occupying the memory; cluster stats report ~12k segments.
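
To see where that segment memory sits, something like the following should work (standard `_cat` APIs; the exact columns available can vary between versions):

    GET _cat/nodes?v&h=name,segments.count,segments.memory,heap.percent
    GET _cat/indices?v&h=index,pri,docs.count,segments.count,segments.memory&s=segments.memory:desc

The second request sorts indices by segment memory, which makes it easy to see whether a few indices dominate or the memory is spread evenly across all 219.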

1. Is it the segment memory that is eating the heap and causing the circuit breaker exceptions? (See the breaker check after this list.) Cluster stats show:

    segments: {
      count: 12472,
      memory_in_bytes: 25171110632,
      terms_memory_in_bytes: 22302468316,
      stored_fields_memory_in_bytes: 2767848056,
      term_vectors_memory_in_bytes: 0,
      norms_memory_in_bytes: 18428096,
      points_memory_in_bytes: 77911792,
      doc_values_memory_in_bytes: 4454372,
      index_writer_memory_in_bytes: 0,
      version_map_memory_in_bytes: 0,
      fixed_bit_set_memory_in_bytes: 341385592,
      max_unsafe_auto_id_timestamp: -1,
      file_sizes: { }
    }

2. A heap dump shows a class B (java.base@127.1.0) consuming 8 GB of memory that is never released.
3. Is it a GC problem?
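
On question 1, the per-node circuit breaker state can be checked with the node stats API; a minimal sketch (field names as in recent versions):

    GET _nodes/stats/breaker?filter_path=nodes.*.name,nodes.*.breakers.parent,nodes.*.breakers.fielddata

A `tripped` count above 0 on the parent breaker, with `estimated_size_in_bytes` close to `limit_size_in_bytes`, would point at overall heap pressure rather than any single request.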
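
On questions 2 and 3, whether the old-generation collector is keeping up can be read from the JVM section of the node stats, sampled before and after a heavy indexing run; again only a sketch:

    GET _nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.gc.collectors

If `old.collection_count` keeps climbing while `heap_used_percent` stays near its high-water mark, the heap is mostly live data rather than garbage the collector is failing to reclaim.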

Any help appreciated.
