How can I reduce terms_memory_in_bytes?

"segments" : {
"count" : 11954,
"memory_in_bytes" : 12613933974,
"terms_memory_in_bytes" : 10524364098,
"stored_fields_memory_in_bytes" : 2057138120,
"term_vectors_memory_in_bytes" : 0,
"norms_memory_in_bytes" : 0,
"doc_values_memory_in_bytes" : 32431756,
"index_writer_memory_in_bytes" : 129807028,
"index_writer_max_memory_in_bytes" : 14773611532,
"version_map_memory_in_bytes" : 26636779,
"fixed_bit_set_memory_in_bytes" : 0
}

As the title says.
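For anyone else debugging this: the figures above look like the segments section of the cluster stats API. A minimal sketch to pull just the segment memory breakdown, assuming the requests library and an unauthenticated node on localhost:9200 (adjust for your cluster):

import requests  # third-party HTTP client, assumed installed

# Fetch cluster-wide index stats; the segments section holds the
# heap-resident memory figures quoted above.
stats = requests.get("http://localhost:9200/_cluster/stats").json()
segments = stats["indices"]["segments"]

for key in ("count",
            "memory_in_bytes",
            "terms_memory_in_bytes",
            "stored_fields_memory_in_bytes",
            "doc_values_memory_in_bytes"):
    print("%-34s %s" % (key, segments.get(key)))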


What version are you on?

2.3.1, thanks.

@warkolm do you have any suggestions?

Are you using doc values wherever possible?

Yes! All fields use doc_values.

Stored fields are left at the default settings. Is this memory used by the Lucene term dictionary?
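For reference, in 2.x doc_values are already enabled by default for not_analyzed fields. A minimal sketch of the mapping setting, with hypothetical index and field names and the same localhost:9200 assumption:

import requests

# Hypothetical index and field, shown only to illustrate the setting.
mapping = {
    "mappings": {
        "logs": {
            "properties": {
                "status": {
                    "type": "string",
                    "index": "not_analyzed",
                    "doc_values": True  # columnar on-disk values, off-heap
                }
            }
        }
    }
}
requests.put("http://localhost:9200/my_index", json=mapping)

Note, though, that doc_values offload fielddata, not the inverted index itself, so they wouldn't be expected to reduce terms_memory_in_bytes.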

@warkolm, any ideas?

You have a lot of shards, how much data in the cluster?

primary_shards: 12055
active_shards: 24110
100TB of data in the cluster
24 data nodes

You're probably at the limits of what you can do with that many nodes then.

You may want to look at reducing the shard count too, that's only ~4GB a shard, which is very small.
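Since you're on 2.3, the reindex API (new in 2.3) is one way to consolidate small indices into fewer, larger shards. A minimal sketch with hypothetical index names and illustrative shard counts:

import requests

ES = "http://localhost:9200"  # assumed node address

# Destination index with fewer primary shards (numbers are illustrative).
requests.put(ES + "/logs-consolidated", json={
    "settings": {"number_of_shards": 2, "number_of_replicas": 1}
})

# Copy all documents from the over-sharded source index.
requests.post(ES + "/_reindex", json={
    "source": {"index": "logs-old"},
    "dest": {"index": "logs-consolidated"}
})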

Won't shards that are too big create other problems?

What other problems?

Indexing performance and recovery speed.

What is the generally recommended shard size range?

Less than 50GB.

Are all your shards an even size? Do you have some really small ones with hardly any data, and then some that are really large?

The vast majority are in the 2-16GB range.
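One way to check the spread is the cat shards API; a sketch (same localhost assumption) that buckets primary shards into 4GB-wide size ranges:

import requests
from collections import Counter

# bytes=b makes the store column a plain byte count; no header without v.
resp = requests.get("http://localhost:9200/_cat/shards",
                    params={"h": "index,prirep,store", "bytes": "b"})

buckets = Counter()
for line in resp.text.splitlines():
    parts = line.split()
    if len(parts) < 3 or parts[1] != "p":  # skip replicas and unassigned
        continue
    gb = int(parts[2]) / 1024.0 ** 3
    buckets[int(gb // 4) * 4] += 1

for lo in sorted(buckets):
    print("%3d-%3dGB: %d primary shards" % (lo, lo + 4, buckets[lo]))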

You should definitely increase the size.

You're very kind, thanks!!!

I increased the shard size, but that didn't solve the problem; memory use only dropped by about 5%.
I want to know what terms_memory contains, and when and how it is generated. Can it be evicted LRU-style?
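As far as I understand it, terms_memory is the per-segment terms index that Lucene loads onto the heap when a segment is opened; it stays resident until the segment is merged away or closed, so it is not an LRU cache and can't be evicted. Since every segment carries its own terms index, one knob that may help is force-merging read-only indices down to fewer segments (hypothetical index name, same localhost assumption):

import requests

# Merge each shard of an index down to a single segment; fewer segments
# means fewer per-segment terms indexes held on the heap.
requests.post("http://localhost:9200/logs-old/_forcemerge",
              params={"max_num_segments": 1})

Force merging is I/O-intensive, so it's best reserved for indices that no longer receive writes.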
