liusy
(sy)
May 4, 2016, 8:20am
1
"segments" : {
"count" : 11954,
"memory_in_bytes" : 12613933974,
"terms_memory_in_bytes" : 10524364098,
"stored_fields_memory_in_bytes" : 2057138120,
"term_vectors_memory_in_bytes" : 0,
"norms_memory_in_bytes" : 0,
"doc_values_memory_in_bytes" : 32431756,
"index_writer_memory_in_bytes" : 129807028,
"index_writer_max_memory_in_bytes" : 14773611532,
"version_map_memory_in_bytes" : 26636779,
"fixed_bit_set_memory_in_bytes" : 0
}
As the topic title says, the segment memory usage (especially terms_memory_in_bytes, ~10.5 GB here) is very large.
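These numbers appear to come from the segments section of the cluster stats (or node stats) API; a minimal way to pull them yourself, assuming curl and a node listening on localhost:9200:

curl -s 'localhost:9200/_cluster/stats?human&pretty'
curl -s 'localhost:9200/_nodes/stats/indices/segments?human&pretty'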
liusy
(sy)
May 4, 2016, 8:57am
4
@warkolm Do you have any suggestions?
warkolm
(Mark Walkom)
May 4, 2016, 9:34am
5
Are you using doc values wherever possible?
liusy
(sy)
May 4, 2016, 9:51am
6
Yes! All fields use doc_values.
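For reference, a minimal ES 2.x-era mapping sketch (index, type, and field names here are hypothetical) that enables doc_values explicitly; note that 2.x already defaults doc_values to true for not_analyzed strings and numeric fields:

curl -s -XPUT 'localhost:9200/myindex' -d '{
  "mappings" : {
    "mytype" : {
      "properties" : {
        "status" : { "type" : "string", "index" : "not_analyzed", "doc_values" : true },
        "bytes"  : { "type" : "long", "doc_values" : true }
      }
    }
  }
}'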
liusy
(sy)
May 4, 2016, 9:57am
7
The store setting is at its default. Is the memory usage caused by the Lucene term dictionary?
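Terms memory is dominated by the per-segment terms index that Lucene keeps on the heap while a segment is open. One way to see where it sits is the _cat segments API; a sketch, assuming default settings:

curl -s 'localhost:9200/_cat/segments?v&h=index,shard,segment,size,size.memory'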
warkolm
(Mark Walkom)
May 7, 2016, 12:41am
9
You have a lot of shards; how much data is in the cluster?
liusy
(sy)
May 7, 2016, 6:20am
10
primary_shards : 12055
active_shards : 24110
~100 TB of data in the cluster
24 data nodes
warkolm
(Mark Walkom)
May 7, 2016, 6:23am
11
You're probably at the limits of what you can do with that many nodes then.
You may want to look at reducing the shard count too, that's only ~4GB a shard, which is very small.
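One hedged way to consolidate is to reindex several small indices into one with fewer primaries (index names here are hypothetical; the _reindex API requires ES 2.3+):

# destination index with fewer primary shards
curl -s -XPUT 'localhost:9200/logs-2016.05' -d '{
  "settings" : { "number_of_shards" : 4 }
}'
# copy one small source index in; repeat per source
curl -s -XPOST 'localhost:9200/_reindex' -d '{
  "source" : { "index" : "logs-2016.05.01" },
  "dest" : { "index" : "logs-2016.05" }
}'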
liusy
(sy)
May 7, 2016, 6:41am
12
Will shards that are too big create other problems?
liusy
(sy)
May 7, 2016, 6:56am
14
Indexing performance and recovery speed.
liusy
(sy)
May 7, 2016, 6:57am
15
What is the generally recommended shard size range?
warkolm
(Mark Walkom)
May 7, 2016, 7:00am
16
Less than 50GB.
Are all your shards an even size? Do you have some really small ones with hardly any data, and then some that are really large?
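A quick way to check the shard-size distribution (bytes=b prints raw byte counts so the output sorts numerically):

curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,store&bytes=b' | sort -n -k4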
liusy
(sy)
May 7, 2016, 7:02am
17
The vast majority are 2-16 GB.
warkolm
(Mark Walkom)
May 7, 2016, 7:08am
18
You should definitely increase the size.
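For time-based indices, future indices can be created with fewer, larger shards via an index template; a sketch with hypothetical names and values:

curl -s -XPUT 'localhost:9200/_template/fewer_shards' -d '{
  "template" : "logs-*",
  "settings" : { "number_of_shards" : 4 }
}'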
liusy
(sy)
May 9, 2016, 5:31am
20
Increasing the shard size didn't solve this problem; it only reduced memory use by about 5%.
I want to know what the terms memory holds, and when and how it is generated. Can it be evicted LRU-style?
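As far as the Lucene side goes, the terms index is loaded when a segment is opened and released only when the segment is closed or merged away, so it behaves as per-segment overhead rather than an LRU cache. To see which indices account for most of it, the per-index stats should show the same segments breakdown (exact fields may vary by version):

curl -s 'localhost:9200/_stats/segments?human&pretty'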