We recently upgraded to v0.90.11 on a node with 12 GB of memory (10.8 GB for the old generation),
and noticed that JVM memory usage kept increasing slowly until ES eventually
went OOM.
I ran stats for the biggest index in ES, which was the main consumer of the
memory:
"filter_cache" : {
"memory_size" : "252.2mb",
"memory_size_in_bytes" : 264546840,
"evictions" : 0
},
"id_cache" : {
"memory_size" : "215.4mb",
"memory_size_in_bytes" : 225963916
},
"fielddata" : {
"memory_size" : "3.2gb",
"memory_size_in_bytes" : 3479467264,
"evictions" : 0
},
"completion" : {
"size" : "0b",
"size_in_bytes" : 0
},
"segments" : {
"count" : 333,
"memory" : "5.1gb",
"memory_in_bytes" : 5561471705
}
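For reference, the stats above were pulled with roughly the following request (the index name here is just a placeholder, and the exact stats flags may differ slightly on 0.90):

# index-level stats for the caches, fielddata and segments shown above
curl -XGET 'http://localhost:9200/myindex/_stats?filter_cache=true&id_cache=true&fielddata=true&completion=true&segments=true&pretty=true'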
It didn't seem to be a single ES request that exceeded memory, but rather a
gradual increase over many hours.
- What is the segments memory used for?
- What could cause it to keep increasing? (I ran hot threads and saw a segment
merge going on.)
- I see debug logs of GC running but not being able to reclaim memory. Why is
the memory not reclaimed by the garbage collector?
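For context, these are roughly the diagnostic calls I used (paths as I understand the 0.90 node APIs):

# hot threads, to see what the node is busy doing
curl -XGET 'http://localhost:9200/_nodes/hot_threads'
# per-node JVM heap and GC stats
curl -XGET 'http://localhost:9200/_nodes/stats?jvm=true&indices=true&pretty=true'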
Thanks