Thanks both for your input.
@Jörg:
I understand ES uses all available process memory. I meant JVM heap usage,
which it tries to reclaim when it exceeds 75% (due to the
-XX:CMSInitiatingOccupancyFraction=75 option).
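To put a number on that threshold (a quick sketch; note that CMSInitiatingOccupancyFraction is measured against old-gen occupancy, and the 10.9gb capacity figure comes from the GC log below):

```python
# -XX:CMSInitiatingOccupancyFraction=75 starts a CMS cycle once the old
# generation crosses 75% occupancy (of the old gen, not the whole heap).
old_gen_capacity_gb = 10.9  # old-gen capacity from the GC log below

cms_trigger_gb = old_gen_capacity_gb * 0.75
print(f"CMS kicks in around {cms_trigger_gb:.2f}gb of old-gen occupancy")
# The log shows the old gen at 10.6gb even after a full collection, far past
# the trigger point, which is why CMS runs back to back.
```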
I don't know what kind of queries use the Lucene FST; could you be kind enough
to explain? I also didn't know about the bloom filter and its
memory usage; is there a way to check how much memory it's adding?
I will update the JVM, but the issue is that the same bulk indexing was not
driving the node out of memory on v0.90.7, while it is on v0.90.11.
@Adrien:
I will play with merge throttling to speed it up. What worries me more is
that even many hours after the merge operations finished, the memory still
hadn't been reclaimed.
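For anyone following along, this is the store-level throttle I plan to experiment with (a sketch; as I understand the 0.90 docs, merges default to a "merge"-type throttle at 20mb/sec since 0.90.1, and both settings are dynamic cluster settings):

```shell
# Raise the I/O cap applied to segment merges, cluster-wide.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "indices.store.throttle.type" : "merge",
    "indices.store.throttle.max_bytes_per_sec" : "100mb"
  }
}'
```

If merges still can't keep up, the value may need to go higher, or the throttle type set to "none".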
FYI, from the ES logs -
[2014-02-14 10:09:54,109][WARN ][monitor.jvm ] [machine1.node2] [gc][old][75611][2970] duration [43s], collections [1]/[44.1s], total [43s]/[55.5m], memory [11.3gb]->[10.6gb]/[11.8gb], all_pools {[young] [454.6mb]->[10.4mb]/[865.3mb]}{[survivor] [108.1mb]->[0b]/[108.1mb]}{[old] [10.8gb]->[10.6gb]/[10.9gb]}
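Doing the arithmetic on that [gc][old] line (a quick sketch using the numbers above):

```python
# From the log line above: a single old-gen collection took 43 seconds and
# moved the old generation from 10.8gb to 10.6gb of a 10.9gb capacity.
old_before_gb, old_after_gb, old_capacity_gb = 10.8, 10.6, 10.9

freed_gb = old_before_gb - old_after_gb
occupancy_pct = old_after_gb / old_capacity_gb * 100
print(f"43s pause freed {freed_gb:.1f}gb; old gen still {occupancy_pct:.0f}% full")
# Nearly everything in the old gen survived the collection, i.e. the heap is
# full of live objects that GC cannot reclaim, not garbage.
```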
And from a /_cluster/stats request -
"fielddata" : {
  "memory_size" : "3.6gb",
  "memory_size_in_bytes" : 3881191105,
  "evictions" : 0
},
"filter_cache" : {
  "memory_size" : "622.4mb",
  "memory_size_in_bytes" : 652677071,
  "evictions" : 0
},
"id_cache" : {
  "memory_size" : "2gb",
  "memory_size_in_bytes" : 2170019078
},
"completion" : {
  "size" : "0b",
  "size_in_bytes" : 0
},
"segments" : {
  "count" : 789,
  "memory" : "3.4gb",
  "memory_in_bytes" : 3730255779
}
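Adding those four byte counts up against the 11.8gb heap from the GC log (a quick sanity check):

```python
# Byte counts copied from the /_cluster/stats output above.
fielddata_b    = 3_881_191_105
filter_cache_b =   652_677_071
id_cache_b     = 2_170_019_078
segments_b     = 3_730_255_779  # per-segment terms index, norms, etc.

total_b = fielddata_b + filter_cache_b + id_cache_b + segments_b
heap_b = int(11.8 * 2**30)  # 11.8gb heap, per the GC log

print(f"{total_b / 2**30:.1f}gb of {heap_b / 2**30:.1f}gb heap "
      f"({total_b / heap_b:.0%}) is caches plus segment metadata")
# Roughly 9.7gb of an 11.8gb heap is long-lived cache/segment state, which
# matches the GC log: collections run constantly but free almost nothing.
```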
If the node is running out of memory, shouldn't ES be reclaiming the id_cache
or fielddata?
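As far as I can tell, fielddata has no size limit by default in 0.90.x (which would explain the 0 evictions above), and I gather the id_cache only goes away when the segments holding it do or when it is cleared explicitly. A sketch of the elasticsearch.yml cap I'm considering; the 30% value is my own arbitrary choice, not a recommendation:

```yaml
# elasticsearch.yml -- bound the fielddata cache so old entries get
# evicted instead of accumulating until the heap fills up.
# (30% of heap is an arbitrary starting point.)
indices.fielddata.cache.size: 30%
```

With a bound set, fielddata should start reporting evictions instead of growing without limit.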
On Thursday, February 13, 2014 10:19:28 AM UTC-5, Ankush Jhalani wrote:
We have a single-node ES instance (12GB heap, 16 cores) into which 12 threads
are bulk indexing into a 12-shard index. Each thread sends requests ranging in
size from KBs to a couple of megabytes. The bulk thread pool queue_size was
increased from the default 50 to 100.
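For reference, the exact setting we changed (in elasticsearch.yml):

```yaml
# elasticsearch.yml -- bulk thread pool queue, raised from the default 50
threadpool.bulk.queue_size: 100
```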
With v0.90.11, we are noticing that JVM memory usage keeps growing slowly and
doesn't go down; GC runs frequently but doesn't free up much memory. From the
debug logs, it seems segment merges are happening. However, even after we stop
indexing, the instance stays busy doing segment merges for many hours. Sample
gist from hot threads I ran a couple of minutes apart: (Hot threads for a node
doing merge segments very slowly · GitHub). Even after 16 hours with little
use of the machine, JVM memory usage is about 80% (CMS should run at 75%), and
node stats show GC running very frequently.
If we don't stop indexing, the instance eventually goes out of memory after
60-70GB of indexing. This seems like a memory leak; we didn't face this issue
with 0.90.7 (though we were probably using a 6-thread process for bulk
indexing then).
--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.