I ran a load test on a single node (with the default configuration) and
found what looks like a memory leak in the SoftFilterCache.
The test used 20 concurrent users, each adding a document once every ~5
seconds.
10 additional concurrent users each ran a search once every ~1 minute.
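For reference, each simulated user did roughly the following (a minimal
Java sketch of the workload, not the actual test script; the node address,
index/type names, document fields, and the query are placeholders):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.atomic.AtomicLong;

public class LoadSketch {
    static final AtomicLong ID = new AtomicLong();

    public static void main(String[] args) {
        for (int i = 0; i < 20; i++) {           // 20 indexing users
            new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        try {
                            long id = ID.incrementAndGet();
                            String doc = "{\"user\":\"u" + (id % 20) + "\",\"msg\":\"hello " + id + "\"}";
                            send("PUT", "http://localhost:9200/test/doc/" + id, doc);
                            Thread.sleep(5000);  // one document every ~5 seconds
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
        for (int i = 0; i < 10; i++) {           // 10 searching users
            new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        try {
                            send("GET", "http://localhost:9200/test/doc/_search?q=msg:hello", null);
                            Thread.sleep(60000); // one search every ~1 minute
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
    }

    static void send(String method, String url, String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod(method);
        if (body != null) {
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            out.write(body.getBytes("UTF-8"));
            out.close();
        }
        conn.getResponseCode(); // read the status and move on
        conn.disconnect();
    }
}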
After about 12 hours the JVM ran out of heap space (1 GB) and
Elasticsearch went into repeated GC cycles that left it unresponsive.
The heap dump shows that the problematic reference chain is
org.elasticsearch.index.cache.filter.soft.SoftFilterCache ->
org.elasticsearch.util.concurrent.highscalelib.NonBlockingHashMap,
which holds 26600 org.apache.lucene.index.SegmentReader$CoreReaders
instances occupying about 660 MB.
My store configuration is:
index :
    store:
        fs:
            memory:
                enabled: true
I will try another test with memory.enabled=false to see how it affects
the results.
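That is, the same block with the memory layer switched off:

index :
    store:
        fs:
            memory:
                enabled: false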
In the meantime, do you have any suggestions on how this can be
resolved?
I just ran some stress tests before the upcoming 0.7 release and I don't see
what you are getting (5 billion index/search requests against a single node
with a 256 MB JVM heap and a single index, and it seems to be fine). So, I
would need to know exactly what you do against Elasticsearch to be able to
recreate this...
By the way, if you know JMeter, I have several benchmark scripts that I use
(they are in the source repo); if you can simulate it with a custom script,
that would be great.
The leak is not really there, it's just that the cache evicts entries when
the readers are not used anymore. I think I have fixed this problem in
master, can you give it a go?
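To make that concrete, here is a simplified stand-in (not the actual
SoftFilterCache code): filter results are cached per segment reader and held
through soft references, so the per-reader entries only go away once the
reader is purged when it is no longer used, or when the JVM clears the soft
references under memory pressure.

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch of a reader-keyed, soft-valued filter cache.
public class ReaderKeyedFilterCacheSketch {

    // one inner map per segment reader; filter bitsets are softly reachable
    private final ConcurrentMap<Object, ConcurrentMap<String, SoftReference<long[]>>> cache =
            new ConcurrentHashMap<Object, ConcurrentMap<String, SoftReference<long[]>>>();

    public long[] filter(Object readerKey, String filterName) {
        ConcurrentMap<String, SoftReference<long[]>> perReader = cache.get(readerKey);
        if (perReader == null) {
            ConcurrentMap<String, SoftReference<long[]>> created =
                    new ConcurrentHashMap<String, SoftReference<long[]>>();
            ConcurrentMap<String, SoftReference<long[]>> existing = cache.putIfAbsent(readerKey, created);
            perReader = existing != null ? existing : created;
        }
        SoftReference<long[]> ref = perReader.get(filterName);
        long[] bits = ref == null ? null : ref.get();
        if (bits == null) {
            bits = new long[1 << 16]; // stand-in for the computed filter bitset
            perReader.put(filterName, new SoftReference<long[]>(bits));
        }
        return bits;
    }

    // called when a segment reader is not used anymore; without this call
    // the per-reader entry stays in the map for as long as the reader
    // itself is still referenced somewhere
    public void purge(Object readerKey) {
        cache.remove(readerKey);
    }
}

With every refresh producing new segment readers, the map keeps one entry per
reader until the old readers are purged, which is consistent with what the
heap dump above shows.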