I am working on CentOS 5 and running Elasticsearch 1.0.0 with the -Xms808m
-Xmx808m -Xss256k parameters. There are 17 indices with 30,200,583 docs in
total; each index holds between 1,000,000 and 2,000,000 docs, and each
index has a date field that my request query filters on.
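For reference, this is how the flags are wired in (a sketch of the stock
1.x bin/elasticsearch.in.sh convention, where ES_HEAP_SIZE expands to both
-Xms and -Xmx and ES_JAVA_OPTS is appended to JAVA_OPTS):
export ES_HEAP_SIZE=808m
export ES_JAVA_OPTS="-Xss256k"
bin/elasticsearch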
However, when I send a query through the elasticsearch-head tool for the
last 50 rows, like this:
{
...
...
...
"from": 30200533,
"size": "50"
}
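As I understand it, from + size paging makes every shard keep a priority
queue of roughly from + size hits, so "from": 30200533 forces each shard to
track about 30 million entries just to return 50 rows. Since I only want
the newest 50 rows, I could instead sort descending on the date field (a
sketch, assuming the field is literally named "date"; my real query body is
elided above):
{
  "query": { "match_all": {} },
  "sort": [ { "date": { "order": "desc" } } ],
  "from": 0,
  "size": 50
}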
It does not return a response and throws an exception like this:
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:247)
at org.apache.lucene.store.Directory.copy(Directory.java:186)
at org.elasticsearch.index.store.Store$StoreDirectory.copy(Store.java:348)
at org.apache.lucene.store.TrackingDirectoryWrapper.copy(TrackingDirectoryWrapper.java:50)
at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4596)
at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:535)
at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:502)
at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:506)
at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:616)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:370)
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:285)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:260)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:250)
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:170)
at org.apache.lucene.search.XSearcherManager.refreshIfNeeded(XSearcherManager.java:123)
at org.apache.lucene.search.XSearcherManager.refreshIfNeeded(XSearcherManager.java:59)
at org.apache.lucene.search.XReferenceManager.doMaybeRefresh(XReferenceManager.java:180)
at org.apache.lucene.search.XReferenceManager.maybeRefresh(XReferenceManager.java:229)
at org.elasticsearch.index.engine.internal.InternalEngine.refresh(InternalEngine.java:730)
at org.elasticsearch.index.shard.service.InternalIndexShard.refresh(InternalIndexShard.java:477)
at org.elasticsearch.index.shard.service.InternalIndexShard$EngineRefresher$1.run(InternalIndexShard.java:924)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
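Note that the trace is thrown from a refresh thread flushing a new segment
(EngineRefresher -> IndexWriter.createCompoundFile), so the allocation that
failed is not necessarily the one that filled the heap; the search request
may have consumed it first. To see how close the heap runs to its limit I
can poll the JVM stats (assuming a node listening on localhost:9200):
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'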
What is the problem? Is the Java heap simply not large enough, or does my
query itself cause this heap space error?
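For scale, a back-of-envelope estimate (not a measurement): with from +
size paging, Lucene keeps roughly one hit-queue entry per requested hit,
and a ScoreDoc is on the order of a few dozen bytes, so 30,200,583 entries
at ~28 bytes each is roughly 0.8 GB on a single shard, essentially the
whole 808m heap before anything else is counted.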
I asked the same question on Stack Overflow, but the solutions recommended
there are not applicable in my case. Can anyone suggest another solution to
this problem?