Thanks for the reply.
This is the error I got:
[2012-11-21 00:26:17,510][WARN ][index.engine.robin ] [Primus] [cms_audit][0] failed engine
java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.util.PagedBytes$PagedBytesDataOutput.writeBytes(PagedBytes.java:502)
    at org.apache.lucene.store.DataOutput.writeString(DataOutput.java:114)
    at org.apache.lucene.index.TermInfosReaderIndex.<init>(TermInfosReaderIndex.java:86)
    at org.apache.lucene.index.TermInfosReader.<init>(TermInfosReader.java:116)
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:83)
    at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:116)
    at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:696)
    at org.apache.lucene.index.IndexWriter$ReaderPool.getReadOnlyClone(IndexWriter.java:654)
    at org.apache.lucene.index.DirectoryReader.<init>(DirectoryReader.java:142)
    at org.apache.lucene.index.ReadOnlyDirectoryReader.<init>(ReadOnlyDirectoryReader.java:36)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:451)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
    at org.apache.lucene.index.IndexReader.open(IndexReader.java:296)
    at org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:82)
    at org.elasticsearch.index.engine.robin.RobinEngine.buildSearchManager(RobinEngine.java:1371)
    at org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:838)
    at org.elasticsearch.index.engine.robin.RobinEngine.updateIndexingBufferSize(RobinEngine.java:221)
    at org.elasticsearch.indices.memory.IndexingMemoryController$ShardsIndicesStatusChecker.run(IndexingMemoryController.java:178)
    at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:297)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
[2012-11-21 00:26:17,945][WARN ][index.engine.robin ] [Primus] [cms_audit][0] failed to flush after setting shard to inactive
org.elasticsearch.index.engine.FlushFailedEngineException: [cms_audit][0] Flush failed
    at org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:844)
    at org.elasticsearch.index.engine.robin.RobinEngine.updateIndexingBufferSize(RobinEngine.java:221)
    at org.elasticsearch.indices.memory.IndexingMemoryController$ShardsIndicesStatusChecker.run(IndexingMemoryController.java:178)
    at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:297)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
I am using a Ruby script to transfer my data to Elasticsearch. Below is the code for that:

RestClient.post(HOST + "/_bulk", message)

RestClient comes from the rest_client gem for Ruby. The bulk API batch size is 100 docs.
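For reference, a minimal sketch of how a _bulk body like this can be assembled. This is illustrative only: the endpoint, type name, and document fields are placeholders, since only the POST call appears above (the index name "cms_audit" is taken from the log):

require 'rest_client'
require 'json'

HOST = "http://localhost:9200" # assumed endpoint

# The _bulk body is newline-delimited JSON: one action line, then one
# source line per document, each terminated by "\n".
def bulk_body(docs, index, type)
  docs.map { |doc|
    action = { "index" => { "_index" => index, "_type" => type } }.to_json
    "#{action}\n#{doc.to_json}\n"
  }.join
end

# Hypothetical batch of 100 documents, matching the batch size above.
batch = (1..100).map { |i| { "id" => i, "entry" => "audit record #{i}" } }
message = bulk_body(batch, "cms_audit", "audit") # type name is a placeholder
RestClient.post(HOST + "/_bulk", message)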
Also, please explain why you feel that the top output is reasonable.
My interpretation (which might be wrong) was that 10 GB of RAM should be large enough for Elasticsearch to work, and that if it allocates more than that, it must also deallocate memory. The RAM usage kept increasing continuously while I was sending data to the server, and it did not decrease or get deallocated when I stopped sending the data.
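To see what is going on inside the heap (as opposed to the resident size top reports for the whole process), one can query the node stats API. A rough sketch; the ?jvm=true flag and the field names are my assumption for a 0.20-era node and may need adjusting:

require 'rest_client'
require 'json'

HOST = "http://localhost:9200" # assumed endpoint

# Heap used vs. committed inside the JVM; top's RSS only shows what the
# process has claimed from the OS, which the JVM rarely hands back.
stats = JSON.parse(RestClient.get("#{HOST}/_nodes/stats?jvm=true"))
stats["nodes"].each do |node_id, node|
  mem = node["jvm"]["mem"]
  puts "#{node_id}: heap_used=#{mem['heap_used']}, heap_committed=#{mem['heap_committed']}"
end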
On Monday, 26 November 2012 15:08:50 UTC+5:30, Jörg Prante wrote:
Can you provide us with details of the "out of heap space" messages?
Is it "OutOfMemoryException"?
What kind of API do you use? What client?
Since your top output looks reasonable, there might be other
causes. OutOfMemoryException is also relevant to socket resources for
example, if you don't carefully manage the clients.
It's not the cache, the cache is only for queries.
Best regards,
Jörg