OutOfMemoryError: Direct buffer memory

My colleague is running into a strange memory error with Elasticsearch on
his MacBook that seems to come and go. We've been seeing this error pretty
much from the moment I migrated to the Lucene 4.x snapshot builds two
months ago.

He's running pretty much an identical setup to mine on a slightly older
13" MacBook (I have the 15"). Both have 8GB of RAM. I have an SSD, he has
a normal laptop drive. Both use the same JVM settings. Both have a very
recent Sun JDK 1.7. Both run OS X 10.8.x (I updated to 10.8.3 last week,
he's still on 10.8.2).

He's seeing these errors randomly; I've never seen them. Unlike him, I
actually use the index (see my other post yesterday ;-)). All he does is
run a Maven build that launches ES, inserts a handful of test objects and
then shuts ES down. We start ES with an in-memory index in our tests.

We both use these settings:

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_13.jdk/Contents/Home
PATH=$JAVA_HOME/bin:/usr/local/mvn/bin:$JRUBY_HOME/bin:$PATH; export PATH
MAVEN_OPTS='-Xmx2048m -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled -Djava.awt.headless=true'

We've been seeing this error on and off on his machine for a few months
now and I've been struggling to figure out what the problem is, since I
can't reproduce it on my own machine. 2GB is more than enough for running
our tests; when I run the same build it peaks at a mere 225MB in
jvisualvm. The direct buffer memory appears to be some shared memory buffer.
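
For what it's worth, my current understanding is that direct buffers are
allocated natively via ByteBuffer.allocateDirect() and are capped by
-XX:MaxDirectMemorySize (which, if unset, defaults to roughly the -Xmx
value), so they don't show up as heap in jvisualvm. A minimal sketch that
provokes the same error on purpose (run with e.g.
java -XX:MaxDirectMemorySize=16m DirectBufferOom):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Allocates 1MB direct (off-heap) buffers until the native limit is hit.
// Fails with java.lang.OutOfMemoryError: Direct buffer memory thrown from
// java.nio.Bits.reserveMemory, just like the trace below.
public class DirectBufferOom {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
        for (int i = 0; ; i++) {
            buffers.add(ByteBuffer.allocateDirect(1024 * 1024)); // outside -Xmx
            System.out.println("allocated " + (i + 1) + "MB of direct memory");
        }
    }
}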

So, has anyone seen similar errors?

Here's the stacktrace:

INFO [elasticsearch[devnode][clusterService#updateTask][T#1]]
(Log4jESLogger.java:104) - [devnode] [localstream] creating index, cause
[auto(index api)], shards [5]/[1], mappings []
INFO [elasticsearch[devnode][generic][T#2]] (Log4jESLogger.java:104) -
[devnode] [localstream][0] deleting shard content
INFO [elasticsearch[devnode][generic][T#2]] (Log4jESLogger.java:104) -
[devnode] [localstream][1] deleting shard content
INFO [elasticsearch[devnode][generic][T#2]] (Log4jESLogger.java:104) -
[devnode] [localstream][2] deleting shard content
INFO [elasticsearch[devnode][generic][T#2]] (Log4jESLogger.java:104) -
[devnode] [localstream][3] deleting shard content
INFO [elasticsearch[devnode][generic][T#2]] (Log4jESLogger.java:104) -
[devnode] [localstream][4] deleting shard content
INFO [elasticsearch[devnode][clusterService#updateTask][T#1]]
(Log4jESLogger.java:104) - [devnode] [localstream] update_mapping [poi]
(dynamic)
INFO [elasticsearch[devnode][clusterService#updateTask][T#1]]
(Log4jESLogger.java:104) - [devnode] [posts] update_mapping [post] (dynamic)
WARN [elasticsearch[devnode][refresh][T#3]] (Log4jESLogger.java:119) -
[devnode] [posts][0] failed engine
java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:658)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
    at org.apache.lucene.store.bytebuffer.PlainByteBufferAllocator.allocate(PlainByteBufferAllocator.java:55)
    at org.apache.lucene.store.bytebuffer.CachingByteBufferAllocator.allocate(CachingByteBufferAllocator.java:52)
    at org.elasticsearch.cache.memory.ByteBufferCache.allocate(ByteBufferCache.java:101)
    at org.apache.lucene.store.bytebuffer.ByteBufferIndexOutput.switchCurrentBuffer(ByteBufferIndexOutput.java:106)
    at org.apache.lucene.store.bytebuffer.ByteBufferIndexOutput.writeBytes(ByteBufferIndexOutput.java:93)
    at org.elasticsearch.common.lucene.store.BufferedChecksumIndexOutput.flushBuffer(BufferedChecksumIndexOutput.java:65)
    at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:113)
    at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:102)
    at org.elasticsearch.common.lucene.store.BufferedChecksumIndexOutput.flush(BufferedChecksumIndexOutput.java:76)
    at org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:126)
    at org.elasticsearch.common.lucene.store.BufferedChecksumIndexOutput.close(BufferedChecksumIndexOutput.java:59)
    at org.elasticsearch.index.store.Store$StoreIndexOutput.close(Store.java:545)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:146)
    at org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.close(Lucene42DocValuesConsumer.java:162)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:146)
    at org.apache.lucene.index.NormsConsumer.flush(NormsConsumer.java:67)
    at org.apache.lucene.index.DocInverter.flush(DocInverter.java:54)
    at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
    at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:493)
    at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
    at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:357)
    at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:270)
    at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:245)
    at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
    at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:169)
    at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:118)
    at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
    at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:155)
    at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:204)
    at org.elasticsearch.index.engine.robin.RobinEngine.refresh(RobinEngine.java:763)
    at org.elasticsearch.index.shard.service.InternalIndexShard.refresh(InternalIndexShard.java:402)
    at org.elasticsearch.action.admin.indices.refresh.TransportRefreshAction.shardOperation(TransportRefreshAction.java:120)
    at org.elasticsearch.action.admin.indices.refresh.TransportRefreshAction.shardOperation(TransportRefreshAction.java:49)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.performOperation(TransportBroadcastOperationAction.java:265)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction.performOperation(TransportBroadcastOperationAction.java:242)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:218)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
    Suppressed: java.lang.OutOfMemoryError: Direct buffer memory
        ... 43 more
WARN [elasticsearch[devnode][generic][T#2]] (Log4jESLogger.java:114) -
[devnode] sending failed shard for [posts][0],
node[fhtPH4u0TmS9dhSjudUCJQ], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Direct buffer memory]]]
WARN [elasticsearch[devnode][generic][T#2]] (Log4jESLogger.java:114) -
[devnode] received shard failed for [posts][0],
node[fhtPH4u0TmS9dhSjudUCJQ], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Direct buffer memory]]]
Tests run: 142, Failures: 2, Errors: 0, Skipped: 74, Time elapsed: 70.052
sec <<< FAILURE!
INFO [Thread-1] (Log4jESLogger.java:104) - [devnode]
{0.90.0.RC2-SNAPSHOT}[966]: stopping ...
WARN [Thread-1] (Log4jESLogger.java:119) - An exception was thrown by an
exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:115)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:71)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:55)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.execute(DefaultChannelPipeline.java:632)
    at org.elasticsearch.common.netty.channel.Channels.fireExceptionCaughtLater(Channels.java:496)
    at org.elasticsearch.common.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:46)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:654)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:777)
    at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
    at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
    at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:587)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:578)
    at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:704)
    at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:671)
    at org.elasticsearch.common.netty.channel.AbstractChannel.write(AbstractChannel.java:248)
    at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:152)
    at org.elasticsearch.rest.action.index.RestIndexAction$1.onFailure(RestIndexAction.java:139)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$3.onClose(TransportShardReplicationOperationAction.java:497)
    at org.elasticsearch.cluster.service.InternalClusterService.doStop(InternalClusterService.java:128)
    at org.elasticsearch.common.component.AbstractLifecycleComponent.stop(AbstractLifecycleComponent.java:105)
    at org.elasticsearch.node.internal.InternalNode.stop(InternalNode.java:250)
    at org.elasticsearch.node.internal.InternalNode.close(InternalNode.java:269)
    at com.localstream.es.EsLauncher$1.run(EsLauncher.java:31)
INFO [Thread-2] (AbstractApplicationContext.java:1042) - Closing
org.springframework.context.support.GenericApplicationContext@1e6c8a0c:
startup date [Thu Mar 21 11:04:16 CET 2013]; root of context hierarchy
INFO [Thread-1] (Log4jESLogger.java:104) - [devnode]
{0.90.0.RC2-SNAPSHOT}[966]: stopped
INFO [Thread-2] (DefaultSingletonBeanRegistry.java:444) - Destroying
singletons in
org.springframework.beans.factory.support.DefaultListableBeanFactory@632270ff:
defining beans
[org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,springConfig,embeddedESLauncherConfig,org.springframework.context.annotation.ConfigurationClassPostProcessor.importAwareProcessor,com.localstream.server.spring.ConfigurationConfig,configuration,esRestClient,parser,placesDAO,esPostDAO,esUserDAO,httpClient];
root of factory hierarchy
INFO [Thread-1] (Log4jESLogger.java:104) - [devnode]
{0.90.0.RC2-SNAPSHOT}[966]: closing ...
INFO [Thread-1] (Log4jESLogger.java:104) - [devnode]
{0.90.0.RC2-SNAPSHOT}[966]: closed


If the machine has only 2GB of RAM you'll have a problem: direct buffer
memory lives outside of the heap specified with -Xmx. Try decreasing that
2GB -Xmx value.
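
You can also cap direct memory explicitly with -XX:MaxDirectMemorySize
(if unset it defaults to roughly the -Xmx value). For example, a variant
of the MAVEN_OPTS above with a smaller heap and an explicit cap (the
values here are only a guess, tune them for your build):

MAVEN_OPTS='-Xmx1024m -XX:MaxDirectMemorySize=512m -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled -Djava.awt.headless=true'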

Regards,
Peter.

On Thursday, March 21, 2013 11:57:24 AM UTC+1, Jilles van Gurp wrote:


The machine has 8GB, the process has 2GB heap assigned and there is plenty
of memory available. Could swapping cause this problem though?

Jilles

On Thursday, March 21, 2013 1:10:06 PM UTC+1, Karussell wrote:


I recommend not using the "in-memory" = bytebuffer store if you keep
seeing "java.lang.OutOfMemoryError: Direct buffer memory".

The "bytebuffer store" is an experimental, NIO-based store in ES with all
kinds of rough edges, for instance in memory management, where it depends
on how the JVM addresses direct memory in the VFS of the OS. It was aimed
at replacing the Lucene MemoryIndex (which was almost unusable some years
ago).

Why not use the file-based "niofs" store and "gateway: none" for quick
tests? If you need the speed, you can point a niofs index store at a tmp
filesystem.
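
For example, something like this in elasticsearch.yml (a sketch; setting
names as I remember them from 0.90.x, and the path is just an
illustration):

index.store.type: niofs
gateway.type: none
path.data: /tmp/es-test-data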

Jörg

On 21.03.13 11:57, Jilles van Gurp wrote:

We start ES with an in-memory index in our tests.


We have to use the in-memory store due to latency constraints.
Other store types would work, but they don't fit the purpose.
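
That said, if I remember the 0.90 docs correctly, the memory store can be
told to allocate its byte buffers on the heap instead of as direct
memory, which at least keeps the usage inside -Xmx territory — a sketch
(setting names from memory, please verify against the docs):

index.store.type: memory
cache.memory.direct: false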

Nikolay

On Thursday, March 21, 2013 2:37:34 PM UTC-4, Jörg Prante wrote:


Hi, I get the same error on my Win8 32-bit laptop with 4GB RAM and Java
1.7.0_45. I am trying to index a few thousand documents (each one
containing a field of approx. 30KB) and, after properly indexing 198
documents, the 199th always gets the following exception:

Nov 21, 2013 3:14:14 PM org.elasticsearch.netty.channel.socket.nio.AbstractNioSelector
WARNING: Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Unknown Source)
    at java.nio.DirectByteBuffer.<init>(Unknown Source)
    at java.nio.ByteBuffer.allocateDirect(Unknown Source)
    at org.elasticsearch.common.netty.channel.socket.nio.SocketReceiveBufferAllocator.newBuffer(SocketReceiveBufferAllocator.java:64)

I started with an out-of-the-box distribution of elasticsearch-0.90.7
(i.e. 1 node, 5 shards) and have been experimenting with different
combinations of settings, for example:

index.store.type: niofs
set ES_MIN_MEM=1g
set ES_MAX_MEM=1g
set ES_DIRECT_SIZE=1g

but I cannot find a combination that makes the system work. I'm not
looking for high performance, obviously, just a working ES development
environment on the laptop.

Any help on how to configure an ES dev environment on a laptop so it can
import this set of documents?

Thanks


Any clue on this?

What would be the list of settings affecting hardware requirements, so I
can use the most conservative values and, from there, identify minimum ES
requirements?

Thanks


Hi,

JVM memory pool monitoring in SPM would help you figure out which pool is
too small (see the Sematext announcement on JVM memory pool monitoring),
although you can use JConsole or jstat for that as well if you just need
some ad-hoc checking for this particular problem. Once you know which
pool is problematic, you can change your JVM params.
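
If you want to check the direct pool from code rather than a UI, Java 7
exposes it via BufferPoolMXBean — a small sketch:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

// Prints the "direct" and "mapped" buffer pools (Java 7+), i.e. how much
// off-heap buffer memory is in use versus its total capacity.
public class BufferPools {
    public static void main(String[] args) {
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println(pool.getName() + ": count=" + pool.getCount()
                    + " used=" + pool.getMemoryUsed() + " bytes"
                    + " capacity=" + pool.getTotalCapacity() + " bytes");
        }
    }
}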

Otis

Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/

On Thursday, November 21, 2013 7:30:14 PM UTC-5, pelly...@yahoo.es wrote:


Hi, thanks. I used the bigdesk plugin instead, as it monitors heap memory
/ non-heap memory / OS memory / swap memory.

I now know a lot of things that do not work:

  • set -Xss512k
  • set -Xms512m
  • set -Xmx512m
  • set -XX:MaxPermSize=64m
  • set -XX:PermSize=64m
  • increasing swap to max 6M
  • index.store.type: niofs
  • changed the 199th document

But neither heap, non-heap, OS memory, nor swap memory reaches half its
capacity as shown by the bigdesk plugin, and the error still occurs when
indexing the 199th document, at an index size of 4.5mb (4.5mb).

Well, I cannot ask for miracles from a 4GB laptop running 32-bit Windows,
but I have run out of ideas. Never mind, I will look into upgrading the
memory, but 32 bits limits it to 4GB anyway.

Thanks


Check if you can set up a 64-bit OS and a 64-bit JVM. This helps with
direct memory addressing.
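
A quick way to verify what the JVM itself reports (sun.arch.data.model is
HotSpot-specific):

public class Bitness {
    public static void main(String[] args) {
        // Prints "32" or "64" for the JVM's data model, plus its architecture.
        System.out.println(System.getProperty("sun.arch.data.model"));
        System.out.println(System.getProperty("os.arch"));
    }
}

java -version also says so directly when it is a 64-bit VM ("64-Bit
Server VM").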

Jörg


[2013-11-22 05:57:27,485][WARN ][http.netty ] [super] Caught exception while handling client http traffic, closing connection [id: 0x596d687e, /:35644 => /:9200]
java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:659)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:113)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:305)
    at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:75)
    at sun.nio.ch.IOUtil.write(IOUtil.java:87)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:352)
    at org.elasticsearch.common.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:335)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:679)

Hmm, I encountered a similar exception today on a 64-bit Linux OS with a
64-bit OpenJDK, but with a very small heap of 128MB. I eventually
increased it to 256MB, and the error does not seem to appear for now.
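
(For anyone wondering how to bump it: with the stock start scripts the
usual knob is the ES_HEAP_SIZE env var, which sets -Xms and -Xmx
together, if I recall correctly — e.g. export ES_HEAP_SIZE=256m.)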

/Jason

On Fri, Nov 22, 2013 at 4:46 PM, joergprante@gmail.com <joergprante@gmail.com> wrote:



Yeah, I think it is related, Jörg. This is actually a 64-bit laptop
running a 32-bit Win8 (and therefore limiting memory to less than 4GB) by
mistake.

Maybe it has something to do with it. I will try it out as soon as I can
to see if this mismatch is related.


Confirmed, Jörg:

after upgrading from 32-bit Windows 8 to 64-bit Windows 8.1, and Java to
a 64-bit JVM, on the 64-bit laptop, files can be indexed.

There is an exception again later:

INFO: [Piledriver] failed to get node info for [#transport#-1][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9300]][cluster/nodes/info] request_id [129] timed out after [5043ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

but that is probably just too demanding, since memory is full and I am
running within Eclipse.

So it is not a good idea to use a 32-bit Windows 8 on a 64-bit laptop, as
it will trigger the exceptions above.

Thanks.


Finally, I was happily surprised that using a BulkProcessor, as suggested
here, works great on a 4GB laptop:

https://groups.google.com/forum/?hl=en-GB#!topicsearchin/elasticsearch/org.elasticsearch.transport.ReceiveTimeoutTransportException|sort:date/elasticsearch/yiIPeI29OYw

No memory leaks at all. Recommended if anyone runs into the same issues
with bulk indexing.
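
In case it helps anyone, a rough sketch of the wiring (class name and
threshold values are just examples; check the BulkProcessor javadoc for
your ES version):

import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

// Batches index requests so only a bounded amount is in flight at once,
// instead of firing thousands of individual index calls.
public class BulkIndexer {
    public static BulkProcessor create(Client client) {
        return BulkProcessor.builder(client, new BulkProcessor.Listener() {
            public void beforeBulk(long id, BulkRequest request) {}
            public void afterBulk(long id, BulkRequest request, BulkResponse response) {}
            public void afterBulk(long id, BulkRequest request, Throwable failure) {
                failure.printStackTrace(); // don't swallow bulk failures
            }
        })
        .setBulkActions(500)                                // flush every 500 docs...
        .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)) // ...or every 5MB
        .setConcurrentRequests(1)                           // one bulk in flight
        .build();
        // Feed it with processor.add(indexRequest) and call processor.close() at the end.
    }
}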
