Elasticsearch issues on Maven 3.3.3 Docker image

I am using an embedded Elasticsearch server to publish some statistics and query them. When I run this locally, everything works fine. But when I run it in Bitbucket Pipelines using the maven:3.3.3 Docker image, I get the following traces. Please help.

2016-09-07T09:45:58,566 [elasticsearch[Chance][generic][T#4]] [] [] DEBUG transport [Chance] failed to connect to node [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}], removed from nodes list
org.elasticsearch.transport.ConnectTransportException: [][localhost/127.0.0.1:9300] connect_timeout[30s]
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:952) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:916) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:888) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:267) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:354) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:300) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.client.transport.TransportClientNodesService$ScheduledNodeSampler.run(TransportClientNodesService.java:333) ~[elasticsearch-2.3.3.jar:2.3.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9300
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_91]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_91]
    at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152) ~[netty-3.10.5.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105) ~[netty-3.10.5.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79) ~[netty-3.10.5.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[netty-3.10.5.Final.jar:?]
    at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) ~[netty-3.10.5.Final.jar:?]
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.10.5.Final.jar:?]
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.10.5.Final.jar:?]
    ... 3 more

2016-09-07T09:46:03,418 [elasticsearch[EmbeddedESNode][clusterService#updateTask][T#1]] [] []  WARN gateway [EmbeddedESNode] [_global]: failed to write global state
java.lang.AssertionError: On Linux and MacOSX fsyncing a directory should not throw IOException, we just don't want to rely on that in production (undocumented). Got: java.io.IOException: Invalid argument
    at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:396) ~[lucene-core-5.5.0.jar:5.5.0 2a228b3920a07f930f7afb6a42d0d20e184a943c - mike - 2016-02-16 15:18:34]
    at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:133) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.gateway.MetaStateService.writeGlobalState(MetaStateService.java:149) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.gateway.GatewayMetaState.clusterChanged(GatewayMetaState.java:148) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.gateway.Gateway.clusterChanged(Gateway.java:185) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:610) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231) ~[elasticsearch-2.3.3.jar:2.3.3]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194) ~[elasticsearch-2.3.3.jar:2.3.3]...

Hmm, ES/Lucene is trying to fsync a directory here, to ensure the global state changes are durably written, but is unexpectedly hitting an IOException in your JVM/IO device, which is not good.

Which JVM are you running? What IO system are you using?

Separately, it looks like you are running with assertions enabled; if you disable them, do things work?


Thanks for the reply @mikemccand. I figured out the issue, and it is similar to what you suggested: I was running ES on a vboxsf mount (a VirtualBox shared directory), which doesn't support fsync on directories. That was the problem. Cheers!
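For anyone hitting the same trace: a minimal sketch of how you might probe whether a filesystem supports directory fsync before starting an embedded node. The class and method names here are mine, not from the thread; the probe mirrors what Lucene's `IOUtils.fsync` does for directories (open a read-only channel on the directory and call `force(true)`), which is the call that fails with `IOException: Invalid argument` on a vboxsf mount.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirFsyncProbe {

    /**
     * Returns true if the directory at {@code dir} can be fsynced, false if
     * the filesystem rejects the operation (as vboxsf shared folders do).
     * This mimics the directory-fsync path in Lucene's IOUtils.fsync.
     */
    static boolean supportsDirectoryFsync(Path dir) {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true); // flush directory metadata to the device
            return true;
        } catch (IOException e) {
            // e.g. "Invalid argument" (EINVAL) on vboxsf mounts
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dataDir = args.length > 0
                ? Path.of(args[0])
                : Files.createTempDirectory("fsync-probe");
        System.out.println(dataDir + " supports directory fsync: "
                + supportsDirectoryFsync(dataDir));
    }
}
```

Run it against the intended `path.data` location; if it prints `false`, point Elasticsearch at a native filesystem (inside the container or VM) instead of a shared-folder mount.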