Extremely slow indexing -- Java throwing HTTP exception errors

Hello all,

So here's the issue: our cluster was previously very underutilized in terms of
resource consumption, and after some config changes (see the complete config
below) we were able to drive resource consumption up -- but we are still
indexing documents at the same sluggish rate of < 400 docs/second.
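
For anyone who wants to reproduce that number: a minimal sketch of one way to
sample the indexing rate from the indices stats API -- assuming Python 3,
nothing beyond the standard library, and a node reachable on port 9200
(localhost below is just a placeholder):

# Sample docs/second from the indices stats API over a one-minute window.
# HOST is a placeholder; point it at whichever node answers HTTP.
import json
import time
import urllib.request

HOST = "http://localhost:9200"

def index_total():
    # primaries' index_total = number of documents indexed so far
    with urllib.request.urlopen(HOST + "/_stats") as resp:
        stats = json.loads(resp.read().decode("utf-8"))
    return stats["_all"]["primaries"]["indexing"]["index_total"]

before = index_total()
time.sleep(60)
after = index_total()
print("indexing rate: %.1f docs/second" % ((after - before) / 60.0))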

Redis and Logstash are definitely not the bottlenecks, and indexing seems to
be getting exponentially worse as we pull in more data. We are using
Elasticsearch v1.1.1.

The Java HTTP exception errors would definitely explain the sluggishness, as
there seems to be a socket timeout every second, like clockwork -- but I'm at
a loss as to what could be causing the errors in the first place.

We are running Redis, Logstash, Kibana, and the ES master (no data) on one
node, and have our Elasticsearch data instance on another node. Network
latency is definitely not so atrocious that it would be an outright
bottleneck, and data gets to the secondary node fast enough -- but it backs up
at indexing.
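
To pin down where it backs up, one thing worth checking is whether the
index/bulk thread pools on the data node are queueing or rejecting work. A
minimal sketch (Python 3, standard library only; the address below is assumed
to be the data node from the log, adjust as needed):

# Print index/bulk thread pool queue depth and rejections for every node.
# Growing queues or non-zero rejections would point at the data node itself.
import json
import urllib.request

HOST = "http://192.168.6.21:9200"  # assumed data node address (taken from the log)

with urllib.request.urlopen(HOST + "/_nodes/stats/thread_pool") as resp:
    stats = json.loads(resp.read().decode("utf-8"))

for node in stats["nodes"].values():
    for pool in ("index", "bulk"):
        tp = node["thread_pool"][pool]
        print("%-12s %-5s queue=%-4d rejected=%d"
              % (node["name"], pool, tp["queue"], tp["rejected"]))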

Any help would be greatly appreciated, and I thank you all in advance!

############### ES CONFIG ###############

index.indexing.slowlog.threshold.index.warn: 10s
index.indexing.slowlog.threshold.index.info: 5s
index.indexing.slowlog.threshold.index.debug: 2s
index.indexing.slowlog.threshold.index.trace: 500ms

monitor.jvm.gc.young.warn: 1000ms
monitor.jvm.gc.young.info: 700ms
#monitor.jvm.gc.young.debug: 400ms

monitor.jvm.gc.old.warn: 10s
monitor.jvm.gc.old.info: 5s
#monitor.jvm.gc.old.debug: 2s
cluster.name: iislog-cluster
node.name: "VM-ELKIIS"
discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.unicast.hosts: ["192.168.6.145"]
discovery.zen.ping.timeout: 5
node.master: true
node.data: false
index.number_of_shards: 10
index.number_of_replicas: 0
bootstrap.mlockall: true
index.refresh_interval: 30
indices.memory.index_buffer_size: 50%
index.translog.flush_threshold_ops: 50000
index.store.type: mmapfs
index.store.compress.stored: true

threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100

threadpool.index.type: fixed
threadpool.index.size: 20
threadpool.index.queue_size: 100
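
A quick way to confirm a node actually picked these settings up -- a minimal
sketch, assuming Python 3 and a node on localhost:9200 (placeholder address):

# Dump the settings each node reports, to confirm the config above was applied.
import json
import urllib.request

HOST = "http://localhost:9200"  # assumed; use the address of the node to inspect

with urllib.request.urlopen(HOST + "/_nodes/settings") as resp:
    info = json.loads(resp.read().decode("utf-8"))

for node in info["nodes"].values():
    print(node["name"])
    print(json.dumps(node["settings"], indent=2, sort_keys=True))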

######################## JAVA ERRORS IN ES LOG ###########################

[2014-06-18 09:39:09,565][DEBUG][http.netty               ] [VM-ELKIIS] Caught exception while handling client http traffic, closing connection [id: 0x7561184c, /192.168.6.3:6206 => /192.168.6.21:9200]
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)


Hey.

Judging from the exception, this looks like an unstable network connection.
Are you using persistent HTTP connections? Pinging the nodes from each other
is not a problem, I guess?
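
If you are indexing document-by-document over HTTP, it may be worth trying
bulk requests over a single kept-alive connection and seeing whether the
resets disappear. A rough sketch (Python, assuming the requests library; the
index/type names are placeholders and the address is the one from your log):

# Reuse one HTTP connection (keep-alive) and index in bulk instead of
# one request per document. Index/type names below are placeholders.
import json
import requests

BULK_URL = "http://192.168.6.21:9200/_bulk"  # address taken from the log above

def bulk_index(docs, index="logstash-test", doc_type="logs", batch_size=1000):
    session = requests.Session()          # persistent connection (keep-alive)
    for start in range(0, len(docs), batch_size):
        lines = []
        for doc in docs[start:start + batch_size]:
            lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
            lines.append(json.dumps(doc))
        body = "\n".join(lines) + "\n"    # bulk body must end with a newline
        resp = session.post(BULK_URL, data=body)
        resp.raise_for_status()

bulk_index([{"message": "test %d" % i} for i in range(5000)])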

--Alex

