Deleting JDBC river after bulk requests results in "Caught exception while handling client http traffic, closing connection" messages

Hello!

I am using the JDBC river plugin (the latest version, "elasticsearch-river-jdbc-2.2.1.jar", on ES 0.90.5) over some very large views, so I wait for the bulk requests to finish, count the total number of indexed documents to check that everything is all right, and then delete the river. Everything has been working fine for some months now.
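
In case it is useful to see exactly what I do, here is a minimal sketch of my clean-up step (Python with the "requests" library; the index name, river name and expected total are placeholders, and I am assuming the usual _count endpoint and the usual way of removing a river by deleting its type under the _river index):

    # Sketch only: count the indexed documents and, if the total matches
    # what the view should produce, remove the river so it does not run
    # again on the next start of ES.
    import requests

    ES = "http://localhost:9200"
    EXPECTED = 731800  # hypothetical number of rows in the view

    count = requests.get(ES + "/myindex/_count").json()["count"]
    if count == EXPECTED:
        # A river is stored as a type under the _river index; deleting that
        # type removes its _meta/_status documents.
        requests.delete(ES + "/_river/clfb_river/")
    else:
        print("only %d of %d documents indexed, keeping the river" % (count, EXPECTED))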

I am using a cluster of two nodes sharing the same configuration and, recently, I started noticing that when I delete the river I receive an enormous number of error messages (one every one to five ms, for a few minutes) stating "Caught exception while handling client http traffic, closing connection".

The success message for the deletion of the river appears on the slave node (Goblyn), and the error messages on the master (Corsi, Tom). The errors appear to be related to bulk requests that have already finished. Is some clean-up process still going on?

It seems clear that the slave node was still connected to the master (port 9200) on all these ports (50508, 50501, etc.) when it requested the deletion of the river, and that after the deletion those connections were forcefully closed. What should I be doing here? How should I check with ES whether it is OK to delete a river? Even though the river is "oneshot", I am deleting it because otherwise it would duplicate my documents every time ES starts.
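
The best check I have come up with so far is polling: I wait until the document count stops growing before issuing the delete. A rough sketch of what I mean (same placeholder names as above; I am not sure the river's _status document carries much more information than the node the river is allocated to):

    # Rough sketch of an "is it safe to delete yet?" check.
    import time
    import requests

    ES = "http://localhost:9200"

    def index_count():
        return requests.get(ES + "/myindex/_count").json()["count"]

    previous = -1
    while True:
        current = index_count()
        if current == previous:
            break              # no new documents for a whole interval
        previous = current
        time.sleep(30)         # give the remaining bulk requests time to land

    # The _status document under the _river index mostly says which node the
    # river is allocated to, so it does not really answer the question.
    status = requests.get(ES + "/_river/clfb_river/_status").json()
    print("count stable at %d, river status: %s" % (current, status))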

 Thanks for your help!

        André Morais

P.S.: Elasticsearch rocks! And I am a huge fan of the JDBC river.


 Here are the messages:

[2014-04-09 03:19:48,378][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverMouth] bulk [7315] success [100 items] [557ms]
[2014-04-09 03:19:48,383][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverMouth] bulk [7314] success [100 items] [578ms]
[2014-04-09 03:19:48,467][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverMouth] bulk [7318] success [98 items] [244ms]
[2014-04-09 03:19:51,040][INFO ][river.jdbc ] [Goblyn] [jdbc][clfb20140409031432_river] closing JDBC river [clfb20140409031432_river/oneshot]
[2014-04-09 03:19:53,973][WARN ][http.netty ] [Corsi, Tom] Caught exception while handling client http traffic, closing connection [id: 0x02846283, /0:0:0:0:0:0:0:1:50508 => /0:0:0:0:0:0:0:1:9200]
java.io.IOException: Uma ligação existente foi forçada a fechar pelo anfitrião remoto (i.e. "An existing connection was forcibly closed by the remote host")
    at sun.nio.ch.SocketDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
    at sun.nio.ch.IOUtil.read(IOUtil.java:193)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2014-04-09 03:19:53,973][WARN ][http.netty ] [Corsi, Tom] Caught exception while handling client http traffic, closing connection [id: 0x07e53b45, /0:0:0:0:0:0:0:1:50542 => /0:0:0:0:0:0:0:1:9200]
java.io.IOException: Uma ligação existente foi forçada a fechar pelo anfitrião remoto
    at sun.nio.ch.SocketDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225)
    at sun.nio.ch.IOUtil.read(IOUtil.java:193)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2014-04-09 03:19:53,973][WARN ][http.netty ] [Corsi, Tom] Caught exception while handling client http traffic, closing connection [id: 0x8651f24b, /0:0:0:0:0:0:0:1:50501 => /0:0:0:0:0:0:0:1:9200]


I assume this error is triggered because your HTTP client closed the connection before reading the response fully. It is not related to the JDBC river.
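
To illustrate: if the client sends the DELETE for the river and closes its socket without draining the response, the node's HTTP layer can see the reset and log exactly this kind of IOException. A minimal sketch of the difference, assuming a hand-rolled Python client (the host, port and river name are only examples):

    import socket

    def delete_river_and_hang_up(host="localhost", port=9200, river="clfb_river"):
        s = socket.create_connection((host, port))
        s.sendall(("DELETE /_river/%s/ HTTP/1.1\r\n"
                   "Host: %s\r\nConnection: close\r\n\r\n" % (river, host)).encode())
        s.close()  # closed before the response is read -> reset seen on the server

    def delete_river_and_read_response(host="localhost", port=9200, river="clfb_river"):
        s = socket.create_connection((host, port))
        s.sendall(("DELETE /_river/%s/ HTTP/1.1\r\n"
                   "Host: %s\r\nConnection: close\r\n\r\n" % (river, host)).encode())
        response = b""
        while True:            # drain until the server closes the connection
            chunk = s.recv(4096)
            if not chunk:
                break
            response += chunk
        s.close()
        return response

Any proper HTTP client library already does the second version for you, so the fix is usually to make sure whatever script issues the requests actually reads (or at least waits for) the responses instead of dropping the connection.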

Jörg

