Yesterday ES stopped serving query requests: every request timed out, and each one took a very long time to execute. As a result the number of concurrent requests grew very large and CPU usage went very high. After restarting the cluster it recovered.
The log shows:
[2012-06-05 19:50:12,867][DEBUG][http.netty] [Brynocki] Caught exception while handling client http traffic, closing connection [id: 0x48c71e8c, /10.61.4.1:33808 => /10.61.14.1:9200]
java.io.IOException: Connection reset by peer
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:20)
at sun.nio.ch.IOUtil.read(IOUtil.java:247)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:66)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:372)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:246)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
My guess is that ES was taking a long time to complete each request; before a request finished, the client closed the connection and retried, so the number of in-flight requests kept growing until the cluster ran out of resources. The requests all follow the same pattern, though, and after the restart everything was fine again.
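To illustrate that suspicion only, here is a minimal sketch of a client that uses a short read timeout and blindly retries. This is hypothetical code, not the actual client; the node address, endpoint, and 2-second timeout are placeholders. Each abandoned attempt leaves its search running on the cluster and shows up in the ES log as the "Connection reset by peer" above, while the retry immediately adds one more request on top.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class RetryingSearchClient {
    public static void main(String[] args) throws Exception {
        // Placeholder query body; the real queries are shown later in the thread.
        byte[] body = "{\"query\":{\"match_all\":{}},\"size\":2000}".getBytes("UTF-8");

        for (int attempt = 1; attempt <= 5; attempt++) {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://10.61.14.1:9200/_search").openConnection();
            conn.setRequestMethod("POST");
            conn.setReadTimeout(2000);          // client gives up after 2 seconds
            conn.setDoOutput(true);

            OutputStream out = conn.getOutputStream();
            out.write(body);
            out.close();

            try {
                InputStream in = conn.getInputStream();   // blocks until ES responds
                while (in.read() != -1) { /* drain the response */ }
                in.close();
                break;                           // success, stop retrying
            } catch (SocketTimeoutException e) {
                // The search keeps running on the cluster, but the client abandons
                // the connection (seen as "Connection reset by peer" in the ES log)
                // and immediately piles on another request.
                conn.disconnect();
                System.out.println("timed out, retrying (attempt " + attempt + ")");
            }
        }
    }
}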
During that time the whole cluster could not serve query requests, yet it could still serve index requests, which is confusing.
Can you share the search request you executed?
The queries all look like this:
{"query":{"bool":{"must":[{"term":{"idType":"mobile"}},{"term":{"idValue":"18601772452"}},{"term":{"status":"1"}},{"terms":{"bangui":["违法违规","虚假欺诈","临时禁发","电话号码盗用"],"minimum_match":1}},{"range":{"createdTime":{"from":"20120214T105915Z","to":"20120613T105915Z"}}}]}},"fields":["id"],"sort":{"id":{"order":"asc"}},"size":2000}
These days ES has been running well.