Problems searching ES from another machine

Hey Guys,

Today we encountered a problem while requesting some data from our ES database
from another server in our network.

All the other server does is execute the following request:

curl -XGET 'https://elasticsearch-server:443/logstash-2014.09.10,logstash-2014.09.09/_search?pretty' -d '{
  "query": {
    "filtered": {
      "query": {
        "bool": {
          "should": [
            {
              "query_string": {
                "query": "field:*"
              }
            }
          ]
        }
      },
      "filter": {
        "bool": {
          "must": [
            {
              "range": {
                "@timestamp": {
                  "from": 1410261816133,
                  "to": 1410348216133
                }
              }
            },
            {
              "fquery": {
                "query": {
                  "query_string": {
                    "query": "logsource:(\"servername\")"
                  }
                },
                "_cache": true
              }
            }
          ]
        }
      }
    }
  },
  "highlight": {
    "fields": {},
    "fragment_size": 2147483647,
    "pre_tags": [
      "@start-highlight@"
    ],
    "post_tags": [
      "@end-highlight@"
    ]
  },
  "size": 100,
  "sort": [
    {
      "@timestamp": {
        "order": "desc",
        "ignore_unmapped": true
      }
    },
    {
      "@timestamp": {
        "order": "desc",
        "ignore_unmapped": true
      }
    }
  ]
}'

which simply counts how many events one server got over 24 hours.
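If only the count matters, the same number should also be obtainable without fetching or highlighting any hits, for example with search_type=count (just a sketch, assuming Elasticsearch 1.x, with the same hostname and filters as above):

curl -XGET 'https://elasticsearch-server:443/logstash-2014.09.10,logstash-2014.09.09/_search?search_type=count&pretty' -d '{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {
              "range": {
                "@timestamp": {
                  "from": 1410261816133,
                  "to": 1410348216133
                }
              }
            },
            {
              "fquery": {
                "query": {
                  "query_string": {
                    "query": "logsource:(\"servername\")"
                  }
                }
              }
            }
          ]
        }
      }
    }
  }
}'

The count then comes back in hits.total.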

But this request leads to some abnormal behavior in Elasticsearch, and we get a lot of the following error messages in our ES log:

[2014-09-10 13:12:32,938][DEBUG][http.netty ] [NodeName] Caught exception while handling client http traffic, closing connection [id: 0x5fd6fd9f, /:40784 :> /<IP of the ES server>:9200]
java.nio.channels.ClosedChannelException
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:433)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:128)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:99)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779)
at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:704)
at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:671)
at org.elasticsearch.common.netty.channel.AbstractChannel.write(AbstractChannel.java:248)
at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:173)
at org.elasticsearch.rest.action.support.RestResponseListener.processResponse(RestResponseListener.java:43)
at org.elasticsearch.rest.action.support.RestActionListener.onResponse(RestActionListener.java:49)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.innerFinishHim(TransportSearchQueryThenFetchAction.java:157)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.finishHim(TransportSearchQueryThenFetchAction.java:139)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.moveToSecondPhase(TransportSearchQueryThenFetchAction.java:90)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.innerMoveToSecondPhase(TransportSearchTypeAction.java:404)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:198)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onResult(TransportSearchTypeAction.java:174)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onResult(TransportSearchTypeAction.java:171)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:526)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

As soon as we activate this request from the remote server we get a whole lot of these error messages, and everything in Elasticsearch slows down pretty hard (even Kibana requests get stuck in the queue).
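For what it's worth, the queue build-up should be visible via the _cat thread pool API while this is going on (assuming Elasticsearch 1.x and plain HTTP on port 9200):

curl -XGET 'http://elasticsearch-server:9200/_cat/thread_pool?v'

The search.queue and search.rejected columns would be the interesting ones here.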

Anybody got an idea why this is happening?

Any feedback is appreciated.

Thanks


The message says that the HTTP message on port 9200 could not be
understood. Do you send non-HTTP traffic to port 9200?
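One quick way to check is to hit both endpoints directly and compare (hostnames as in your mail):

curl -i 'http://elasticsearch-server:9200/'
curl -ik 'https://elasticsearch-server:443/'

Your first request goes to https://elasticsearch-server:443 while the error shows up on port 9200, so if a proxy or TLS terminator sits in front of Elasticsearch, it might be forwarding something to 9200 that the node cannot read as plain HTTP.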

Jörg


Hi,

thanks for the response.

The remote server just runs this request:

curl -XGET -s 'http://elasticsearch-server:9200/logstash-2014.09.10,logstash-2014.09.09/_search?pretty' -d '{
  "facets": {
    "terms": {
      "terms": {
        "field": "_type",
        "size": 10,
        "order": "count",
        "exclude": []
      },
      "facet_filter": {
        "fquery": {
          "query": {
            "filtered": {
              "query": {
                "bool": {
                  "should": [
                    {
                      "query_string": {
                        "query": "field:*"
                      }
                    }
                  ]
                }
              },
              "filter": {
                "bool": {
                  "must": [
                    {
                      "range": {
                        "@timestamp": {
                          "from": "1410271471958",
                          "to": "now"
                        }
                      }
                    },
                    {
                      "fquery": {
                        "query": {
                          "query_string": {
                            "query": "logsource:(\"Servername\")"
                          }
                        },
                        "_cache": true
                      }
                    }
                  ]
                }
              }
            }
          }
        }
      }
    }
  },
  "size": 0
}'

and this curl request looks like an HTTP request, doesn't it?
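Running it once more with -v should show exactly what goes over the wire in both directions, which would make it obvious whether anything other than plain HTTP reaches port 9200 (just a minimal probe, same hostname as above):

curl -v -XGET 'http://elasticsearch-server:9200/logstash-2014.09.10,logstash-2014.09.09/_search?pretty' -d '{ "size": 0, "query": { "match_all": {} } }'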

Thanks.

