Unable to query a specific number on a specific field, so confused

I'm querying from Kibana, looking at nginx access logs processed by Logstash and stored in Elasticsearch. In my implementation, the HTTP response code of each log line is in the "status" field.

I get exactly what I'm looking for with the following queries from the Discover page:
status:200
status:[200 TO 499]
status:[501 TO 99999999] // no results found, which isn't surprising, but the query executed successfully

I get "Discover: An error occurred with your request. Reset your inputs and try again." with the following queries:
status:500
status:[499 TO 501]
status:* NOT status:200 NOT status:204 NOT status:301 NOT status:304 NOT status:404

That last one is my favorite, as I can remove or change any of the HTTP response codes above and get the expected results. I know I'm generating 500s, and even if I weren't, I'd still expect a successful query with no results returned. I have a dev environment very, very similar to prod in which searching for 500s works just fine. My production environment does see a good amount of traffic, ~4 million or more requests per hour. My search is limited to the past 15 minutes, and my implementation creates a new index for each day. If it were a scaling issue, I would imagine that searching for other response codes would also fail, especially with that chained NOT query.
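For reference, that chained NOT query can also be written in grouped form (just a syntactic variant under Lucene query syntax; it should behave identically):

status:* NOT status:(200 OR 204 OR 301 OR 304 OR 404)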

I would greatly appreciate any insight anyone is able to offer!

That's pretty odd. Could you open your browser's developer tools to the network tab while running one of the bad queries, then copy the request/response to Elasticsearch and paste it here?
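If it's easier to reproduce outside the browser, something like this should run the same range query directly (a minimal sketch, assuming Elasticsearch is on localhost:9200 and your daily index follows the logstash-YYYY.MM.DD naming pattern):

curl -XGET 'http://localhost:9200/logstash-2016.03.11/_search?pretty' -d '{
  "query": { "range": { "status": { "gte": 500, "lte": 500 } } },
  "size": 5
}'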


message: "Request to Elasticsearch failed: {"error":"SearchPhaseExecutionException[Failed to execute phase [query_fetch], all shards failed; shardFailures {[bVAqwIYNQ9qxehOasncFiQ][logstash-2016.03.11][0]: FetchPhaseExecutionException[[logstash-2016.03.11][0]: query[filtered(status:[500 TO 500])->BooleanFilter(+cache(@timestamp:[1457663700000 TO 1457664000000]))],from[0],size[500],sort[<custom:"@timestamp": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@129e7841>!]: Fetch Failed [Failed to highlight field [@message.raw]]]; nested: RuntimeException[org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 33013]; nested: MaxBytesLengthExceededException[bytes can be at most 32766 in length; got 33013]; }]"

I think I needed to be told to look at the errors when asking about searching for my errors (Xzibit, get in here). The message field is too long. We dissect it into more meaningful fields in Logstash anyway, so I'm guessing I'll be able to set @message to an empty string after grokking out what I want, and it'll hum along just fine.
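Roughly what I have in mind for the filter block (a sketch; the grok pattern and the @message source field are assumptions based on my setup):

filter {
  grok {
    # pull out the fields we actually query on (status, etc.)
    match => { "@message" => "%{COMBINEDAPACHELOG}" }
  }
  mutate {
    # blank out the long original line so its not_analyzed .raw copy
    # can't exceed Lucene's 32766-byte term limit during highlighting
    replace => { "@message" => "" }
  }
}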

Apparently [linked issue] addressed this, and it has been marked as resolved as of Nov 2015.

Hmm, if that is indeed the same issue, you might try setting doc_table:highlight to false in Settings -> Advanced: https://github.com/elastic/kibana/pull/5197

Based on the comments on the issue you linked, it seems like the assumption is that Elasticsearch fixed the issue, though, so it might be worth adding a comment there with the version of ES you're using.

I was using 1.5.2 (not sure how to edit the original post). I think that feature didn't exist until the release after the resolution of the issue I linked, as it is not listed in that menu for me.

That looks like an Elasticsearch version number. What version of Kibana are you using? I'm guessing 4.1 or below, since you're not on ES 2.x. At this point, 4.1 and below are only receiving security patches, so upgrading might be your only solution other than modifying the data you're ingesting, as you already mentioned.

You're absolutely right, though I thought your 'so it might be worth adding a comment there with the version of ES you're using' was asking for the Elasticsearch version number, so I provided it (while talking about Kibana features; that's not confusing, right? My bad). I upgraded Kibana to 4.1 as well, and everything is working fine now. Thank you for your help.

Great, glad to help! Sorry for the miscommunication.