Sporadic high search.queue

We're running an Elasticsearch 1.4.7 setup with two storage nodes (64 GB RAM, 32 GB for ES) and two query nodes (32 GB RAM, 16 GB for ES). We have monitoring for the various counters available in the _cat API. Sometimes we notice an increase in the search.queue value for one of the storage nodes. At the same time, the server itself doesn't seem particularly busy (according to the atop logs), and we can't measure any impact on end users.

Is there a way to determine what kind of queries are queued? I'm trying to work out whether I need to optimize something, or whether the counter is not worth monitoring.
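
For reference, this is roughly how we sample the counters today. A minimal sketch, assuming the cluster is reachable at http://localhost:9200 and using the search-pool column names that _cat/thread_pool exposes:

```python
# Poll the search thread pool counters for every node and print any node
# whose queue or rejected count is non-zero.
import time
import urllib.request

ES_URL = "http://localhost:9200"  # adjust to your cluster
CAT_URL = ES_URL + "/_cat/thread_pool?h=host,search.active,search.queue,search.rejected"

def poll_search_pool():
    # one line per node, whitespace-separated columns in the order requested above
    with urllib.request.urlopen(CAT_URL) as resp:
        body = resp.read().decode("utf-8")
    for line in body.strip().splitlines():
        host, active, queue, rejected = line.split()
        if int(queue) > 0 or int(rejected) > 0:
            print("%s active=%s queue=%s rejected=%s" % (host, active, queue, rejected))

if __name__ == "__main__":
    while True:          # sample every 10 seconds
        poll_search_pool()
        time.sleep(10)
```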

I'd be more concerned with rejections than with queue size.

What I've done in the past is to log slow queries in the application that issues them. You won't know exactly what is making them slow, but queueing can be part of it. If those logs stay quiet and you don't see any rejections, it's pretty safe to ignore the queue depth.
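
A minimal sketch of what I mean, assuming you send the search requests over HTTP yourself; the 500 ms threshold is an arbitrary example value:

```python
import json
import logging
import time
import urllib.request

log = logging.getLogger("slow_search")
SLOW_THRESHOLD_S = 0.5  # example threshold: log anything slower than 500 ms

def timed_search(url, query):
    """POST a search body and log it if the round trip is slow."""
    data = json.dumps(query).encode("utf-8")
    start = time.time()
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        result = json.loads(resp.read().decode("utf-8"))
    elapsed = time.time() - start
    if elapsed > SLOW_THRESHOLD_S:
        # elapsed is client wall-clock time; result["took"] is what the cluster
        # reports for the search itself, so comparing the two helps separate
        # cluster-side slowness from everything around it.
        log.warning("slow search %.3fs (took=%sms): %s", elapsed, result.get("took"), data)
    return result
```

Replace the plain urllib call with whatever client you already use; the point is just to time the call where it is made and keep the query body in the log line.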

In my specific case, I can't be sure where the queries come from. There are multiple applications, some of which we can't modify directly.
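
For now I'm thinking of watching the rejected counters cluster-side instead. A rough sketch of what I mean, again assuming the cluster is reachable at http://localhost:9200:

```python
import json
import urllib.request

ES_URL = "http://localhost:9200"  # adjust to your cluster

def search_pool_rejections():
    """Return {node_name: cumulative rejected count} for the search thread pool."""
    with urllib.request.urlopen(ES_URL + "/_nodes/stats/thread_pool") as resp:
        stats = json.loads(resp.read().decode("utf-8"))
    return {node["name"]: node["thread_pool"]["search"]["rejected"]
            for node in stats["nodes"].values()}

if __name__ == "__main__":
    for name, rejected in sorted(search_pool_rejections().items()):
        print("%-25s search.rejected=%d" % (name, rejected))
```

The counters are cumulative since node start, so what matters is whether the value ever increases between samples.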

I assume that search queue rejections are logged in the same way that bulk queue rejections are logged?

Good luck. That is a hard place to be.

I expect so myself, but I don't remember testing it. I'd test it locally just to be sure.
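
If you want to test it, a rough way to force rejections on a throwaway local node is to shrink the search pool (from memory the 1.x settings are threadpool.search.size: 1 and threadpool.search.queue_size: 1 in elasticsearch.yml, so double-check), create a small index named, say, test, and then fire a burst of concurrent searches at it and compare what the client sees with what the node log records:

```python
import threading
import urllib.error
import urllib.request

# match-all URI search against the small local test index
SEARCH_URL = "http://localhost:9200/test/_search?q=*:*"

def fire():
    try:
        body = urllib.request.urlopen(SEARCH_URL).read().decode("utf-8")
        if "rejected" in body:
            # rejections can also surface as shard failures inside an HTTP 200 response
            print("response contains a rejection:", body[:200])
    except urllib.error.HTTPError as e:
        print("HTTP error %d: %s" % (e.code, e.read()[:200]))

# fire a burst of concurrent searches so the tiny queue overflows
threads = [threading.Thread(target=fire) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```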