Capturing all the queries


(Nemo) #1

Hi All,
Is there any way to capture all the incoming queries in Elasticsearch? I tried playing around with the log settings in elasticsearch.yml, but I am not seeing any queries getting logged. Can someone help me with this?


(Mark Walkom) #2

You need to put some kind of proxy in front of ES to do this; there is no built-in functionality.


(Mike Simos) #3

You can edit elasticsearch.yml; there is a section called Slow Log. If you set the thresholds very low, you can see the queries being executed:

################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms
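
For example, to capture every query you can drop the thresholds to zero so that everything is logged at INFO level. A sketch for elasticsearch.yml (the fetch threshold is optional and only needed if you also want fetch-phase logging):

```yaml
################################## Slow Log ##################################
# Log every query/fetch at INFO level by setting the threshold to zero.
# Note: this produces one entry per shard and can be very verbose in production.
index.search.slowlog.threshold.query.info: 0s
index.search.slowlog.threshold.fetch.info: 0s
```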

(Nemo) #4

Thank you very much for your reply. I changed the info threshold to 0s to get all the queries that have been executed, but I am facing two problems:

  1. All aggregations are in binary, so I am unable to see what exactly the aggregation request is.
  2. Each request logs five entries, one for each shard (there are 5 shards in my setup).

Is there any workaround for this?

Thanks,


(Mike Simos) #5

Hi,

Offhand I don't know why the aggregation request is in binary; in my experience the JSON query is written as ASCII text. What version of Elasticsearch are you using, and which API and protocol (Node, Transport, REST) are you using? Also, the slow log will show one query entry for each shard.


(Nemo) #6

Hi Mike,
I am using Elasticsearch 1.7.1 and I am querying through the Java client.

Below is the log that I got for a single query.

[2015-12-02 14:35:36,478][INFO ][index.search.slowlog.query] [node1] [person][2] took[1.6ms], took_millis[1], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"size":0,"query":{"filtered":{"filter":{"bool":{"must":{"range":{"entry_time":{"from":1449009336470,"to":1449095736470,"include_lower":true,"include_upper":true}}}}}}},"_source":{"includes":["name","address","title","entry_time","company"],"excludes":[]},"sort":[{"entry_time":{"order":"desc","unmapped_type":"date"}}],"aggregations_binary":"eyJlbnRyeV90aW1lIjp7ImRhdGVfaGlzdG9ncmFtIjp7ImZpZWxkIjoiZW50cnlfdGltZSIsImludGVydmFsIjoiMWgiLCJtaW5fZG9jX2NvdW50IjowLCJvcmRlciI6eyJfa2V5IjoiYXNjIn0sImV4dGVuZGVkX2JvdW5kcyI6eyJtaW4iOjE0NDkwMDkzMzY0NzAsIm1heCI6MTQ0OTAwOTMzNjQ3MH19LCJhZ2dyZWdhdGlvbnMiOnsiY29sb3IiOnsidGVybXMiOnsic2l6ZSI6MCwic2hhcmRfc2l6ZSI6MCwic2NyaXB0IjoiKGRvY1snbG9jYWxfY29sb3InXS52YWx1ZS5jb21wYXJlVG8oIGRvY1sncmVtb3RlX2NvbG9yJ10udmFsdWUpIDwgMCApID8gKGRvY1snbG9jYWxfY29sb3InXS52YWx1ZSArICAnOicgICsgZG9jWydyZW1vdGVfY29sb3InXS52YWx1ZSApIDogKGRvY1sncmVtb3RlX2NvbG9yJ10udmFsdWUgKyAgJzonICArIGRvY1snbG9jYWxfY29sb3InXS52YWx1ZSApIiwib3JkZXIiOnsibG9zc19wZXJjZW50YWdlIjoiZGVzYyJ9fSwiYWdncmVnYXRpb25zIjp7ImppdHRlciI6eyJhdmciOnsiZmllbGQiOiJqaXR0ZXIifX0sImxvc3NfcGVyY2VudGFnZSI6eyJhdmciOnsiZmllabciOiJsb3NzX3BlcmNlbnRhZ2UifX0sImxhdGVuY3kiOnsiYXZnIjp7ImmnoWxkIjoibGF0ZW5jeSJ9fX19fX19"}], extra_source[], 
.
.
.
[2015-12-02 14:35:36,480][INFO ][index.search.slowlog.query] [node1] [person][4] took[4.7ms], took_millis[4], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"size":0,"query":{"filtered":{"filter":{"bool":{"must":{"range":{"entry_time":{"from":1449009336470,"to":1449095736470,"include_lower":true,"include_upper":true}}}}}}},"_source":{"includes":["name","address","title","entry_time","company"],"excludes":[]},"sort":[{"entry_time":{"order":"desc","unmapped_type":"date"}}],"aggregations_binary":"eyJlbnRyeV90aW1lIjp7ImRhdGVfaGlzdG9ncmFtIjp7ImZpZWxkIjoiZW50cnlfdGltZSIsImludGVydmFsIjoiMWgiLCJtaW5fZG9jX2NvdW50IjowLCJvcmRlciI6eyJfa2V5IjoiYXNjIn0sImV4dGVuZGVkX2JvdW5kcyI6eyJtaW4iOjE0NDkwMDkzMzY0NzAsIm1heCI6MTQ0OTAwOTMzNjQ3MH19LCJhZ2dyZWdhdGlvbnMiOnsiY29sb3IiOnsidGVybXMiOnsic2l6ZSI6MCwic2hhcmRfc2l6ZSI6MCwic2NyaXB0IjoiKGRvY1snbG9jYWxfY29sb3InXS52YWx1ZS5jb21wYXJlVG8oIGRvY1sncmVtb3RlX2NvbG9yJ10udmFsdWUpIDwgMCApID8gKGRvY1snbG9jYWxfY29sb3InXS52YWx1ZSArICAnOicgICsgZG9jWydyZW1vdGVfY29sb3InXS52YWx1ZSApIDogKGRvY1sncmVtb3RlX2NvbG9yJ10udmFsdWUgKyAgJzonICArIGRvY1snbG9jYWxfY29sb3InXS52YWx1ZSApIiwib3JkZXIiOnsibG9zc19wZXJjZW50YWdlIjoiZGVzYyJ9fSwiYWdncmVnYXRpb25zIjp7ImppdHRlciI6eyJhdmciOnsiZmllbGQiOiJqaXR0ZXIifX0sImxvc3NfcGVyY2VudGFnZSI6eyJhdmciOnsiZmllabciOiJsb3NzX3BlcmNlbnRhZ2UifX0sImxhdGVuY3kiOnsiYXZnIjp7ImmnoWxkIjoibGF0ZW5jeSJ9fX19fX19"}], extra_source[], 

So I am sure that no other queries were made, and you can see the aggregation is in binary. I also noticed that queries are not always logged for every shard; the count changes with each subsequent request.
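
As a side note, the aggregations_binary value in the slow log is base64-encoded JSON, so it can be decoded offline to inspect the aggregation. A minimal standalone sketch (the sample payload here is made up for illustration, not taken from the log above):

```python
import base64
import json

def decode_aggregations_binary(b64: str) -> dict:
    """Decode a base64-encoded aggregations_binary value back into its JSON form."""
    return json.loads(base64.b64decode(b64))

# Made-up example payload (an avg aggregation), encoded the same way the log does:
sample = base64.b64encode(b'{"jitter":{"avg":{"field":"jitter"}}}').decode("ascii")
print(decode_aggregations_binary(sample))  # → {'jitter': {'avg': {'field': 'jitter'}}}
```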

Please let me know if you need more information.

Thanks,


(Nemo) #7

Adding to the above: when I set the query through setExtraSource(), I am able to see the aggregation as plain text, but when I send the query and aggregation through setAggregations() and setQuery(), I see the above problem, i.e. the aggregation in binary.


(system) #8