My goal is to capture and store, in Elasticsearch, all queries run against Elasticsearch. I am mimicking the scenario discussed at monitoring-the-search-queries. I am using version 7.9.2 of Elasticsearch, Logstash, Kibana, and Packetbeat. The issue I'm having right now is that the response time is not being captured: I do not see a responsetime field in the new Elasticsearch index that is created. I'm guessing/suspecting that either (1) Packetbeat is not configured correctly and is not capturing that information, or (2) the Logstash configuration file needs to be corrected. I'm posting my packetbeat.yml and logstash.conf files below, in the hopes that someone can point me in the right direction.
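For reference, this is how I have been checking whether the field made it into the new index (a quick look at the field mapping; I am assuming the default logstash-* index name created by the Logstash elasticsearch output):

curl -s 'http://<ES MASTER NODE>:9200/logstash-*/_mapping/field/responsetime?pretty'

The field does not appear in the mapping, which is what makes me think it never reaches Elasticsearch in the first place.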
Logstash - sniff_search.conf
input {
  beats { port => 5044 }
}

filter {
  if "search" in [request] {
    grok { match => { "request" => ".*\n\{(?<query_body>.*)" } }
    grok { match => { "path" => "\/(?<index>.*)\/_search" } }
    if ![index] {
      mutate { add_field => { "index" => "All" } }
    }
    mutate { update => { "query_body" => "{%{query_body}" } }
  }
}

output {
  if "search" in [request] and "ignore_unmapped" not in [query_body] {
    elasticsearch { hosts => "<ES MASTER NODE>:9200" }
  }
}
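One thing I have been wondering, but have not verified, is whether Packetbeat 7.x even uses the responsetime field name any more, or whether the timing now arrives under the ECS event.duration field (in nanoseconds). If that is the case, I assume something along these lines inside the filter block would be needed to expose it as responsetime in milliseconds. This is only a sketch of what I am considering, not something that has worked for me yet:

    # hypothetical: if the timing arrives as ECS event.duration (nanoseconds),
    # copy it into a millisecond responsetime field
    if [event][duration] {
      ruby {
        code => "event.set('responsetime', event.get('[event][duration]').to_f / 1000000.0)"
      }
    }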
packetbeat.yml
packetbeat.interfaces:
  device: any
  type: af_packet

packetbeat.flows:
  timeout: 30s
  period: 10s

packetbeat.protocols:
- type: http
  include_body_for: ["application/json", "x-www-form-urlencoded"]
  ports: [9200]
  send_request: true

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:
  host: "<KIBANA IP>:5601"

output.logstash:
  hosts: ["<LOGSTASH IP>:5044"]

processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
- add_docker_metadata: ~
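To try to rule out option (1), my plan is to temporarily point Packetbeat at the console instead of Logstash and inspect a raw event for any timing field. This is just a debugging sketch; only one output can be enabled at a time, so output.logstash would have to be commented out while testing:

    # temporary debugging output - disable output.logstash while this is enabled
    output.console:
      pretty: true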