Hey there,
to investigate possible issues I collect slowlog entries. The `query` field is obviously very important, but over the years I haven't found an easy way to analyze it. Generally I run a Logstash instance that reads messages from a pub/sub queue and applies a `grok` filter to make them easier to read.
This approach easily goes out of date with respect to the query: whenever the source query changes, I end up with a `_grokparsefailure` tag.
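
Roughly what I have today (a minimal sketch: the pub/sub input is replaced by `stdin`, and the grok pattern is only illustrative, since the slowlog line layout varies by Elasticsearch version):

```
# Minimal sketch of the grok approach described above.
input {
  stdin {}
}

filter {
  grok {
    # Pull out the timing and keep the raw query JSON as a single string.
    # Placeholder pattern; adjust it to your actual slowlog line format.
    match => {
      "message" => "took_millis\[%{NUMBER:took_millis:int}\].*source\[%{GREEDYDATA:query_source}"
    }
  }
}

output {
  stdout { codec => rubydebug }
}
```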
Do you have any suggestions?
For example, using an ingest pipeline would give me the chance to use the `json` processor, but it makes the ingested document harder to read.
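
This is the kind of thing I mean (a minimal sketch: the pipeline name `slowlog-query` and the field names `query_source` / `query_parsed` are just placeholders):

```
PUT _ingest/pipeline/slowlog-query
{
  "description": "Sketch: expand the raw slowlog query string into a structured object",
  "processors": [
    {
      "json": {
        "field": "query_source",
        "target_field": "query_parsed",
        "ignore_failure": true
      }
    }
  ]
}
```

The parsing itself works, but the resulting nested `query_parsed` object is exactly the part I find hard to read.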