I have a question about the metrics filter. I experimented with it some time ago, but I was not able to make it fit my needs. Maybe I just made a mistake.
I have a log with a timestamp, and the timestamp increases steadily throughout the log.
This log is about 8 GB per day.
I do not need to access each individual log message afterwards; I am only interested in some per-minute metrics. I would like to save storage where I do not really need the raw messages.
Each log line contains the following information:
- timestamp of log message
- processing time of the message
- type of the message
- type field (created by Filebeat)
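To make this concrete, a log line looks roughly like this (this is an illustrative placeholder, not my exact format):

```
2019-03-14T10:23:07.123Z proc_time=42ms msg_type=ORDER_CREATED
```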
Now I would like to create the following metrics:
- avg / fixed percentiles of the processing time, grouped by message type, per minute, based on the timestamp of the log message
- count of messages, grouped by message type, per minute, based on the timestamp of the log message
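What I have in mind with the metrics filter is something along these lines (a rough sketch; the field names `msg_type` and `proc_time` are placeholders for my actual fields):

```
filter {
  metrics {
    # one counter and one timer per message type
    meter => [ "count.%{msg_type}" ]
    timer => { "time.%{msg_type}" => "%{proc_time}" }
    percentiles => [ 50, 90, 99 ]
    flush_interval => 60
    add_tag => [ "metric" ]
  }
}
```

But as far as I understand, the metrics filter windows on the time events flow through Logstash, not on the log timestamp, which seems to conflict with my first requirement below.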
I have the following requirements, which are important to me:
- The metric must be based on the log time, not on the time the message is processed by Logstash. A downtime of the ELK stack (with delayed processing) must not influence the metric values.
- The metric should be emitted even if no messages of a specific type have been received since Logstash started; I expect a zero value then. I am able to whitelist the message types. (But I suspect this cannot be fulfilled together with my first requirement.)
What are your thoughts on this? Is the metrics filter the right way to achieve this goal?
How can I do it?