I have some questions about the metrics filter plugin.
Does it respect the timestamp of the messages? Meaning, if I feed it events from the logs of different servers, and one server has an issue so its log shipment is delayed: will the old metrics be updated, or will the metrics from the delayed logs be aggregated under the real-time Logstash processing timestamp?
Same question, different scenario. My one and only Logstash instance goes down and is restarted. It's now 10pm and the log message is from 8pm. Will this create a metric for 8pm or for 10pm?
If metrics also work on old datasets, is it possible to calculate metrics for events which are already in Elasticsearch? I have the following use case in mind:
- We currently build some dashboard aggregations directly on raw log lines. That is, I count incoming messages in an index where all messages are stored, without using a metric yet. For long-term storage I would only need a metric with min, max, and avg counts per 1 or 10 minutes, for example; that would reduce the storage footprint drastically. Unfortunately, we often don't know which KPIs we will need to keep long-term when we first implement the log parsing in Logstash, and over time the requirement comes up to have such a metric KPI for long-term storage.
Is it possible to create different metrics based on if conditions? Meaning, when I am parsing an event in Logstash and find out the event is, for example, a login, logout, dataRequestA, etc., can I say:
if login, then count it in metric myMetric_LoginRequest; if logout, then count it in metric myMetric_LogoutRequest; ...
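In config terms, I imagine something like the sketch below, using Logstash conditionals around separate metrics filters (the field name `[event_type]` and the meter names are just made up for illustration):

```
filter {
  if [event_type] == "login" {
    metrics {
      # count login events under their own meter
      meter => ["myMetric_LoginRequest"]
      add_tag => ["metric"]
    }
  } else if [event_type] == "logout" {
    metrics {
      # count logout events under their own meter
      meter => ["myMetric_LogoutRequest"]
      add_tag => ["metric"]
    }
  }
}
```

Would separate metrics filter instances like this each keep their own independent counters, or is there a better way to achieve this?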
When I played with the filter a long time ago, the metric was only initialized with the first countable event. Is it possible to initialize all known metrics with zero?
What happens if I use multiple Logstash instances parsing the same log types? We are using Redis as a message broker that all Logstash instances are connected to. Do all Logstash instances share the same metric?