Calculating the sending/processing delay of events

We currently have an ELK cluster that supports multiple tenants, and each tenant ships their logs to our Logstash cluster via Filebeat. We try to provide tenants with stats on the average processing delay for their pipeline (each tenant has their own lumberjack input), but this approach isn't completely accurate and requires additional setup.

We would like a more accurate way to provide these stats that could be built into each event shipped by Filebeat. For example, say there were an option "enable_event_stats: true" that could be specified in the prospector YAML config; each event could then have a "_filebeat_sent_time" field added with a timestamp corresponding to the moment just before it was sent. Once the data reaches Logstash, we could simply calculate the delay between the current time and the "_filebeat_sent_time" timestamp on the event.
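To make the request concrete, here is roughly what the proposed config might look like. Note that "enable_event_stats" does not exist in Filebeat today; both the option name and the "_filebeat_sent_time" field are hypothetical, and the paths shown are just placeholders:

```yaml
# Hypothetical prospector config -- sketch of the proposed feature,
# not a real Filebeat option.
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app/*.log
    # Proposed: stamp each event with a "_filebeat_sent_time" field
    # just before it is sent over the wire.
    enable_event_stats: true
```

With that field in place, a Logstash filter could compute `Time.now - _filebeat_sent_time` per event to get the shipping delay.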

Is this a feature that could be added? It would simplify things in our case and I'm sure others would take advantage of this if it were available.


Doesn't Filebeat add a @timestamp field when it reads the file? If so, you can subtract that timestamp from the current timestamp on the Logstash side to get the delay. You'll probably need a ruby filter for this.
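A minimal sketch of such a ruby filter, assuming a Logstash version with the event get/set API (5.0+); the `ingest_delay_seconds` field name is just my choice:

```
filter {
  ruby {
    # @timestamp was set by Filebeat when it read the line, so the
    # difference between "now" and @timestamp is the total delay from
    # read time to this point in the pipeline.
    code => "event.set('ingest_delay_seconds', Time.now.to_f - event.get('@timestamp').to_f)"
  }
}
```

Keep in mind this measures wall-clock difference, so clock skew between the Filebeat host and the Logstash host will show up in the result.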

Thank you @magnusbaeck, I didn't realize that Filebeat added a '@timestamp' field to events the prospector picks up. From what I can tell from looking at the Logstash source code, this line indicates Logstash only adds its own '@timestamp' field if that value isn't already set?

Right, Filebeat (and all other Beats) sends events with @timestamp already set. It's the time the event was originally acquired (the time Filebeat read the line from the file). Since @timestamp is already set, Logstash will not overwrite it.