I am working with the Logstash UDP input plugin and the Elasticsearch output plugin, with persistent queues enabled.
I noticed that the @timestamp I receive from Logstash in Elasticsearch is always the time of initial processing, even when an internet outage forces Logstash to retry after x number of seconds.
Is there any way to ingest the timestamp of the last delivery attempt instead?
That would give me metrics on the lag between when data is accepted and when it is actually sent to Elasticsearch.
I'm not sure what the message looks like in your case, but you can always overwrite @timestamp with the date filter. Normally, the timestamp referenced in Kibana should be the time from the source, if that field exists in the message.
If it does not exist, you can copy @timestamp into [@metadata][timestamp], then copy it back to @timestamp.
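A minimal filter sketch of that idea (the `log_time` field name is an assumption; adjust the field and pattern to your actual message format):

```
filter {
  # Keep the original event time in [@metadata] - metadata fields survive
  # the filter stage but are not indexed into Elasticsearch.
  mutate {
    copy => { "@timestamp" => "[@metadata][timestamp]" }
  }

  # If the message carries its own time field (assumed here to be
  # "log_time" in ISO8601), parse it into @timestamp with the date filter.
  date {
    match  => [ "log_time", "ISO8601" ]
    target => "@timestamp"
  }
}
```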
@Rios, as I said, the @timestamp I receive from Logstash in Elasticsearch is the time of initial processing, even when an internet outage forces Logstash to retry after x number of seconds.
This is my problem.
I have a timestamp, but it isn't accurate if an internet outage forces the Elasticsearch output plugin to retry the delivery.
Logstash will set the timestamp when it creates the event. You can override that with a date filter. It doesn't change the timestamp if it has to hold the event in a queue.
If you want the timestamp to reflect the time when an event is indexed then I would suggest having elasticsearch add the timestamp, not logstash. This thread might help with that.
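One way to do that is with an ingest pipeline that stamps each document at index time using `_ingest.timestamp`. A minimal sketch (the pipeline name and target field are illustrative):

```
PUT _ingest/pipeline/indexed-at
{
  "description": "Record the time Elasticsearch actually indexed the document",
  "processors": [
    {
      "set": {
        "field": "indexed_at",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
```

You can then point the elasticsearch output at it with its `pipeline` option, e.g. `pipeline => "indexed-at"`, so the indexed-at time reflects the successful attempt rather than when Logstash first created the event.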