I was previously using the elapsed filter in my Logstash config to calculate the time difference between a start and an end event. Recently I replaced the elapsed filter with the elasticsearch filter plugin and a Painless script to do the same calculation.
However, after our infrastructure team deployed the new Logstash config, some events in Kibana still show the old elapsed_time field and its associated tags, while other events show the new fields I recently introduced!
The config was deployed on two Logstash hosts. I checked thoroughly and couldn't find any config, including mine, that still references the elapsed filter.
Data flow is like this:
filebeat -> haproxy VIP -> 4 haproxy -> 2 shipper -> 4 kafka -> 2 logstash instances where my code is configured -> elasticsearch
Our ELK and Logstash version is 7.6.2.
The Logstash service was restarted, but we haven't rebooted the servers.
Are you aware of any bug that could make Logstash run old code from a cache? How can we resolve this issue?
When you added the new fields, did you refresh the index pattern for that particular index?
The old documents in the index are still there, unchanged; only new documents will follow your new config.
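One way to tell old documents from freshly indexed ones is to count how many documents still carry the old field. A hedged sketch using the _count API and an exists query (the localhost:9200 host and the filebeat-* index pattern are assumptions; adjust to your cluster and index names):

```shell
# Count documents that still have the old elapsed_time field
# (host and index pattern are assumptions)
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/filebeat-*/_count' \
  -d '{"query": {"exists": {"field": "elapsed_time"}}}'
```

If that count stops growing after the deployment time, the events with the old field are just historical documents rather than evidence that old code is still running.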
Other records contain only the old fields: the elapsed field and its tags.
I did a "grep -ri elaps <elk install/config folder>" and couldn't find any Logstash config, including other projects', that uses the plugin. I could only find the elapsed plugin in the cache folder; maybe those files were not removed automatically after we uninstalled the plugin.
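To rule out a half-removed plugin, it may help to check both the installed plugin list and any leftover gem files. A minimal sketch, assuming a default package install under /usr/share/logstash (adjust LOGSTASH_HOME to your layout):

```shell
# Install path is an assumption; override with LOGSTASH_HOME=... if needed
LOGSTASH_HOME="${LOGSTASH_HOME:-/usr/share/logstash}"

# 1. Is the elapsed filter still registered as an installed plugin?
"$LOGSTASH_HOME/bin/logstash-plugin" list 2>/dev/null | grep -i elapsed \
  || echo "elapsed plugin not listed"

# 2. Are there leftover gem files anywhere under the install tree?
find "$LOGSTASH_HOME" -iname '*elapsed*' -print 2>/dev/null || true
```

Leftover .gem files in the cache should be inert on their own: Logstash only loads plugins that appear in the installed list, so stale cache files would not by themselves execute old filter code.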
I think a similar problem occurred in version 5 when teams were using multiple pipelines, so old code could still be present in the 'combined' pipeline, but I couldn't find a similar complaint raised for version 7.
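On the multiple-pipelines point: in 7.x, each pipeline loads whatever its pipelines.yml entry points at, and every *.conf file matched by a path.config glob is concatenated into that one pipeline, so a single overlooked file containing the old elapsed filter would explain a mix of old and new events. A sketch of the two checks, with paths that are assumptions for a default package install:

```shell
# A minimal pipelines.yml written to /tmp for illustration; in a real setup,
# inspect /etc/logstash/pipelines.yml and every file each path.config glob matches
cat <<'EOF' > /tmp/pipelines.yml
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
EOF
cat /tmp/pipelines.yml

# Validate the effective, concatenated config without starting the service:
#   /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
```

Running the validation on each of the two Logstash hosts would also surface any drift between them, which could explain why only some events carry the new fields.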