If I add a new field to my logstash.conf, will my old indexed logs recognize it?

Hello Everyone!

Doing the endless comparison between Splunk and Elastic. With Splunk you can extract new fields at any time from logs that are already indexed, which is very helpful: if you didn't consider a field when you first parsed the logs, you can always extract it later with regular expressions and it will appear in the search bar.

Reading the Elastic docs, Kibana has a way to do this with scripted fields, but that seems to be a pain in the (you know) and doesn't look that easy.

We already know how Logstash works and how to extract fields, so my question is:

If we add a new field to a running logstash.conf and restart Logstash, will the old logs that are already indexed recognize the new field, or will only the new data coming into Elasticsearch have it?
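For context, by "adding a new field" I mean something like this grok filter (the client_ip field and the pattern are just an example):

```
filter {
  grok {
    # example: extract an IP that appears as "client=10.0.0.1" in the raw line
    match => { "message" => "client=%{IP:client_ip}" }
  }
}
```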

Best Regards!

Elasticsearch does not automatically reindex old data when new fields are added to its mapping. So, when you change your Logstash pipeline to send an additional field, this will only take effect for new data. If the data can be extracted from older documents (e.g. the full message field was stored in _source), it could be extracted with an update-by-query operation, e.g. via a script or ingest pipeline. Note, though, that this is a potentially expensive operation and rarely performed when there is a large number of log documents. It's usually better to just have the additional field present in newer log documents and rely on older data being removed through index lifecycle management or similar mechanisms.
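If you do decide to backfill, a minimal sketch of the ingest-pipeline variant could look like the following. The pipeline name, index pattern, and client_ip field are made up for illustration; run it from Kibana Dev Tools:

```
# 1) Define an ingest pipeline that extracts the field the same way
#    the new Logstash grok filter would.
PUT _ingest/pipeline/extract-client-ip
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["client=%{IP:client_ip}"],
        "ignore_failure": true
      }
    }
  ]
}

# 2) Run it over the old documents. conflicts=proceed keeps the job going
#    if a document changes mid-run; the query limits the update to
#    documents that have a message but not the new field yet.
POST logstash-*/_update_by_query?pipeline=extract-client-ip&conflicts=proceed
{
  "query": {
    "bool": {
      "must":     [ { "exists": { "field": "message" } } ],
      "must_not": [ { "exists": { "field": "client_ip" } } ]
    }
  }
}
```

Since the ingest grok processor uses the same pattern syntax as the Logstash grok filter, you can usually reuse the pattern from your logstash.conf.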

Roger that, Magnus.

Thanks a lot for your answer.

Best Regards!
