Sorry for the wordy title.
We need Logstash to merge and drop fields dynamically. We have to enforce strict mapping in the backend Elasticsearch index template to protect Elasticsearch performance; we've seen memory pressure and other Elasticsearch constraints in these kinds of environments when the mapping isn't set to strict.
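To illustrate, the strict setting in our template is along these lines (the template name, pattern, and fields here are placeholders, not our real template):

```
PUT _index_template/k8s-logs
{
  "index_patterns": ["k8s-logs-*"],
  "template": {
    "mappings": {
      "dynamic": "strict",
      "properties": {
        "message":    { "type": "text" },
        "@timestamp": { "type": "date" }
      }
    }
  }
}
```

With `"dynamic": "strict"`, any document containing a field not declared in `properties` is rejected at index time, which is exactly the behavior we want on the Elasticsearch side.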
We'd like to put that load on Logstash, which we can scale horizontally: dynamically merge and remove fields we haven't encountered before. New fields show up frequently because we can't really control the logs coming in (a large-scale Kubernetes deployment across the entire org).
As far as I can tell, this isn't possible out of the box. Right now, when we see Logstash log messages indicating a document was rejected by the index due to strict mapping, we parse those messages for the new field name, then merge that field into another field and drop it. FYI, the logs all arrive as JSON and are parsed with the json codec.
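Concretely, every newly discovered field ends up with a hand-written stanza roughly like this (field names are placeholders, not our real fields):

```
filter {
  # one of these per unmapped field we discover -- names are illustrative
  if [pod_annotations_foo] {
    mutate {
      # fold the unknown field into a catch-all field, then drop the original
      add_field    => { "[extra_fields][pod_annotations_foo]" => "%{[pod_annotations_foo]}" }
      remove_field => [ "pod_annotations_foo" ]
    }
  }
}
```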
As you can probably tell, this process is tedious, hard to automate, and leaves us with a really ugly mutate section in the Logstash pipeline. I'm wondering if there's a better way to accomplish this with Logstash.
If my explanation wasn't clear, I can share more of my index template and Logstash configuration.
Thanks in advance!