My application produces logs in ECS format, where loggers write the log level as a flat field, `"log.level"`. I use Filebeat 7.17.5 to ship the logs to Elasticsearch. As stated in the documentation, I set `ndjson.expand_keys` to `true`:
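(The original snippet was not preserved here. For reference, this is roughly how that setting looks in a 7.17 filestream input; the input type and paths are assumptions, not from the original post:)

```yaml
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/myapp/*.ndjson   # placeholder path
    parsers:
      - ndjson:
          keys_under_root: true   # merge decoded JSON fields into the event root
          expand_keys: true       # recursively de-dot keys like "log.level" into nested objects
```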
Your configuration looks correct. I tried to reproduce the issue but did not manage to, so my guess is that there is some small problem in your configuration, probably indentation. Try double-checking it.
Here is the configuration I used to test (and worked fine):
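(The snippet itself did not survive in this copy of the thread. A minimal sketch of what such a test configuration might look like, with placeholder paths and a console output for easy inspection; none of these values are from the original post:)

```yaml
filebeat.inputs:
  - type: filestream
    paths:
      - /tmp/test-logs/*.json   # placeholder path
    parsers:
      - ndjson:
          keys_under_root: true
          expand_keys: true

output.console:
  pretty: true   # print each published event to stdout
```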
If you still can't find the cause of the issue, then please post your whole input configuration here, including the `filebeat.inputs` part (just redact any sensitive information).
Thank you for the quick reply. I've checked my configuration with a validator and it's correct, and the Filebeat logs don't contain any complaints. One thing I forgot to mention is that I ship logs using an application-specific module. Here is a redacted version of the fileset input config:
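(The redacted snippet is missing from this copy. A typical fileset input template (`config/<fileset>.yml` inside a module) that enables key expansion might look like the following; the Go-template `paths` loop is the standard module convention, and everything else here is a placeholder, not the original config:)

```yaml
type: filestream
paths:
{{ range $i, $path := .paths }}
  - {{$path}}
{{ end }}
parsers:
  - ndjson:
      keys_under_root: true
      expand_keys: true
```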
I can't seem to find anything wrong with your config, I even copied and pasted it into a module and it worked for me.
Try looking for errors in Filebeat's logs; they might give us some insight.
Are there other processors/pipelines configured on your module, or globally, that could also be interacting with those events/fields?
Another thing you can try is running Filebeat with debug logging enabled and looking for the `Publish event:` messages. The whole event sent to Elasticsearch is logged there, so we can see exactly what Filebeat is outputting.
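(Debug logging can be enabled with the `-d "*"` command-line flag, or via the config file with something like the following; the file path shown is an example:)

```yaml
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat   # adjust to your environment
  name: filebeat
```

Then grep the log file for `Publish event` to see the full events.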
While I was adding and testing other stuff, I needed to reinstall/restart Filebeat, and now the logs look good, i.e. the keys are expanded. It might be that Filebeat somehow didn't pick up the module changes (although we have reload enabled). Anyhow, the issue is resolved. Thanks for your support, @TiagoQueiroz!
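(For anyone else hitting this: module reload is controlled by `filebeat.config.modules`. A setup like the following should pick up changes to module configs automatically; the reload period shown is an example value:)

```yaml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
```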