I am currently testing with Filebeat (run-once) + Logstash's aggregate plugin, but it feels cumbersome and a bit hacky. Any ideas or hints on how to parse this data format are welcome.
Just to note: I don't really have restrictions on how the data gets into Elasticsearch. Whether it's through a Beat, Logstash, a plugin, an ES ingest pipeline, or a custom script, I'd simply prefer the simplest (i.e. most elegant) option.
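For reference, the aggregate-plugin approach mentioned above usually looks something like the sketch below. This is only a hedged illustration, not the actual pipeline in use: the `task_id` field (`record_id`), the grok pattern, and the map keys are all hypothetical placeholders, since the actual data format isn't shown here.

```
filter {
  # Hypothetical: extract a correlation id and a payload from each line.
  grok {
    match => { "message" => "%{WORD:record_id} %{GREEDYDATA:payload}" }
  }

  # Collect all lines sharing the same record_id into one map,
  # then emit the map as a single event once no more lines arrive.
  aggregate {
    task_id                      => "%{record_id}"
    code                         => "map['lines'] ||= []; map['lines'] << event.get('payload')"
    push_map_as_event_on_timeout => true
    timeout                      => 5
    timeout_task_id_field        => "record_id"
  }
}
```

One known caveat with this approach: the aggregate filter requires a single Logstash worker (`pipeline.workers: 1`) so that related lines are processed in order, which is part of what makes it feel cumbersome.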
Concerning the ES ingest node: it is not designed for this kind of use.
What I mean is that the ingest node is meant for "simple" cases where all lines are processed in the same way.
For example, there is no if/else statement in ES ingest node.
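To illustrate the kind of "simple", uniform processing the ingest node is aimed at, here is a minimal hedged sketch of an ingest pipeline: every incoming document goes through the same fixed list of processors. The pipeline name and the grok pattern below are made up for the example.

```
PUT _ingest/pipeline/simple-line-parse
{
  "description": "Apply the same parsing to every document",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:request}"]
      }
    },
    {
      "remove": { "field": "message" }
    }
  ]
}
```

Every document indexed with `?pipeline=simple-line-parse` is run through exactly these processors in order, which is why stateful, multi-line aggregation doesn't fit this model.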