The idea is as follows: abstract away the I/O part (input and output) so you can focus on the productive part: the filter, i.e. the parsing of the log itself. Once the filter is set up and working for the log sample you provided, you only need to apply it to your Logstash instance(s) (either manually, or through the central pipeline management).
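To illustrate the idea, here is a minimal sketch of what such a pipeline config could look like: the `input` and `output` sections are throwaway stdin/stdout plumbing, so all the real work (and iteration) happens in the `filter` block. The grok pattern and the file name are just illustrative examples, not part of the tool itself:

```conf
# dev-pipeline.conf -- hypothetical example
# I/O is abstracted: read sample log lines from stdin,
# dump the parsed result to stdout for inspection.
input {
  stdin {}
}

filter {
  # The part you actually iterate on: the parsing of the log itself.
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  stdout { codec => rubydebug }
}
```

Once the filter behaves correctly on the sample, only the `filter` block needs to be copied into the real pipeline, where the production `input`/`output` sections (Beats, Elasticsearch, etc.) replace the stdin/stdout placeholders.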
It's not the cleanest code, and I made some questionable choices; but it works, and should be pretty stable.
So if it can help speed up your development process (especially for newcomers to the Logstash world), feel free to check it out, and don't hesitate to reach out if you have any questions or suggestions!
Thank you very much, that's why I posted it here: I want to help as many people as possible with this little tool.
If you want to help in a concrete way, feel free to leave a star on the GitHub repository, and report any bugs you run into or improvements you would like!
For now, yes, the mini tutorial is available here.
If the requirements are met (Docker / docker-compose), it should work flawlessly.
I migrated to full Docker for security reasons (live demo); before that it was just raw Logstash CLI execution. I will probably maintain both solutions (Docker and non-Docker) in the future, as Docker slows the process down by ~20-30%.
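For reference, a quick way to verify those two prerequisites from a shell before launching the stack (this is just a generic check using the standard binary names; if you use the newer `docker compose` plugin instead of the standalone `docker-compose` binary, adjust the list accordingly):

```shell
#!/bin/sh
# Check that the required binaries are available on the PATH.
for cmd in docker docker-compose; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found ($(command -v "$cmd"))"
  else
    echo "$cmd: MISSING - please install it first"
  fi
done
```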