Task:
I want to use Filebeat to read and process log files that consist of fixed-size structures. In other words, each consecutive 120 bytes in such a file represents a new chunk of data.
I want to read these chunks and slice them into fields using processors.
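For context, this is roughly the kind of slicing I have in mind for each 120-byte record. The field names and offsets below are invented for illustration; a real format would define its own layout:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceRecord splits one fixed-size 120-byte record into named fields.
// The offsets and field names are hypothetical, purely to illustrate
// what the downstream processing step would do with each chunk.
func sliceRecord(record []byte) map[string]string {
	trim := func(b []byte) string { return strings.TrimRight(string(b), " ") }
	return map[string]string{
		"timestamp": trim(record[0:24]),
		"level":     trim(record[24:32]),
		"message":   trim(record[32:120]),
	}
}

func main() {
	// Build one space-padded 120-byte sample record (24 + 8 + 88 bytes).
	rec := []byte(fmt.Sprintf("%-24s%-8s%-88s", "2024-01-02T03:04:05Z", "INFO", "service started"))
	fmt.Println(sliceRecord(rec))
}
```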
Idea:
I want to develop a new reader, Chunk, and add it to the harvester's chain of readers:
This reader will yield new chunks of fixed size and forward them to the subsequent steps.
What do you think of this idea? Is it the right approach to solving the initial task? Does anyone else need the capability to read fixed structures from log files?
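Here is a rough sketch of what I have in mind. The Message struct below only mimics the shape of Filebeat's internal reader message; the real type (and the Next() interface it would plug into) lives in Filebeat's reader packages and may differ between versions, so treat this as a sketch under those assumptions:

```go
package chunk

import (
	"io"
	"time"
)

// Message mirrors the rough shape of Filebeat's internal reader message
// (content plus byte count and timestamp). The actual type is defined in
// Filebeat's reader packages; this stand-in just keeps the sketch
// self-contained.
type Message struct {
	Ts      time.Time
	Content []byte
	Bytes   int
}

// Chunk yields fixed-size blocks from the underlying stream instead of
// newline-delimited lines.
type Chunk struct {
	in   io.Reader
	size int
}

// New wraps the previous reader in the chain with a fixed chunk size.
func New(in io.Reader, size int) *Chunk {
	return &Chunk{in: in, size: size}
}

// Next reads exactly one chunk. A trailing partial chunk is returned
// together with io.ErrUnexpectedEOF so the caller can decide whether
// to keep or drop it.
func (c *Chunk) Next() (Message, error) {
	buf := make([]byte, c.size)
	n, err := io.ReadFull(c.in, buf)
	if err != nil && err != io.ErrUnexpectedEOF {
		return Message{}, err
	}
	return Message{
		Ts:      time.Now(),
		Content: buf[:n],
		Bytes:   n,
	}, err
}
```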
Is this 'fixed' chunk all ASCII, or is there some binary in there as well?
I was hoping to make the reader chain configurable one day. We don't want full parsing support, but chunking and different line-splitting/multiline strategies could be implemented and reused in Filebeat modules more easily.