I want to use Filebeat to read and process log files that consist of fixed-size structures. In other words, every consecutive 120 bytes in such a file represents a new chunk of data.
I want to read the chunks and slice them into fields using processors.
I want to develop a new reader, Chunk, and add it to the harvester's chain of readers:
limit -> (multiline -> timeout) -> strip_newline -> json -> encode -> (line XOR chunk) -> log_file
This reader would yield chunks of a fixed size and forward them to the subsequent steps.
What do you think of this idea? Is it the right approach to solving the initial task? Does anyone else need the capability to read fixed-size structures from log files?