Hi,
I'm using LS 6.0.0 and ES 6.0.0, and my sincedb files are only created once Logstash reaches the end of a file. In my use case the log files are about 10 GB each and I've configured the file input to read from the "beginning", which means the .sincedb file is neither created nor updated until Logstash has reached EOF.
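For reference, this is roughly what my file input looks like (paths and the sincedb location below are placeholders, not my actual values):

```
input {
  file {
    path => "/var/log/myapp/*.log"                     # placeholder path to the ~10GB log files
    start_position => "beginning"                      # read each file from the start
    sincedb_path => "/usr/share/logstash/sincedb/myapp.sincedb"  # explicit sincedb location
  }
}
```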
This is a big problem for me: if my EC2 instance restarts abruptly, or if I stop the Docker container to make a configuration change, Logstash parses the whole file again instead of continuing where it left off.
Another user opened an issue exactly like this about 2 years ago, and it was closed with: "We are building a new input that is designed from scratch to properly support the reading of many files and will take this request onboard for this new input."
Is this issue going to be addressed any time soon, or should I just go and modify the tail.rb source code?
Otherwise I suppose I'll have to redesign my Logstash flow to use persistent disk queues and read directly from a syslog/Filebeat stream instead.
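For completeness, the fallback I have in mind would look something like this (a rough sketch only, with placeholder paths and ports): enable the persistent queue in logstash.yml and replace the file input with a Beats listener fed by Filebeat.

```
# logstash.yml (sketch) – enable the on-disk persistent queue
queue.type: persisted
path.queue: /usr/share/logstash/data/queue   # placeholder path

# pipeline config (sketch) – read from a Filebeat stream instead of the file input
input {
  beats {
    port => 5044   # placeholder port
  }
}
```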
Assistance in this regard would be highly appreciated.