Here is the setup I currently have half working: Kubernetes (AKS), with one instance of Logstash and Filebeat running on two pods. There is no DaemonSet for Filebeat, since I want to read files from a specific directory on each of the pods (logs are stored in a mounted persistent volume, specifically Azure File storage); I'm not interested in the pods' Docker stdout. There is also a persistent volume holding Filebeat's registry data (the per-file offsets/deltas), so that redeployments do not lose Filebeat's position in the files.
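For reference, this is a minimal sketch of how the registry is kept on the persistent volume, assuming Filebeat 7.x, where the setting is `filebeat.registry.path` (6.x used `filebeat.registry_file`); the mount path below is a placeholder for the actual PV mount:

```yaml
# filebeat.yml (fragment) -- keep the registry on the persistent volume
# so file offsets survive pod redeployments.
# /mnt/filebeat-data is a placeholder for the actual Azure File PV mount.
filebeat.registry.path: /mnt/filebeat-data/registry
```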
Currently Filebeat is running and sending the logs to Logstash over a given port; I can see these inputs in the Logstash logs. The issue is that once a file becomes inactive after 5 minutes, new entries are no longer read. I have to redeploy in order for the file to be read again from the last offset.
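The Logstash side is just a standard Beats input listening on the port Filebeat ships to (5044 below is a placeholder for the actual port):

```
# logstash pipeline (fragment) -- receive events from Filebeat
input {
  beats {
    port => 5044
  }
}
```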
I have seen other threads and posts on this suggesting to increase the timeout from 5 minutes, but that is not a solution. When the file is written to again, Filebeat is simply not reading the new input. (Whether the timeout is 5 minutes or 6 months, if a microservice is called even once a year I should still be able to read that microservice's log file via Filebeat.)
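For completeness, this is the shape of the input configuration I am referring to. The 5-minute timeout is `close_inactive` (default `5m`), which only closes the file handle; on the next `scan_frequency` tick (default 10s) Filebeat should re-open the file if it has grown. Paths below are placeholders for my actual PV log directory:

```yaml
# filebeat.yml (fragment) -- the timeout in question is close_inactive
filebeat.inputs:
  - type: log
    paths:
      - /mnt/app-logs/*.log   # placeholder for the actual PV log directory
    # close_inactive (default 5m) closes the harvester's file handle after
    # 5 minutes without new data; the next scan should re-open the file
    # when new lines are appended -- which is not happening in my case.
    close_inactive: 5m
    scan_frequency: 10s
```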
Others say this issue was resolved after Filebeat version 5.0. I'm using Filebeat version:
I'm using Azure Kubernetes Service (AKS) version 1.17.3.
Any ideas as to why Filebeat is not reading new input after a file becomes inactive (the file should become active again once it is written to)?