Filebeat on Kubernetes: file closed after 5 min of inactivity, new entries never read

Hey,

Here is the setup I currently have half working: Kubernetes (Azure AKS) with one instance of Logstash and Filebeat running on two pods. There is no DaemonSet for Filebeat, since I want to read files from a specific directory on each pod (the logs are stored on a mounted persistent volume, specifically Azure File storage); I'm not interested in the pods' Docker stdout. There is also a persistent volume holding Filebeat's registry data (the per-file offsets), so redeployments don't lose the read positions.
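For reference, the relevant input part of my Filebeat config looks roughly like this (a sketch, not my exact config; the glob and registry path are illustrative, the directory is the one from my debug logs below):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    # Read application log files from the mounted Azure File volume
    paths:
      - /usr/share/filebeat/mslogshare/*.log

# Keep the registry on the persistent volume so file offsets
# survive pod redeployments (path here is illustrative)
filebeat.registry.path: /usr/share/filebeat/data/registry
```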

Currently Filebeat is running and sending the logs to Logstash over a given port; I can see these inputs in the Logstash logs. The issue is that once a file has been inactive for 5 minutes, new entries are no longer read. I have to redeploy in order to read the file again from the last offset.
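The output side is just the standard Logstash output, along these lines (host and port are illustrative):

```yaml
# Ship events to the Logstash pod over the Beats protocol
output.logstash:
  hosts: ["logstash:5044"]
```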

I've seen other threads and posts on this suggesting I increase the timeout from 5 minutes. That is not a solution: when the file is written to again, Filebeat simply does not read the new input. (Whether the timeout is 5 minutes or 6 months, if a microservice is called even once a year, I should still be able to read that microservice's log file via Filebeat.)

Others say this issue was resolved after Filebeat version 5.0. I'm using Filebeat version:

docker.elastic.co/beats/filebeat:7.6.1

Using Azure Kubernetes Service version 1.17.3.

Any ideas as to why Filebeat is not reading new input after a file becomes inactive (the file should become active again once it is written to)?

Thanks

A little more background in case it helps: the microservices are written in .NET Core, using log4net with a file appender. This shouldn't matter, as it just creates a something.log file with a specific format for each log entry.

I haven't changed scan_frequency (the default is 10 seconds), and close_inactive is still at its default (5 minutes).

In theory, if a log entry is written every 10 minutes (as an example), it should appear in the Filebeat output at most 10 seconds later, after the next scan. Note: the logs don't actually come in every 10 minutes; that's just an example.
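To be explicit, these are the two settings in play, shown here at their documented defaults (I have not overridden either of them):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /usr/share/filebeat/mslogshare/*.log
    # Default: close the harvester after 5 minutes with no new data.
    # The file should be re-opened once it grows again.
    close_inactive: 5m
    # Default: re-scan the paths for new/updated files every 10 seconds.
    scan_frequency: 10s
```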

Extra info: with debug logging enabled in Filebeat, I can see the scan interval checking the file(s) in question:

2020-03-25T08:39:33.230Z DEBUG [input] log/input.go:421 Check file for harvesting: /usr/share/filebeat/mslogshare/logging.log
2020-03-25T08:39:33.230Z DEBUG [input] log/input.go:511 Update existing file for harvesting: /usr/share/filebeat/mslogshare/logging.log, offset: 939
2020-03-25T08:39:33.230Z DEBUG [input] log/input.go:565 File didn't change: /usr/share/filebeat/mslogshare/logging.log

The offset is 939, but if I open the actual logging.log file I can see extra entries after character position 939. They're not filtered out or anything; the data after 939 is in the same format as the data before 939.

I'm at a loss.