Filebeat memory leak?

Hi,

I've set up a Filebeat service running on Windows Server 2012 to ingest some application log files into Elasticsearch. The feeding path is: filebeat -> logstash -> Elasticsearch. After running for about 2-3 days, I can see in Performance Monitor that its memory utilization keeps increasing, from a few tens of KB at the beginning to almost 1 GB now. I'm wondering what caused this. Is it a leak, or something caused by misconfiguration or the source log files? There is also about a 5-10% CPU increase.

I also noticed there are errors in filebeat log file.
2016-03-02T16:33:32+08:00 ERR Failed opening xxx.log: Error creating file '%!s(*uint16=0x33542d20)': Access is denied.
2016-03-02T16:33:32+08:00 ERR Stop Harvesting. Unexpected file opening error: Error creating file '%!s(*uint16=0x33542d20)': Access is denied.
The Filebeat service has read access to the application log folder. And I found that the process locking the file above (in the error log) is actually the Filebeat service itself. My application creates a new log file when the current log file hits a certain size limit, e.g. 5 MB; then after some time, a script kicks in to move the old log file to the filer. After Filebeat runs for 2-3 days, quite a few old logs are stuck in the folder, and I can't even open them; I get a "file in use" error. But once I stopped the Filebeat service, those old files were quickly moved to the filer. I don't understand why. Can someone explain to me how Filebeat works, and what might have caused this error? Thank you very much!

A copy of the filebeat configuration:

filebeat:
  prospectors:
    -
      paths:
        - //ApplicationFolder/AppNameThread.log  # 10 different folders are configured here
      input_type: log
      document_type: ApplicationName
      multiline:
        pattern: ^[0-9]{2}-[a-zA-Z]{3}-[0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}
        negate: true
        match: after
        max_lines: 10000
      tail_files: true
  registry_file: "C:/ProgramData/filebeat/registry"

output:
  logstash:
    hosts: ["elkserver:4501"]

logging:
  files:
    path: E:/log/filebeat
    name: sigma.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
  level: warning

For the second issue, you might want to enable the force_close_older setting. Perhaps the memory increase is caused by that as well. Can you give it a try and see if it solves the leak?

Thank you for your reply.

I can't find the option force_close_older in the Filebeat documentation; I'm using version 1.1. There are two settings that seem close:
force_close_files: this doesn't apply to my scenario, as the file name never changes once it's created
ignore_older: are you referring to this setting? In my case, scan_frequency is 10s, so if I set ignore_older to 10 minutes, I should not lose any data, correct?

I'll give ignore_older a try and see how it goes.
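For anyone following along, here is a minimal sketch of the prospector section with ignore_older set (paths and document_type mirror the config above; the 10m value and the comment about handle release reflect my understanding of Filebeat 1.x behavior, where files older than ignore_older are closed and no longer harvested):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - //ApplicationFolder/AppNameThread.log
      input_type: log
      document_type: ApplicationName
      # In Filebeat 1.x, files not modified within ignore_older are closed,
      # releasing the open handle so the rotation/move script can pick them up.
      # With scan_frequency at 10s, a 10m window should not drop any data.
      ignore_older: 10m
```

Note that this only releases handles on files that have stopped being written to; the actively written log keeps its harvester open as expected.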