FileBeat Elasticsearch module not closing files after rollover

Hi,

I just ran into a strange issue after enabling the Elasticsearch module in Filebeat. Today, during a health check, I saw that my coordinating node was at over 90% disk utilization.

After logging in and checking with du -sh, the numbers did not add up to the reported usage. Running lsof +L1 showed that Filebeat was keeping many files open even after Elasticsearch had rolled them over.
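The mechanics behind this can be sketched in Python (a minimal illustration of deleted-but-open files, not Filebeat code): as long as any process holds a file descriptor, deleting the file only removes its directory entry, so its blocks still occupy disk space that du cannot see but lsof +L1 can.

```python
import os
import tempfile

# Create a temp file and write 1 MiB into it.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 1024 * 1024)

# Remove the directory entry -- the name is gone, but the inode
# (and its disk blocks) survive while the descriptor stays open.
os.unlink(path)

# The open descriptor still sees the full file; this is the space
# du -sh misses and lsof +L1 reports as "deleted".
size = os.fstat(fd).st_size
print(size)  # 1048576

# Only closing the descriptor lets the kernel reclaim the space,
# which is exactly what close_renamed makes Filebeat do.
os.close(fd)
```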

I have now added the following lines to my modules.d/elasticsearch.yml:

- module: elasticsearch

  server:
    enabled: true

    # Added now to remove old files #
    input:
      close_renamed: true
      close_timeout: 5m
    #################################

    # Server log
    var.paths:
      - /var/log/elasticsearch/irmelk.log

Now I see that Filebeat releases the file once it's done with it and does not keep the handle open. Is this the correct approach to solve the issue?

Yes, it is. :slight_smile:

Thanks. Why can't this be the default config that ships with the module?

It would make it more of a plug-and-play solution, especially since Elasticsearch is an in-house component.

The main reason is that data loss might occur when close_renamed and close_timeout are set. Once Filebeat closes the file, it may miss events written shortly before the file is removed or renamed, because it cannot reopen the file in time given the configured scan_frequency.
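One way to shrink that window (a sketch, not an official recommendation; the values are illustrative) is to scan for file changes more often alongside the close options:

    - module: elasticsearch
      server:
        input:
          close_renamed: true
          close_timeout: 5m
          # scan_frequency defaults to 10s; a shorter interval narrows
          # the window in which a renamed file can disappear unread.
          scan_frequency: 1s

The trade-off is more frequent directory scans, so very low values add CPU and I/O overhead on hosts with many log files.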

But in this scenario, is there a way to avoid this data loss? Or is it acceptable?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.