Filebeat using excessive amount of memory on server

Hello Team,

I am using Filebeat 6.5.1. We have a server that produces a large volume of data per day (75 GB).
I am facing a memory issue with Filebeat on this server: its memory usage keeps increasing, by almost 7 GB every day.
Can you please suggest a configuration to overcome this issue?

This is my input configuration:

```yaml
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.

# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - D:\DATA\LogFiles**

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  exclude_files: ['^D:\DATA\LogFiles\GHS\','^D:\DATA\LogFiles\AutoSys\','^D:\DATA\LogFiles\Flogger\SYSTEM\','\b(\wgmt_c\w)\b', '\b(\wemp_c\w)\b','\b(\wLiveCacheLoader\w)\b','\b(\wLiveCacheIndexer\w)\b','\b(\wHouseKeeping\w)\b','.zip$','.cal$','.gz$','.xml$','.txt.\d','\.ser','.mdmp$']
  ignore_older: 72h
  close_removed: true
  clean_inactive: 72h5m
  close_timeout: 5m

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    level: debug
    review: 1

  ### Multiline options
  multiline.pattern: '^[0-9]{2} [A-Z]{1}[a-z]{2} [0-9]{4},'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 3000
```

Do you process a large number of small files per day, or a small number of large files per day?

Memory leaks on harvester startup/shutdown have been resolved in Filebeat 6.8.1.

There are a number of processes running on that server. Each process keeps creating log files via log rotation, on average one per hour; for example, when a file reaches 5 MB, a rotated file is created. Altogether they produce almost 70 GB of data per day. The registry file size is almost 14 MB.
I restarted Filebeat yesterday (it started with 163 MB memory utilization) and today it has reached 5703 MB. If this continues, the server will run out of memory.

Is the configuration correct? Can you suggest a proper configuration to handle this issue?

Is there no other way than moving to 6.8.1?

Can you please also advise whether close_timeout: 5m is the correct configuration? As per its description in the Filebeat docs, it will close the file handler after the given interval, irrespective of the reading position in the file. We have applied this setting but are still facing the memory issue. Does this mean it is not closing the handler properly?

This scenario sounds like you have many, many small files: about 1400 files per day.
This is quite a lot. Is there a particular reason to rotate at 5 MB? How about switching to 1 GB (giving about 70 files per day)?
The fixes in 6.8.1 should fix a memory leak in exactly this scenario: many small files with rather short processing times.

For how long do you keep these files around? How many files do you have on disk?

A 14 MB registry file is huge. It sounds like a lot of CPU time is spent whenever the registry file snapshot is written. Does the registry file hold files that are not on disk anymore? (The registry file is JSON, so you can easily parse it.)
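As a minimal sketch of that check, assuming the Filebeat 6.x registry format (a single JSON array of file states, each with a `source` path) and that you substitute your actual registry path:

```python
import json
import os

def stale_registry_entries(registry_path):
    """Return registry states whose source file no longer exists on disk."""
    with open(registry_path) as f:
        # In Filebeat 6.x the registry is one JSON array of file states,
        # each carrying the watched file's "source" path and read "offset".
        entries = json.load(f)
    return [e for e in entries if not os.path.exists(e["source"])]

# Example (the path below is an assumption; point it at your registry file):
# stale = stale_registry_entries(r"C:\ProgramData\filebeat\registry")
# print(len(stale), "registry entries point to files no longer on disk")
```

If that list is large, `clean_inactive`/`clean_removed` are not trimming old states fast enough, which keeps the registry snapshot big and expensive to write.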

A low close_timeout is OK. If the file is still on disk and the offset does not point to the end of the file, Filebeat will pick it up again and continue from the last known position. The memory leak found in Filebeat is triggered by closing files often, though. The best way to mitigate this is to reduce the total number of files to process and make sure files don't need to be closed early.
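As a sketch of that mitigation (the option names come from the Filebeat 6.x `log` input; the values are illustrative, not tuned for your workload):

```yaml
filebeat.inputs:
- type: log
  paths:
    - D:\DATA\LogFiles**
  # Cap concurrent harvesters so thousands of small files are not all
  # held open at once (default is 0, i.e. unlimited).
  harvester_limit: 512
  # Drop registry state as soon as a rotated file is deleted from disk,
  # keeping the registry snapshot small.
  clean_removed: true
  # Scan for new files less aggressively than the 10s default.
  scan_frequency: 30s
  # Skip files not modified recently; clean_inactive must be larger
  # than ignore_older + scan_frequency.
  ignore_older: 72h
  clean_inactive: 73h
```

Combined with a larger rotation size (fewer, bigger files), this keeps both the number of open handlers and the registry size down.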

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.