Excessive disk usage (~80 MB/s) from filebeat

Hi, I am running filebeat 7.6.0 on each server, with a 3-node Elasticsearch 6.8.16 cluster and Graylog 4.0.8-1. At one point I noticed filebeat writing to disk at ~80 MB/s, while normal writes are around ~20-30 MB/s. I cannot understand why these unusual spikes happen or how to deal with them. When writes reach almost 80 MB/s, other services running on the same server start failing due to lack of RAM.

Example filebeat config:

# Needed for Graylog
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}
filebeat.inputs:
- input_type: log
  paths:
    - /mnt/as/${user.as_build}/Logs/${user.date_folder}/*.txt
  fields:
    service_type: as
  scan_frequency: 5s
  tail_files: true
  type: log
  multiline:
    match: after
    negate: true
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  ignore_older: 1s
  clean_*: true
  clean_inactive: 8s
  close_removed: true
  clean_removed: true

- input_type: log
  paths:
    - /mnt/bs/${user.bs_build}/ErrorLogs/${user.date_folder}/*.txt
  scan_frequency: 5s
  fields:
    service_type: bs
  tail_files: true
  type: log
  multiline:
    match: after
    negate: true
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  ignore_older: 1s
  clean_*: true
  clean_inactive: 8s
  close_removed: true
  clean_removed: true

output.logstash:
    hosts: ["0000:0", "0000:0", "0000:0"]
    loadbalance: true
    index: filebeat
#registry.flush: 10s
path:
  data: /var/lib/graylog-sidecar/collectors/filebeat/data
  logs: /var/lib/graylog-sidecar/collectors/filebeat/log
tags:
- linux
fields:
  dc: 

Thanks for your help. Let me know if you need more information.

Hi!

I'm not sure what the problem could be here. You mention that you see increased disk usage, but also that processes/services are being killed because of RAM. Can you elaborate on how these two are related to each other?
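One thing that can drive disk writes up is the registry: by default, filebeat 7.x flushes its registry file to disk after every acknowledged batch, and with aggressive `clean_*` settings that file gets rewritten constantly. The commented-out line in your config hints at this; as a sketch (the interval is an assumption, tune it to your tolerance for re-sent events), in 7.x the key is:

```yaml
# Flush the registry at most every 10s instead of after every batch.
# Trade-off: state that was published but not yet flushed is lost on a
# crash, so some events may be sent again after a restart.
filebeat.registry.flush: 10s
```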

Also, did you consider upgrading to the latest version?
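A guess worth testing: `ignore_older: 1s` is shorter than your `scan_frequency: 5s`, so files can be skipped entirely, and `clean_inactive: 8s` drops registry state almost immediately. When such a file becomes active again, filebeat re-reads it from the beginning, which could multiply the volume written downstream and explain the spikes. A more conservative sketch for one of your inputs (the durations are assumptions; per the filebeat docs, `clean_inactive` must be greater than `ignore_older` + `scan_frequency`):

```yaml
- type: log
  paths:
    - /mnt/as/${user.as_build}/Logs/${user.date_folder}/*.txt
  scan_frequency: 5s
  ignore_older: 24h      # must be longer than scan_frequency
  clean_inactive: 25h    # must be > ignore_older + scan_frequency
  close_removed: true
  clean_removed: true
```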