Filebeat Consuming a High Amount of Memory

Hi,

Filebeat on one of our machines is consuming a high amount of memory.
Our Filebeat configuration is as follows:

- type: log
  paths:
  - /var/log/fmw/app/portal/portal-api.log
  include_lines: ['^\d{4}-\d{2}-\d{2}']
  close_inactive: 24h
  close_removed: true
  clean_inactive: 26h
  clean_removed: true
  scan_frequency: 1s
  ignore_older: 25h
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  timeout: 5m
  fields:
    log_type: mobile_portal_api
  fields_under_root: true
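
For reference, the multiline settings above make any line starting with a `YYYY-MM-DD` date begin a new event, while non-matching lines (such as stack-trace continuations) are appended to the preceding event (`negate: true`, `match: after`). A minimal sketch of that grouping logic in Python — illustrative only, not Filebeat's actual implementation:

```python
import re

# Same pattern as multiline.pattern in the config above.
pattern = re.compile(r'^\d{4}-\d{2}-\d{2}')

lines = [
    "2018-10-17 01:06:57 ERROR something failed",  # matches -> starts a new event
    "java.lang.NullPointerException",              # no match -> appended to previous
    "    at com.example.Foo.bar(Foo.java:42)",     # no match -> appended to previous
    "2018-10-17 01:06:58 INFO next request",       # matches -> starts a new event
]

events = []
for line in lines:
    if pattern.match(line):
        events.append(line)           # date-prefixed line opens a new event
    elif events:
        events[-1] += "\n" + line     # continuation line joins the open event

print(len(events))  # 2 events after multiline grouping
```

Note that a single long stack trace becomes one large event this way, which is relevant to the memory discussion below.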

Filebeat is tracking 13 log files on this server, with about 30 million documents sent daily.

The log from Filebeat is:

2018-10-17T01:06:57.163+0700	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":9605870,"time":9605871},"total":{"ticks":77359460,"time":77359467,"value":77359460},"user":{"ticks":67753590,"time":67753596}},"info":{"ephemeral_id":"65d63e03-a732-473e-a520-2cb14f2cde0e","uptime":{"ms":1343130030}},"memstats":{"gc_next":18316201328,"memory_alloc":13287472672,"memory_total":9897155880544,"rss":3403776}},"filebeat":{"events":{"active":101,"added":2057,"done":1956},"harvester":{"open_files":689,"running":689,"started":14}},"libbeat":{"config":{"module":{"running":13},"reloads":3},"output":{"events":{"acked":1998,"active":69,"batches":30,"total":2067},"read":{"bytes":180},"write":{"bytes":123612873}},"pipeline":{"clients":13,"events":{"active":215,"filtered":14,"published":2043,"total":2057},"queue":{"acked":1883}}},"registrar":{"states":{"current":687,"update":1895},"writes":36},"system":{"load":{"1":2.16,"15":0.59,"5":1.14,"norm":{"1":0.36,"15":0.0983,"5":0.19}}}}}}
2018-10-17T01:06:58.459+0700	INFO	log/harvester.go:216	Harvester started for file: /var/log/fmw/app/credo-integration/credo-integration.log
2018-10-17T01:06:58.465+0700	INFO	log/harvester.go:233	File was removed: /var/log/fmw/app/credo-integration/credo-integration.log. Closing because close_removed is enabled.
2018-10-17T01:06:59.460+0700	INFO	log/harvester.go:216	Harvester started for file: /var/log/fmw/app/credo-integration/credo-integration.log

I restarted the Filebeat service, but the same problem happened again.
Please advise me on how to avoid this.

Thanks

From the metrics, Filebeat is harvesting 689 files (`harvester.open_files: 689`).

How big are the events in these files? Do you have any long stack traces?

I also see you have 13 modules running. Can you share your complete Filebeat logs?

The RSS is ~3.4 MB, yet the next GC target (`gc_next`) is ~18 GB. It looks like you received some very large events from many files at about the same time.
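
If large multiline events turn out to be the cause, one way to bound memory is to cap event size and the number of concurrently open harvesters. A hedged sketch — the option names come from the Filebeat log input reference, but the values here are illustrative, not a recommendation for your workload:

```yaml
- type: log
  paths:
    - /var/log/fmw/app/portal/portal-api.log
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500   # cap lines merged into one multiline event
  max_bytes: 1048576         # truncate any single event beyond 1 MiB
  harvester_limit: 100       # limit harvesters open in parallel for this input
```

Note that `harvester_limit` interacts with `close_inactive`: harvesters above the limit only start once an open file is closed.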

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.