Metricbeat using more than expected memory

I want to use Metricbeat to monitor Logstash and have followed the instructions here: https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html. Everything works, but Metricbeat is using 1.3 GB of RAM just to monitor Logstash, which feels very high for a "lightweight shipper". This is causing Logstash to fail because it runs out of memory (Metricbeat is taking most of it).

Is there a way to limit Metricbeat's memory usage, or any other suggestions? I'm using Metricbeat 7.5.2, Logstash 7.6.0 and Elasticsearch 7.5.2.

How are you measuring this usage?

Both top and systemctl status metricbeat show the usage. It starts low, but gradually increases to about 1.3 GB.

Posting what you are seeing would be useful. But please don't post pictures of text; they are difficult to read, impossible to search or replicate (if it's code), and some people may not even be able to see them :slight_smile:

Output of systemctl status metricbeat -l:


● metricbeat.service - Metricbeat is a lightweight shipper for metrics.
   Loaded: loaded (/usr/lib/systemd/system/metricbeat.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-12-01 05:32:10 UTC; 8min ago
     Docs: https://www.elastic.co/products/beats/metricbeat
 Main PID: 29113 (metricbeat)
   Memory: 1.1G
   CGroup: /system.slice/metricbeat.service
           └─29113 /usr/share/metricbeat/bin/metricbeat -e -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.data /var/lib/metricbeat -path.logs /var/log/metricbeat

Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.601Z        INFO        [index-management]        idxmgmt/std.go:256        Auto ILM enable success.
Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.603Z        INFO        [index-management.ilm]        ilm/std.go:138        do not generate ilm policy: exists=true, overwrite=false
Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.603Z        INFO        [index-management]        idxmgmt/std.go:269        ILM policy successfully loaded.
Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.603Z        INFO        [index-management]        idxmgmt/std.go:408        Set setup.template.name to '{metricbeat-7.5.2 {now/d}-000001}' as ILM is enabled.
Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.603Z        INFO        [index-management]        idxmgmt/std.go:413        Set setup.template.pattern to 'metricbeat-7.5.2-*' as ILM is enabled.
Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.603Z        INFO        [index-management]        idxmgmt/std.go:447        Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.5.2 {now/d}-000001} as ILM is enabled.
Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.603Z        INFO        [index-management]        idxmgmt/std.go:451        Set settings.index.lifecycle.name in template to {metricbeat-7.5.2 {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.606Z        INFO        template/load.go:89        Template metricbeat-7.5.2 already exists and will not be overwritten.
Dec 01 05:40:28 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:28.606Z        INFO        [index-management]        idxmgmt/std.go:293        Loaded index template.
Dec 01 05:40:40 ip.ec2.internal metricbeat[29113]: 2020-12-01T05:40:40.678Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1230,"time":{"ms":36}},"total":{"ticks":15810,"time":{"ms":551},"value":15810},"user":{"ticks":14580,"time":{"ms":515}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":18},"info":{"ephemeral_id":"10f51b33-c981-4732-9956-cea04b277ddb","uptime":{"ms":510088}},"memstats":{"gc_next":1473010480,"memory_alloc":1004669720,"memory_total":2687763688,"rss":3563520},"runtime":{"goroutines":65}},"libbeat":{"config":{"module":{"running":0}},"output":{"read":{"bytes":2783},"write":{"bytes":1492}},"pipeline":{"clients":5,"events":{"active":1832,"published":110,"retry":17,"total":110}}},"metricbeat":{"logstash":{"node":{"events":57,"success":57},"node_stats":{"events":6,"success":6}},"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":1,"success":1},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":9,"success":9},"process":{"events":21,"success":21},"process_summary":{"events":3,"success":3},"socket_summary":{"events":3,"success":3}}},"system":{"load":{"1":0.19,"15":0.11,"5":0.15,"norm":{"1":0.095,"15":0.055,"5":0.075}}}}}}

Output of top:

18991 root      20   0  668792  35312   8820 R   4.3  0.9   3845:10 filebeat
27608 logstash  39  19 5464584 1.284g  19084 S   3.0 35.2  11:33.26 java
29113 root      20   0 2096600 1.381g  30076 S   1.3 37.9   0:19.27 metricbeat
    1 root      20   0  128180   5652   2888 S   0.0  0.1  38:58.29 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:05.83 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0   1:13.16 ksoftirqd/0
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    7 root      rt   0       0      0      0 S   0.0  0.0   0:14.70 migration/0
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_bh
    9 root      20   0       0      0      0 S   0.0  0.0  14:55.16 rcu_sched

This is my metricbeat.yml (I also have cloud.id and cloud.auth defined):

metricbeat.modules:
- module: logstash
  metricsets: ["node", "node_stats"]
  enabled: true
  period: 10s
  hosts: ["localhost:9600"]

@warkolm I decided to delete the metricbeat index it was writing to, and that seems to have been the problem. I think it wanted the index to be named differently, so events just got backed up in the queue. Is there any way to limit that queue to a certain amount of MB? Thanks!
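
The internal memory queue is sized in events rather than MB, but you can make it smaller in metricbeat.yml to bound how much can pile up when the output is backed up. A sketch along these lines (the values are just examples, not a recommendation):

# Cap Metricbeat's internal memory queue (counted in events, not MB)
queue.mem:
  events: 2048            # maximum number of events buffered (default is 4096)
  flush.min_events: 512   # minimum batch size forwarded to the output
  flush.timeout: 5s       # flush even a small batch after this long

A smaller queue means less memory held when Elasticsearch rejects or delays events, at the cost of dropping throughput headroom during bursts.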
