Memory usage is at an extremely high level

Hello World!

Memory usage seems a bit excessive for Metricbeat, does it not?

# systemctl status metricbeat.service 
● metricbeat.service - Metricbeat is a lightweight shipper for metrics.
   Loaded: loaded (/lib/systemd/system/metricbeat.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-09-04 16:27:07 UTC; 6h ago
     Docs: https://www.elastic.co/products/beats/metricbeat
 Main PID: 30807 (metricbeat)
    Tasks: 50 (limit: 4915)
   Memory: 4.7G
      CPU: 19h 9min 10.908s
   CGroup: /system.slice/metricbeat.service
           └─30807 /usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.data /var/lib/metricbeat -path.logs /var/log/metricbeat

Sep 04 16:27:07 app11 systemd[1]: Started Metricbeat is a lightweight shipper for metrics..
# systemctl status filebeat.service 
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-09-03 06:08:46 UTC; 1 day 11h ago
     Docs: https://www.elastic.co/products/beats/filebeat
 Main PID: 8464 (filebeat)
    Tasks: 25 (limit: 4915)
   Memory: 22.0M
      CPU: 7min 1.198s
   CGroup: /system.slice/filebeat.service
           └─8464 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

Sep 03 06:08:46 app11 systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..
# 

Another system, similar to the first...

# systemctl status metricbeat.service 
● metricbeat.service - Metricbeat is a lightweight shipper for metrics.
   Loaded: loaded (/lib/systemd/system/metricbeat.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-09-02 21:21:21 UTC; 1 day 20h ago
     Docs: https://www.elastic.co/products/beats/metricbeat
 Main PID: 14947 (metricbeat)
    Tasks: 31 (limit: 4915)
   Memory: 97.4M
      CPU: 12h 52min 42.526s
   CGroup: /system.slice/metricbeat.service
           └─14947 /usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.data /var/lib/metricbeat -path.logs /var/log/metricbeat

Sep 02 21:21:21 app12 systemd[1]: Started Metricbeat is a lightweight shipper for metrics..
# systemctl status filebeat.service 
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-09-02 21:21:14 UTC; 1 day 20h ago
     Docs: https://www.elastic.co/products/beats/filebeat
 Main PID: 14721 (filebeat)
    Tasks: 26 (limit: 4915)
   Memory: 17.1M
      CPU: 1min 29.320s
   CGroup: /system.slice/filebeat.service
           └─14721 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

Sep 02 21:21:14 app12 systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..
# 

Please advise.

Hello,

Thanks for reaching out about Metricbeat's memory usage. Which version of Metricbeat are you running? I found a similar post that details memory issues on the 7.1.x releases. The issue in that post looks like it was fixed with the 7.2.x release.

# metricbeat version
metricbeat version 6.8.2 (amd64), libbeat 6.8.2 [0ffbeab5a52fa93586e4178becf1252e6a837028 built 2019-07-24 14:33:55 +0000 UTC]
#

Memory usage gets a little out of hand (upwards of 1 GB between hourly restarts), so in the meantime I put the following "workaround" in place:

# crontab -l | tail -1
@hourly /bin/systemctl restart metricbeat.service
#
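If an unconditional hourly restart is too blunt, a threshold-based variant of the same workaround is possible. This is a hypothetical sketch, not something from the thread: the 1 GiB limit and the use of `systemctl show` plus /proc are assumptions.

```shell
#!/bin/sh
# Hypothetical refinement of the hourly-restart workaround: restart
# metricbeat only when its resident memory exceeds a threshold.
# The 1 GiB limit is an assumption, not a recommendation from the thread.
LIMIT_KB=1048576  # 1 GiB expressed in kB, matching /proc's VmRSS unit

# Resident set size (kB) of the service's main PID, read from /proc.
rss_kb() {
  pid=$(systemctl show -p MainPID --value metricbeat.service)
  awk '/^VmRSS:/ {print $2}' "/proc/$pid/status"
}

current=$(rss_kb)
if [ -n "$current" ] && [ "$current" -gt "$LIMIT_KB" ]; then
  systemctl restart metricbeat.service
fi
```

Run from cron at whatever interval suits you; it only restarts the service when the limit is actually exceeded.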

Alexus,

Can you paste the config you're using? Did this start with 6.8.2?
Do you have a way of testing a newer version of Metricbeat to see if you run into the same memory issues?

I use "Beats central management"; here is my config:

metricbeat.yml:

# grep -v ^# /etc/metricbeat/metricbeat.yml
management:
  enabled: true
  period: 1m0s
  events_reporter:
    period: 30s
    max_batch_size: 1000
  access_token: ${management.accesstoken}
  kibana:
    protocol: https
    host: x.x.x:443
    username: x
    password: x
    ssl: null
    timeout: 10s
    ignoreversion: true
  blacklist:
    output: console|file
# 

I just upgraded my beats to 6.8.3:

# metricbeat version
metricbeat version 6.8.3 (amd64), libbeat 6.8.3 [9be0dc0ce65850ca0efb7310a87affa193a513a2 built 2019-08-29 18:13:26 +0000 UTC]
#

If you're using CM, what modules and metricsets do you have enabled? Are you still seeing the issue with 6.8.3?

modules:

  • docker
  • rabbitmq
  • redis
  • system

In about an hour since the last time I restarted metricbeat.service:

# systemctl status metricbeat.service 
● metricbeat.service - Metricbeat is a lightweight shipper for metrics.
   Loaded: loaded (/lib/systemd/system/metricbeat.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-09-11 15:00:15 UTC; 1h 5min ago
     Docs: https://www.elastic.co/products/beats/metricbeat
 Main PID: 8907 (metricbeat)
    Tasks: 30 (limit: 4915)
   Memory: 520.4M
      CPU: 37min 29.201s
   CGroup: /system.slice/metricbeat.service
           └─8907 /usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml -path.home /usr/

Sep 11 15:00:15 app11 systemd[1]: Started Metricbeat is a lightweight shipper for metrics..
# metricbeat version
metricbeat version 6.8.3 (amd64), libbeat 6.8.3 [9be0dc0ce65850ca0efb7310a87affa193a513a2 built 2019-08-29 18:13:26 +0000 UTC]
#

@alexus

Sorry to keep asking for info; it's a bit hard to debug from CM. What output are you using? Are you using the add_kubernetes_metadata processor?

Also, can you get a memory profile? You can get one by starting Metricbeat with -httpprof localhost:6060 and then downloading http://localhost:6060/debug/pprof/heap. Wait until you start seeing high memory use before downloading it.
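The capture step can be sketched as a small script. The output path is made up for illustration; the -httpprof address is the one suggested above.

```shell
#!/bin/sh
# Sketch of capturing a heap profile, assuming metricbeat was started
# with -httpprof localhost:6060 as suggested in the thread.
PPROF_URL="http://localhost:6060/debug/pprof/heap"
OUT="/tmp/metricbeat-heap-$(date +%Y%m%d-%H%M%S).pprof"

# Wait until memory is high, then pull the profile from the endpoint.
if curl -fsS -o "$OUT" "$PPROF_URL"; then
  echo "heap profile saved to $OUT"
else
  echo "profiling endpoint not reachable; was metricbeat started with -httpprof?" >&2
fi
```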

Please, there's definitely no need to be sorry) I would love to help in whichever way I can (help me help you to help me).

I use Elasticsearch for output, and no, I'm not using add_kubernetes_metadata (at least for now). How can I transfer the heap file over to you?

@alexus,

It's been a while since I used the memory profiler. If it returns an image, you can just use the image attachment here. If not, maybe you can try a GitHub gist, a public S3 bucket, or something like that?
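If sharing the raw profile is awkward, one option is to render it as a PNG call graph and attach the image. This sketch assumes a Go toolchain and Graphviz are installed; both file paths are placeholders, not paths from the thread.

```shell
#!/bin/sh
# Sketch: render a downloaded heap profile as a PNG call graph so it can
# be attached to the forum post. Requires the go toolchain and graphviz;
# the profile and binary paths below are placeholders.
PROFILE=/tmp/metricbeat-heap.pprof
BINARY=/usr/share/metricbeat/bin/metricbeat

if command -v go >/dev/null 2>&1; then
  go tool pprof -png "$BINARY" "$PROFILE" > /tmp/metricbeat-heap.png \
    || echo "pprof failed; check that $PROFILE exists" >&2
else
  echo "go toolchain not found; share the raw .pprof file instead" >&2
fi
```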