Filebeat restarting continuously with high memory usage on version 7.9.0

Hi,

Filebeat is deployed as a DaemonSet on all Kubernetes nodes, and it keeps restarting on specific heavily loaded nodes due to high memory usage.

Filebeat version : 7.9.0

In filebeat.yml, I tried disabling the default matchers (enabled by default) inside the add_kubernetes_metadata processor:

processors:
  - add_kubernetes_metadata:
      default_matchers.enabled: false

After this, memory usage dropped and Filebeat stopped restarting.

Could anyone please help with the following questions:

  1. Does disabling the default matchers have any impact on log processing, on shipment to Logstash/Elasticsearch, or on visualization in the Kibana UI?

  2. Why do the default matchers cause such high memory usage?

  3. In Kibana I can still see log events with metadata even after disabling the default matchers. So what is the actual purpose of the default matchers?

  4. Is there any other way to reduce the memory usage caused by metadata attached to events, other than disabling the default matchers?

Hi Team, Could anyone please help here?

Hello,

Filebeat 7.9.0 is pretty old and no longer supported; you are missing years of updates and bug fixes.

You should upgrade to a supported version and see if the issue persists.

Since you are using Filebeat 7.x, it will be easier to upgrade to Filebeat 7.17.14, which is the most recent supported release on the 7.x branch.

Okay sure..

Asking just for my own knowledge: in 7.9.0, after disabling "default_matchers" in the add_kubernetes_metadata processor, will Filebeat still be able to collect metadata?

I have no idea, as I do not use Kubernetes, but looking at the documentation I would assume that you will not get any metadata.

Just check this part of the documentation:

default_matchers.enabled
(Optional) Enable or disable default pod matchers when you want to specify your own.

If you disable the defaults and do not specify your own matchers, I would not expect any metadata.
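For reference, the documentation shows that when the defaults are disabled you can supply your own matcher. A minimal sketch based on the documented logs_path matcher (the path is the one used in the reference Kubernetes manifests; adjust it for your cluster):

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - container:
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

With this, events whose log.file.path falls under /var/log/containers/ are matched to a container and enriched with pod metadata, without paying for the default matchers.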

But even after disabling default_matchers, I can still see metadata in Kibana. A bit confused here.

Does the kubernetes provider add metadata by default in Filebeat 7.9.0, even without the add_kubernetes_metadata processor, or does it not? Could anyone please confirm this?

Did you disable the default_indexers as well?

According to the documentation, this processor has two building blocks, indexers and matchers, so I'm assuming you need to disable both, as mentioned in the documentation.

This behaviour can be disabled by disabling default indexers and matchers in the configuration:

processors:
  - add_kubernetes_metadata:
      default_indexers.enabled: false
      default_matchers.enabled: false

See if changing this solves your issue; if not, please upgrade to a supported version and check whether the issue still happens.

It may be a bug, this version is 3 years old.

I have only disabled the default_matchers, and the high memory issue is resolved.

I have not provided any custom matchers either, but I am still getting metadata in Kibana.

I need to understand how the metadata is being added to events, and whether disabling the default matchers has any internal impact on log processing.

Please check the linked documentation: add_kubernetes_metadata has two building blocks, indexers and matchers, and you disabled just one of them.

The documentation explains what they are and what kind of metadata each of them can collect.

You don't need to enable it again if you are using filebeat.autodiscover: the kubernetes autodiscover provider enriches events with pod metadata on its own, which would explain why you still see metadata in Kibana.
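For context, a typical hints-based autodiscover setup looks like the sketch below, adapted from the reference Filebeat Kubernetes manifests (NODE_NAME and the container log path are assumptions that depend on your deployment):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          # one log file per container, named with the container ID
          - /var/log/containers/*${data.kubernetes.container.id}.log

When Filebeat runs with a configuration like this, the provider watches the Kubernetes API and attaches pod metadata to events itself, so the add_kubernetes_metadata processor is redundant.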

Please check it and let me know @kait

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.