Filebeat: large JSON log causes OOM kill by k8s

Hi,

I've run into a problem with JSON logs and the max_bytes limit.

filebeat.yml

    # 100KB
    max_bytes: 102400

    json.message_key: message
    json.add_error_key: true

I run Filebeat on k8s with a memory limit of 200Mi.
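
The relevant part of the pod spec looks roughly like this (the container spec is trimmed; only the 200Mi limit is my actual setting):

    resources:
      limits:
        memory: 200Mi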

When one large JSON log arrives (a single line, about 50 MB, in JSON format), Filebeat is killed by k8s for exceeding the memory limit, then rescheduled, started, and killed again in a loop.

I read the docs: is the processing order decode JSON -> multiline -> max_bytes?

Is there any other way to avoid this, for example truncating the line to max_bytes before JSON decoding?

thanks!

Yes, it's like that: JSON decoding happens first, then multiline, and max_bytes is only applied at the end, so the whole line has to be read into memory before it can be truncated.

You can create an enhancement request for this problem and see if we can come up with a better solution.

Thank you! It helps a lot!

I have added some code to support a maxLineBytes limit (reading the maxLineBytes value from the config is not implemented yet).
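
To illustrate what I mean, the config wiring I have in mind would look roughly like this (max_line_bytes is a hypothetical option name here, it does not exist in Filebeat today):

    filebeat.inputs:
      - type: log
        # hypothetical option: truncate a raw line to this many bytes
        # before JSON decoding, so one huge single-line log cannot exhaust memory
        max_line_bytes: 102400
        json.message_key: message
        json.add_error_key: true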

I have already done some manual testing and deployed it to our testing environment.

I'm wondering whether this is a common issue, and whether I should take more time to finish it and open a PR.
