Filebeat consuming too much CPU

Hi,

I installed Filebeat using Docker on 3 servers.

On each of the servers, CPU usage reaches 99% the moment Filebeat starts harvesting.

Please suggest how we can control the CPU usage.

Can you please help with this?

You need to provide more context.

Please share your filebeat.yml for every server.

Hi @leandrojmp ,

Please find the filebeat.yml below:

```yaml
filebeat.inputs:
  - type: filestream
    id: test-log
    paths:
      - "/var/lib/test.log"
      - "/usr/share/test.log"
    scan_frequency: 5m
    close_inactive: 4m
    ignore_older: 7d
```

There are many log paths, for which I created many filestream inputs.

How many? What are the specs of the Filebeat server? This may be the issue.

There are around 15 filestream inputs, and under each there are around 10 log paths.

The Filebeat monitoring server has 2 CPU cores and 8 GB of memory.

@leandrojmp Any suggestions please?

So clearly, going by what you said about having many inputs, the snippet above is not your full filebeat.yml.

So are you saying that you have ~150 log paths? 15 inputs x 10 paths?
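
Also, a side note on the snippet you shared: scan_frequency and close_inactive are option names from the older log input; the filestream input names these differently. A sketch of the filestream equivalents, assuming Filebeat 8.x option names:

```yaml
filebeat.inputs:
  - type: filestream
    id: test-log
    paths:
      - "/var/lib/test.log"
    # filestream equivalent of the log input's scan_frequency
    prospector.scanner.check_interval: 5m
    # filestream equivalent of the log input's close_inactive
    close.on_state_change.inactive: 4m
    # ignore_older keeps the same name for filestream
    ignore_older: 7d
```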

Hi @stephenb ,

Yes, I have around 150 log paths.

When Filebeat starts up, it will try to read every log file... even the whole history... Is that your desired behavior?

How much actual log file data is there, in terms of GB?

And you said ~150 log paths. Is there more than one file per log path? Do you use any * wildcards in the paths or filenames?

Do you want to ignore older files? Or load them all?

What version of Filebeat and Elasticsearch?

What resources (CPU, RAM, etc.) does the Filebeat host have?

Yes, it has to read all the log files one after the other, but the whole history is not required.

It has to read only new logs each time. But if I have to look at history from the last 7 days, I should be able to get those logs.

There is around 50 GB of log data.

I used the actual file path in most cases; only for 20+ paths did I use * in the path.

I want to load only new logs each time, but if I check history from the last 7 days, I should be able to get it.

Filebeat version: 8.1.3
Elasticsearch version: 8.0.0

The Filebeat monitoring server has 2 CPU cores and around 8 GB of memory.

You can use the ignore_older setting to skip older files; it can be set per input.

Then, if you need to go back further, you should be able to unset it on the input you want.
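
For example, something like this, a sketch reusing the paths from your earlier snippet (the ids are placeholders):

```yaml
filebeat.inputs:
  # This input skips any file not modified in the last 7 days.
  - type: filestream
    id: recent-logs        # placeholder id
    paths:
      - "/var/lib/test.log"
    ignore_older: 7d

  # No ignore_older on this input, so it will read the full history.
  - type: filestream
    id: history-logs       # placeholder id
    paths:
      - "/usr/share/test.log"
```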

But yes, if you start up Filebeat with many "prospectors / harvesters", which is what you are doing, it will consume CPU and RAM... until it catches up on that 50 GB...

How many GB/day once you catch up?

I used ignore_older and set it to 7d.

I didn't check the exact GB it is ingesting, but once the Filebeat container is up, CPU reaches 99% within the next few minutes.

I mean how many GB/day of logs once ingestion catches up... I suspect Filebeat is just trying to catch up, given the many log paths / prospectors and the volume...

It is around 17 GB per day.

Since you only have 2 cores on the host that Filebeat is running on, you may want to try limiting it to one core using the max_procs setting in filebeat.yml (Configure general settings | Filebeat Reference [master] | Elastic).
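
Something like this at the top level of filebeat.yml (a minimal sketch):

```yaml
# Limit Filebeat to one core so the host keeps a core free for other work.
max_procs: 1
```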

However, you may want to enable monitoring on Filebeat to ensure that Filebeat is able to catch up with the amount of logs you aim to ingest.
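
For internal collection, a minimal sketch; by default the metrics go to the same cluster as your output, and the commented hosts value below is only a placeholder if you want to send them elsewhere:

```yaml
# Enable internal monitoring collection in filebeat.yml.
monitoring.enabled: true

# Optional: ship monitoring data to a different cluster (placeholder host).
#monitoring.elasticsearch:
#  hosts: ["http://localhost:9200"]
```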

Hi @hendry.lim ,

Okay sure, I will try using max_procs and check how the CPU behaves.
