Best Practices for Filebeat

Dear Elastic Team,

I want to know the best practices for Filebeat configuration. There is one production server where I have to set up Filebeat. I want to configure Filebeat in such a way that when I start the service, it won't consume a high volume of CPU and memory. I don't want Filebeat to harm the production server in any way.

The logs that Filebeat will ship amount to around 4 GB per day. I need your guidance here.

Thanks in Advance !!

You didn't say if your server is Linux or Windows.

The default Linux config shouldn't have any performance issues shipping 4 GB a day.

Install from packages and use systemctl to start the service at boot. Have a reasonable test environment before making production changes. If you have a fleet of similar servers, configure all with something like Ansible to ensure the desired configuration state.
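As a starting point, here is a minimal `filebeat.yml` sketch for that kind of low-impact setup. The log path and output host below are placeholders, not values from this thread, so adjust them to your environment:

```yaml
filebeat.inputs:
  - type: log                    # newer 7.x releases also offer the "filestream" input
    enabled: true
    paths:
      - /var/log/myapp/*.log     # placeholder path; point at your application's logs

output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]   # placeholder host

# Leaving the registry, queue, and logging settings at their package
# defaults means a systemctl-managed restart resumes where it left off.
```

At roughly 4 GB/day (about 46 KB/s on average), the default queue and bulk settings are normally more than enough headroom.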

"Not harm the server in any way" is open to interpretation. No matter what, if something happens on that server after you install Filebeat, in many installations you will be blamed even if it's not related :slight_smile:

Many thanks, Rugen, for the response :slightly_smiling_face::+1:

To be on the safer side, though, I have set `max_procs` to 1. I hope it will still be able to ship the data smoothly.
One more question: how much data can Filebeat ship in the following cases?

  1. If `max_procs` is set to 1.
  2. If `max_procs` is left at its default.

By the way, it's a Linux server running Filebeat v7.

Thanks again...

I've never changed that option, but it might act as a throttle if you are harvesting multiple files or a burst of events occurs.
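For reference, the option is spelled `max_procs` in `filebeat.yml`. It caps the number of operating-system threads the Go runtime will use (by default it matches the number of cores), so it bounds CPU usage rather than setting a throughput limit directly. A sketch of the cautious setup described above:

```yaml
# filebeat.yml (fragment)
max_procs: 1   # limit Filebeat to one OS thread to bound CPU usage
```

There is no fixed GB/day figure for either case; at ~4 GB/day a single thread is typically sufficient, but the only reliable numbers come from testing with your own log volume and hardware.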

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.