Control the volume of data sent via Filebeat

Is there a way to control the volume of data sent via Filebeat? There could be a scenario where a process spins out of control and generates huge volumes of logs/exceptions. We do not want to ship this redundant data to the Elasticsearch cluster and overload it.
A parameter that let us define a threshold on the number of lines or the volume of data shipped per time unit would help us avoid such disasters; a hypothetical sketch of what we have in mind follows.
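
To make the request concrete, here is roughly what such a knob could look like in filebeat.yml. Both `max_lines_per_second` and `max_bytes_per_second` are hypothetical options that do not exist in Filebeat; they only illustrate the kind of threshold we are asking for.

```yaml
# Hypothetical filebeat.yml -- neither throttle option below exists today;
# this only sketches the feature request.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
    max_lines_per_second: 1000     # hypothetical: cap on lines read per second
    max_bytes_per_second: 1048576  # hypothetical: cap on bytes shipped per second

output.elasticsearch:
  hosts: ["localhost:9200"]
```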

There's no way to do this at the moment. Any back pressure that Filebeat encounters will come from Elasticsearch, which will accept events as fast as it can.
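
That said, a few settings that already exist in filebeat.yml can indirectly bound how fast Filebeat ships data, by capping concurrency, buffering, and batch size. This is only a sketch of those existing knobs, not a true per-time-unit rate limit:

```yaml
# Existing knobs that bound throughput indirectly -- none of these
# enforces a lines-per-second or bytes-per-second limit.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
    harvester_limit: 2     # cap how many files are read concurrently

queue.mem:
  events: 4096             # bound how many events Filebeat buffers in memory

output.elasticsearch:
  hosts: ["localhost:9200"]
  worker: 1                # a single bulk worker
  bulk_max_size: 512       # smaller bulk requests per flush
```

With a single worker and small bulk sizes, Elasticsearch ingest speed effectively paces Filebeat, but a runaway process can still push a lot of data over time.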

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.