Hello,
We have an ecosystem consisting of several Filebeat services configured on several virtual machines, each of them ingesting events either from files or from Azure Event Hubs. Those Filebeat services send the events to Logstash, which finally ingests them into our Elastic Cloud deployments. Some Filebeat services are installed on the same machine as Logstash. All machines are Windows-based.
All Filebeat services process events for several sources, corresponding to several of our clients.
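For context, here is a simplified sketch of one of our Filebeat instances (client names, paths, and hub names are illustrative, not our real configuration):

```yaml
# One Filebeat instance handling sources for two different clients.
filebeat.inputs:
  # Log files for "client-a"
  - type: filestream
    id: client-a-logs
    paths:
      - 'C:\logs\client-a\*.log'

  # Azure Event Hub events for "client-b"
  - type: azure-eventhub
    eventhub: 'client-b-hub'
    consumer_group: '$Default'
    connection_string: '${EVENTHUB_CONNECTION_STRING}'
    storage_account: 'clientbcheckpoints'
    storage_account_key: '${STORAGE_ACCOUNT_KEY}'

# Events from all inputs are forwarded to Logstash.
output.logstash:
  hosts: ['logstash-host:5044']
```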
As this infrastructure is shared between our clients, we would like the traffic generated by some of them not to impact the others.
Is there any way to limit the resource consumption of each ingestion flow in Filebeat, so that the flows run "isolated", never consume more resources than expected, and therefore don't impact the ingestion rate of other clients?
We would like to know whether such a resource limit can be implemented for both the log file and Azure Event Hub inputs.
Moreover, is there any way to enforce such a hard limit in Logstash as well, at the pipeline level?
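To illustrate what we mean by "pipeline level": we could separate clients into their own pipelines in `pipelines.yml`, roughly as below (ids and paths are illustrative). We are aware of per-pipeline settings such as `pipeline.workers` and `pipeline.batch.size`, but as far as we understand these tune throughput rather than impose a hard resource cap:

```yaml
# pipelines.yml: one Logstash pipeline per client.
- pipeline.id: client-a
  path.config: "C:/logstash/conf/client-a.conf"
  pipeline.workers: 2
  pipeline.batch.size: 125

- pipeline.id: client-b
  path.config: "C:/logstash/conf/client-b.conf"
  pipeline.workers: 2
```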
Thank you very much for your help.
Best regards,
Roberto Rodríguez.