Filebeat to Logstash to Kafka, or Filebeat to Kafka, for >= 10,000 servers

Hi,
We have a project to fingerprint logs from 10,000 servers, and I am in the process of designing a system for it.
I've been looking at the design guide https://www.elastic.co/blog/just-enough-kafka-for-the-elastic-stack-part1
Filebeat 5.x supports writing directly to Kafka. Do you think this is a better approach than having the Filebeat clients write to Logstash, which writes to Kafka, and then letting Logstash consume from Kafka and index into Elasticsearch?
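For reference, the direct Filebeat-to-Kafka path is just an output section in `filebeat.yml`. A minimal sketch for Filebeat 5.x follows; the broker hostnames, log paths, and topic name are placeholders you would replace with your own:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/*.log   # placeholder: your application log paths

output.kafka:
  # placeholder broker list for your Kafka cluster
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "filebeat-logs"   # placeholder topic name
  required_acks: 1         # wait for leader ack only
  compression: gzip        # reduce network traffic from 10k hosts
```

With this in place each server ships straight to Kafka, and no intermediate Logstash tier is needed on the ingest side.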

Since you want to forward logs via Kafka anyway, the only good reason for beats -> Logstash -> Kafka is if that Logstash instance is going to modify the events before they are written.
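To illustrate that case: if events do need modification before Kafka, the intermediate Logstash tier would look roughly like the sketch below. The fingerprint filter is chosen here only because the original post mentions fingerprinting logs; the port, broker address, and topic name are placeholders:

```
input {
  beats {
    port => 5044    # Filebeat clients connect here
  }
}

filter {
  # example modification: add a SHA1 fingerprint of each log line
  fingerprint {
    source => "message"
    method => "SHA1"
    target => "[@metadata][fingerprint]"
  }
}

output {
  kafka {
    bootstrap_servers => "kafka1:9092"   # placeholder broker
    topic_id => "filebeat-logs"          # placeholder topic
  }
}
```

If no such filtering is needed, this whole tier can be dropped and Filebeat can write to Kafka directly.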

@steffens
Yup, Logstash is going to modify the logs.
I will be testing Beats soon in our environment, and if all goes well (resource utilization), I will deploy it to a few thousand servers. This could be an interesting use case for other folks here.
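For completeness, the consuming side of the pipeline described above (Logstash reading from Kafka and indexing into Elasticsearch) would be a separate Logstash config along these lines; again, the broker, topic, and Elasticsearch host are placeholders:

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092"   # placeholder broker
    topics => ["filebeat-logs"]          # placeholder topic
  }
}

output {
  elasticsearch {
    hosts => ["http://es1:9200"]         # placeholder ES node
  }
}
```

Running several of these consumer instances in the same Kafka consumer group lets you scale indexing throughput horizontally, which matters at the 10,000-server scale discussed here.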

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.