One of our Business Applications was developed, and is still supported, by an external company. What we would like to do is use Filebeat to send this application's logs to our own Logstash instance as well as the external company's Logstash instance. We have tried adding multiple "output.logstash" sections inside the filebeat.yml file. The config validates, but Filebeat only sends logs to the last configured Logstash instance. My questions are below:
· Is there a workaround to get this working without running two Filebeat instances on one host?
· If this is not yet supported, is it something that will be supported in Filebeat in the future? I see there are a few questions around this functionality on your support site and a suggestion to use Kafka.
Example:

output.logstash:
  # The Logstash hosts
  hosts: ["10.10.1.5:5044"]
  # Network timeout in seconds.
  timeout: 30

output.logstash:
  # The Logstash hosts
  hosts: ["196.2.0.1:5043"]
  # Network timeout in seconds.
  timeout: 30
What we have done for now is to send all the application logs to our own Logstash instance, which then has two outputs: one to our Elasticsearch cluster and one to the support company's Logstash instance.
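A minimal sketch of such a pipeline, assuming the support company's Logstash exposes a matching tcp input with a json_lines codec (the addresses here are examples; a lumberjack output into a beats input on their side would be another option):

output {
  elasticsearch {
    hosts => ["http://10.10.1.20:9200"]   # our own Elasticsearch cluster (example address)
  }
  tcp {
    host  => "196.2.0.1"                  # support company's Logstash, as in the example above
    port  => 5043
    codec => json_lines                   # assumes a tcp input with json_lines on the receiving end
  }
}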
Solving this by having Logstash push to two outputs has the same issue as Filebeat potentially outputting (replicating) to two Logstash instances: if one data sink is down, the other won't see any traffic.
What you're asking for is some form of replication.
Some potential solutions to the problem:
· Run multiple Filebeat instances, each with its own registry file, so there is separate state for each cluster you send traffic to (see the first sketch after this list).
· Have Filebeat push to Kafka and use 2 LS instances/clusters (1 LS per required output) with different consumer groups. The consumer groups decouple the systems (second sketch below).
· Use Redis publish/subscribe (type: channel) to push events. Drawback: if one LS is down, its ES cluster might not get any data (potential data loss); a sketch follows the list.
· Implement some kind of replication: this still couples the systems, but have Filebeat send to LS and LS push to the local ES plus a queue (file, S3, Redis, Kafka, whatever). Have a second LS forward the logs in the queue to the second ES instance.
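If you do go the multiple-instance route, the key is giving each Filebeat its own data directory so the registries don't clash. A sketch, with made-up config and path names:

filebeat -c /etc/filebeat/filebeat-internal.yml -path.data /var/lib/filebeat-internal
filebeat -c /etc/filebeat/filebeat-external.yml -path.data /var/lib/filebeat-external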
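For the Kafka option, a rough sketch (broker address, topic, and group names are placeholders). In filebeat.yml:

output.kafka:
  hosts: ["kafka1.example.local:9092"]
  topic: "app-logs"

Then each Logstash reads the same topic with its own consumer group:

# Logstash feeding our own Elasticsearch cluster
input {
  kafka {
    bootstrap_servers => "kafka1.example.local:9092"
    topics            => ["app-logs"]
    group_id          => "ls-internal"
    codec             => json
  }
}

# Logstash feeding the support company's output
input {
  kafka {
    bootstrap_servers => "kafka1.example.local:9092"
    topics            => ["app-logs"]
    group_id          => "ls-external"
    codec             => json
  }
}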
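And a sketch of the Redis channel variant (host and key are placeholders; remember the data-loss caveat above if a subscriber is down). In filebeat.yml:

output.redis:
  hosts: ["redis.example.local:6379"]
  key: "app-logs"
  datatype: "channel"

And in each Logstash:

input {
  redis {
    host      => "redis.example.local"
    key       => "app-logs"
    data_type => "channel"
  }
}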