@pierhugues
Thanks for your patient and detailed reply!
First of all, Filebeat is great software, and thanks to the Beats team for their awesome work!
As a user, I just hope it keeps getting better.
In my opinion, Filebeat has two important jobs: first, reading/collecting log data; second, routing that data to a specified output. Together, these two parts make up Filebeat's ultimate goal: a high-performance log/data shipper.
At present, Filebeat provides very good data collection, but it can only choose one partner (output) for its whole life, even though it has the ability to work with many partners (outputs) at once.
I thought about the example problems you raised, and I will try to offer some suggestions.
* **Are you sending each event to multiple outputs?**
That should be decided by the configuration: when the Beat starts up, it knows how many data routes to establish. It's not a problem. For now, there is obviously only one route.
* **If a single output is down, what should we do: halt everything or send to the responsible server?**
I think each route is its own group; this is not a many-to-many relationship. So all you need to do is replicate what you already do for the current single route across the other routes. All routes run in parallel, with no crossing/intersection between them.
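To make the idea concrete, here is a purely hypothetical configuration sketch. This syntax does not exist in Filebeat today; the `routes` key, the route names, and the grouping are all invented for illustration. Each route pairs its own prospectors with exactly one output, so routes stay parallel and never intersect:

```yaml
# Hypothetical sketch only -- NOT valid Filebeat syntax today.
# Each route is an independent group: its own inputs, its own output,
# its own delivery pipeline, exactly like today's single route but repeated.
routes:
  - name: app_logs            # invented route identifier
    prospectors:
      - paths: ["/var/log/app/*.log"]
    output:
      logstash:
        hosts: ["logstash.example.com:5044"]
  - name: nginx_logs
    prospectors:
      - paths: ["/var/log/nginx/access.log"]
    output:
      elasticsearch:
        hosts: ["http://es.example.com:9200"]
```

Under this sketch, a failure in one route's output would only pause that route's queue, leaving the other routes unaffected.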
* **Can conditionals be used to send some events to a single output? What are the delivery guarantees of the outputs?**
Same answer: just do what you already do today, once per route.
I noticed you said, "The files on disk acts as a queuing system." Maybe each route needs its own queue?
I am not familiar with the concrete tech stack Filebeat uses; I mainly do business projects in PHP, so my understanding may well be naive, but I really have thought about this seriously.
Also, Filebeat can now work with the Elasticsearch ingest node. Since it provides this feature, people will try to use it, but it is not as powerful as Logstash for handling all kinds of log formats.
So people (me, for example) may want to send some simple logs directly to Elasticsearch and some complex logs to Logstash. If all logs have to go to the Elasticsearch ingest node, the business requirements may not be satisfied; but if all logs have to go to Logstash, why does Filebeat provide the feature to send logs directly to Elasticsearch at all? This is confusing, or rather it shouldn't be a problem at all: it just needs a routing system.
In the configuration file, an input configuration and its output configuration should live in the same group or section.
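As a sketch of what that grouping could look like for the use case above (again, hypothetical: Filebeat does not support pairing inputs with outputs like this, and the `groups` key is invented), simple logs could go straight to the Elasticsearch ingest node while complex logs go to Logstash for heavier parsing:

```yaml
# Hypothetical sketch only -- NOT valid Filebeat syntax today.
# Each group/section pairs an input with its own output.
groups:
  - prospectors:
      - paths: ["/var/log/simple/*.log"]   # logs the ingest node can handle
    output:
      elasticsearch:
        hosts: ["http://es.example.com:9200"]
        pipeline: simple_logs              # ingest node pipeline name (invented)
  - prospectors:
      - paths: ["/var/log/complex/*.log"]  # logs needing Logstash's richer parsing
    output:
      logstash:
        hosts: ["logstash.example.com:5044"]
```

With a layout like this, adding a new destination is just adding another self-contained section, without touching the existing routes.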