Yes, but I must have two pipelines using data from the same network port (beats).
Normally I also have several different pipelines in pipelines.yml.
That's no problem and it works well, but I have always had one input per pipeline.
For various reasons I now need one input feeding two pipelines.
This is not a standard configuration.
However, I want two different pipelines performing completely different parsing.
It should also be assumed that I can't add another output to Filebeat, and I want a very simple and fast solution on the Logstash side.
I guess we could do this with udp/tcp forwarding?
I'm sure you would put the config for each pipeline in its own file, as the lines would get hard to read otherwise, but just to illustrate the main config.
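Something along the lines of the forked-path (distributor) pattern with Logstash's pipeline-to-pipeline plugins should do this; a minimal sketch of pipelines.yml, where the pipeline ids, addresses, port, and the placeholder filter/output plugins are all assumptions:

```yaml
# pipelines.yml — one beats input fans events out to two parsing pipelines
- pipeline.id: beats-intake
  config.string: |
    input { beats { port => 5044 } }
    # every event is copied to both downstream pipelines
    output {
      pipeline { send_to => ["parser_a", "parser_b"] }
    }

- pipeline.id: parser-a
  config.string: |
    input { pipeline { address => "parser_a" } }
    filter { mutate { add_tag => ["parsed_a"] } }   # first parsing logic
    output { stdout { codec => rubydebug } }

- pipeline.id: parser-b
  config.string: |
    input { pipeline { address => "parser_b" } }
    filter { mutate { add_tag => ["parsed_b"] } }   # completely different parsing
    output { stdout { codec => rubydebug } }
```

The beats port is bound only once, by the intake pipeline, which avoids the port-conflict problem entirely; the two parsers receive events over internal pipeline addresses rather than a second network listener.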
Yes.
I tested it.
But I have very extensive parsers described in separate files.
One pipeline is one quite long file with input/filter/output (the filters can be quite extensive).
Such a configuration probably works only if everything is in the one config/pipelines.yml file.
But when I added a record of this type to each pipeline:
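For what it's worth, pipelines.yml does not require inlining everything via config.string; each pipeline can point at its own file with path.config, so the long per-pipeline configs can stay separate (the ids and paths here are assumptions):

```yaml
# pipelines.yml — each pipeline keeps its full input/filter/output in its own file
- pipeline.id: parser-a
  path.config: "/etc/logstash/conf.d/parser_a.conf"
- pipeline.id: parser-b
  path.config: "/etc/logstash/conf.d/parser_b.conf"
```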
The only thing I can think of off the top of my head is that each input can only be used once (as far as I know). So any specific port or virtual address can only be bound by one pipeline; otherwise I would expect resource conflicts between pipelines. I would expect to see that in the Logstash logs, though.
Could you show your file and folder structure and the config/pipelines.yml you use?