Currently we are only using the main pipeline; processing of the different types of logfiles is separated by conditionals.
I have a few questions about using multiple pipelines:
Can both pipelines use the same beats input, or does each pipeline need its own?
How can I decide which pipeline a log line should use? My only idea is using different inputs, but that would require infrastructure changes like opening ports, etc.
What are common use cases for using multiple pipelines? I have the following in mind:
Pipeline 1 processes the beats input, pipeline 2 processes a scheduled jdbc input. Therefore a blocking / long-running JDBC query will not delay or block the beats pipeline.
If I have logs of the production and dev stages (of a monitored application) in the same Elastic Stack, then I could set pipeline prod to 6 CPUs and pipeline dev to 2 CPUs (see the sketch after this list).
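Something like this in pipelines.yml is roughly what I have in mind, assuming pipeline.workers is the right knob for limiting how much CPU a pipeline gets (the ids and paths are made up):

    - pipeline.id: prod
      path.config: "/etc/logstash/conf.d/prod.conf"
      pipeline.workers: 6
    - pipeline.id: dev
      path.config: "/etc/logstash/conf.d/dev.conf"
      pipeline.workers: 2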
Can both pipelines use the same beats input, or does each pipeline need its own?
They need their own inputs. The whole point of the support for multiple pipelines is that they don't share any inputs, outputs, or filters.
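For example, a minimal pipelines.yml declaring two fully separate pipelines could look like this (the ids and config paths are just placeholders):

    - pipeline.id: beats
      path.config: "/etc/logstash/conf.d/beats.conf"
    - pipeline.id: jdbc
      path.config: "/etc/logstash/conf.d/jdbc.conf"

Each file then contains its own input, filter, and output sections.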
Pipeline 1 processes the beats input, pipeline 2 processes a scheduled jdbc input. Therefore a blocking / long-running JDBC query will not delay or block the beats pipeline.
A long-running JDBC query won't block the beats input since they run in separate threads anyway, but they could compete for the same filters and outputs, so that could be a reason to split into multiple pipelines.
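As a sketch of that split (the port, connection details, and SQL statement below are placeholders, not a tested setup):

    # beats.conf - its own input; filters and output omitted
    input {
      beats {
        port => 5044
      }
    }

    # jdbc.conf - polls on a schedule, independently of the beats pipeline
    input {
      jdbc {
        jdbc_driver_library => "/opt/jdbc/postgresql.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        jdbc_connection_string => "jdbc:postgresql://localhost:5432/appdb"
        jdbc_user => "logstash"
        statement => "SELECT * FROM app_logs"
        schedule => "*/5 * * * *"
      }
    }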
If I have logs of the production and dev stages (of a monitored application) in the same Elastic Stack, then I could set pipeline prod to 6 CPUs and pipeline dev to 2 CPUs.
I don't know if I got this correctly.
What do you mean by "don't share any outputs or filters"?
Is the same output/filter identified by its id?
I mean, can I push events from two pipelines to the same ES? Of course each pipeline config has its own output configuration (but both configs are identical).
When I have two pipelines which have a copy of the same configuration set, and I've given a custom id to my filter plugins (e.g. http_grok), will the filter plugins of both pipelines be independent? I hope it is clear what I am asking; the snippet below shows the kind of filter I mean.
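For illustration, a filter like this would appear verbatim in both pipeline configs (the pattern and field are just examples):

    filter {
      grok {
        id => "http_grok"
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
    }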
I mean, can I push events from two pipelines to the same ES?
Of course you can.
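For example, both pipeline configs could contain an identical output block like this (the host and index name are placeholders):

    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "app-logs-%{+YYYY.MM.dd}"
      }
    }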
When I have two pipelines which have a copy of the same configuration set, and I've given a custom id to my filter plugins (e.g. http_grok), will the filter plugins of both pipelines be independent?