Currently we are only using the main pipeline; processing of the different types of logfiles is separated by conditionals.
I have a few questions about using multiple pipelines:
- Can both pipelines use the same beats input, or does each pipeline need its own?
- How can I decide which pipeline a log line should go through? My only idea is using different inputs, but that would require infrastructure changes (opening ports, etc.).
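  (As a sketch of one possible answer to my own question: if this is Logstash 6.x or later, pipeline-to-pipeline communication might let a single beats input pipeline route events to downstream pipelines by conditionals, without extra ports. The pipeline ids below are made up for illustration.)

  ```yaml
  # pipelines.yml — hypothetical ids, sketch only
  - pipeline.id: beats-ingest
    config.string: |
      input { beats { port => 5044 } }
      output {
        if [log_type] == "apache" {
          pipeline { send_to => ["apache-logs"] }
        } else {
          pipeline { send_to => ["other-logs"] }
        }
      }
  - pipeline.id: apache-logs
    config.string: |
      input { pipeline { address => "apache-logs" } }
      # filters + outputs for this log type go here
  ```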
- What are common use cases for multiple pipelines? So far I have the following in mind:
  - Pipeline 1 processes the beats input, pipeline 2 processes a scheduled jdbc input. That way a blocking / long-running jdbc query will not delay or block the beats pipeline.
  - If I have logs from the production and dev stages (of a monitored application) in the same Elastic stack, then I could assign the prod pipeline 6 CPUs and the dev pipeline 2.
- Are the above ideas correct? Any more ideas?
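For the prod/dev resource split, my understanding is that something like `pipeline.workers` in `pipelines.yml` would express this — the paths below are hypothetical, and the worker count controls filter/output threads rather than a hard CPU reservation:

```yaml
# pipelines.yml — hypothetical paths, sketch only
- pipeline.id: prod
  path.config: "/etc/logstash/conf.d/prod.conf"
  pipeline.workers: 6   # more worker threads for production traffic
- pipeline.id: dev
  path.config: "/etc/logstash/conf.d/dev.conf"
  pipeline.workers: 2   # fewer threads for the dev stage
```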