Multiple Pipeline - Single or multiple events?

I was able to configure multiple pipelines in my Logstash setup, but I have a general question about practice.

  1. The source is a Redis cache, but the cache key is different for each type of event.
  2. The pipeline filters are different for each type of event.
  3. There are two choices: a. multiple pipelines, providing more control over each event type; b. a single pipeline for all log types, handled with if clauses in the filter/output sections.
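To make the setup concrete, a minimal sketch of what one per-key pipeline's input might look like, assuming a Redis list source (the host, key name, and file name below are hypothetical):

```
# syslog.conf (hypothetical) - one pipeline reads exactly one Redis key
input {
  redis {
    host      => "127.0.0.1"       # assumed local Redis
    data_type => "list"
    key       => "syslog-events"   # hypothetical key name
  }
}
```

With option (a), each event type would get its own small file like this; with option (b), a single pipeline would need one redis input block per key, plus conditionals downstream.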

The blog post on multiple pipelines concludes: "pipelines allows you to get more out of a single Logstash instance, giving you the flexibility to process separate event flows without having to work around the constraint of a single pipeline."

Given that the input is the same endpoint but with different keys, should this be considered separate event flows (and so use multiple pipelines), or the same input (and use if/else in a single main pipeline)? The event rates per key do differ, i.e. events/sec.

Any thoughts/inputs are appreciated.

NOTE: The multiple keys in the source are by design, to differentiate between inputs and to accommodate long-term changes.

You haven't given much detail, but given what you wrote, personally I would use multiple pipelines.

Yeah, it's a bit difficult to understand what you are trying to do; do you have some examples? I'd say, if the data is structurally different (syslog vs. XML, for example), you'd want to use multiple pipelines. If the data is structurally the same, conditional logic in a single pipeline should suffice.

Hi badger/walker,

Thanks for the response and sorry for late reply.

The incoming logs are structurally different and originate from system logs, Windows event logs, file-based logs, etc., all sent to Redis.

Karthik R

You COULD do it all in a single pipeline but I think it would get pretty complex. If you had a separate input for each, you could apply a tag on that data at the input stage and then use an if statement to perform actions based on the tag. Personally, I'd just separate them out into different pipes.

Hi Walker,


The input key is different for each type of log, viz. syslog auth/audit etc. will have different keys, and each file-based log will have its own key.

My only concern with a single pipeline is how to control throttling by key. For example, system-related logs (metrics, system, etc.) will send more data than file-based logs, which are custom. Multiple pipelines give me the opportunity for more control. Moreover, I can avoid the IF/ELSE blocks, since each pipeline is independent.
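For what it's worth, per-pipeline tuning is one concrete way multiple pipelines give that control: each entry in pipelines.yml can carry its own worker and batch settings. A rough sketch, where the ids, paths, and numbers are made up purely for illustration:

```
# pipelines.yml (hypothetical ids/paths; values illustrative only)
- pipeline.id: system-metrics                       # high-volume stream
  path.config: "/etc/logstash/conf.d/system.conf"
  pipeline.workers: 4
  pipeline.batch.size: 250
- pipeline.id: file-logs                            # low-volume custom stream
  path.config: "/etc/logstash/conf.d/filelogs.conf"
  pipeline.workers: 1
  pipeline.batch.size: 125
```

In a single pipeline, all streams would share one set of these settings, so a bursty key could starve the others.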

Karthik R

I'm not sure what you mean by different keys. I was thinking of something along the lines of the below. I think we're arguing semantics at this point, though, because we've both agreed that multiple pipelines would be better, lol.

input {
  beats {
    port => 5044                  # example port
    tags => [ "beats" ]
  }
  syslog {
    port => 5514                  # example port
    tags => [ "syslog" ]
  }
  file {
    path => "/var/log/app/*.xml"  # example path
    tags => [ "XML" ]
  }
}

filter {
  if "beats" in [tags] {
    # filters for beats in here
  }
  if "syslog" in [tags] {
    # filters for syslog here
  }
  if "XML" in [tags] {
    # filters for XML files here
  }
}

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.