Syslog/UDP/TCP as inputs and files as output

Hello to everyone!
First of all, I'm new to Logstash and have only briefly read some parts of the Logstash docs.
So please don't blame me for stupid questions! =)
Now, let's look into my question.

Preface:
My planned scenario for Logstash is to transform UDP/TCP/syslog messages into files and then read those files with another product.
In my company, we have various network and hardware devices that send syslog data.
We considered separating devices into categories and using different ports to send syslog data.
For example, we have network switches using port 1101 and routers using port 1102.

What I want to achieve:
I want to generate one file per syslog/UDP/TCP port. That helps me categorize the data afterward.
For example, data that comes in via port 1101 is stored in 'switches.log', and data that comes in via port 1102 is stored in 'routers.log'.

What I have in my head now:
After reading the docs and a couple of posts from this forum, it seems to me that I have two different approaches to solving my task.
If there are others, please tell me; I would appreciate it ;).

  1. Conditional output

I have only one pipeline .conf file with something that looks like this:
default-pipeline.conf

input {
    syslog {
        port => 1101
    }
    syslog {
        port => 1102
    }
}

output {
    if [syslog][port][1101] {
        file {
            path => "/path/switches.log"
        }
    }
    if [syslog][port][1102] {
        file {
            path => "/path/routers.log"
        }
    }
}
  2. One pipeline for each port

Instead of using conditionals, I use one pipeline for each port:
pipelines.yml

- pipeline.id: switches
  path.config: ".../switches.conf"
- pipeline.id: routers
  path.config: ".../routers.conf"

switches.conf (for example)

input {
    syslog {
        port => 1101
    }
}

output {
    file {
        path => "/path/switches.log"
    }
}
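For completeness, routers.conf would follow the same shape as switches.conf, just with the other port and file path from the scheme above (a sketch):

```
input {
    syslog {
        port => 1102
    }
}

output {
    file {
        path => "/path/routers.log"
    }
}
```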

Hello and welcome,

It is not clear what exactly your question is. Can you provide more context?

You can use both ways, but separating the pipelines into different files and using pipelines.yml to define the pipelines would be the better approach.

As Leandro said, depending on the data and transformations, pipelines may be slightly better.
If you like, you can add type => "switch" or tags in the input, which will replace [syslog][port][1101] in the output and provide a simpler way to filter the data at the destination. Again, this is just an idea; your approach is also fine.
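A minimal sketch of that idea within a single pipeline (the type values are just examples):

```
input {
    syslog {
        port => 1101
        type => "switch"
    }
    syslog {
        port => 1102
        type => "router"
    }
}

output {
    # Route on the type set by the input instead of guessing at a port field
    if [type] == "switch" {
        file {
            path => "/path/switches.log"
        }
    } else if [type] == "router" {
        file {
            path => "/path/routers.log"
        }
    }
}
```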

Thank you for the idea. I will read about tags.

Thank you for your response!
In general, my question was about approaches that I found.
I just need to check that I think in the right way =)

To go a bit deeper: right now I have a little under 40 different types of devices. If it matters, the total number of devices is around 400.
So I will have to choose what to create: a corresponding number of pipelines, or conditionals.