Multiple tables as input. Multiple pipelines?


(Justin Kim) #1

Hello,

I'm a complete n00b at Logstash, so please excuse my ignorance.

I have multiple tables that I'd like to read using the JDBC input. They have different schemas, different purposes, etc.

As an end-user consuming this data, I expect each table will end up with its own visualisations, albeit on the same dashboard in Kibana.

Therefore, I'm thinking that I will put them in different indices in elasticsearch - my first question is, am I on the right track thinking this way?

If I do do that, although I'm sure I could configure it all in one config file, I think the configuration will be simpler and more maintainable if I split it into multiple pipelines...

Is there a best-practices guide relating to this? What do you guys do in such cases?

Thanks,

Justin


(Walker) #2

Sounds like you are on the right track. I have no experience with the JDBC input, but reading the documentation makes it seem pretty straightforward. I'd suggest that each table you ingest be given a unique index name, and then in Kibana create a unique index pattern for each.
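As a sketch of that approach, a per-table pipeline config might look like the following. Everything here is illustrative — the connection string, credentials, driver path, table, and index name are placeholders, not settings from this thread:

```conf
# Illustrative pipeline for one table; all names below are placeholders.
input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://dbhost:1433;databaseName=mydb"
    jdbc_user => "logstash"
    jdbc_driver_library => "/opt/drivers/mssql-jdbc.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    # :sql_last_value defaults to the time of the last run,
    # so only rows changed since then are fetched.
    statement => "SELECT * FROM orders WHERE updated_at > :sql_last_value"
    schedule => "*/5 * * * *"   # poll every five minutes
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "orders"           # unique index per table
  }
}
```

In Kibana you would then create one index pattern per index (e.g. "orders").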


(Magnus Bäck) #3

Therefore, I'm thinking that I will put them in different indices in elasticsearch - my first question is, am I on the right track thinking this way?

Yes. If two tables need different mappings for a field with the same name, you'll have to store them in different indexes anyway.

If I do do that, although I'm sure I could configure it all in one config file, I think the configuration will be simpler and more maintainable if I split it into multiple pipelines...

Yes, perhaps. Multiple pipelines is probably the better choice here (unless you need a very large number of them, I suppose), but either will work fine.
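For the multiple-pipelines route, pipelines are declared in pipelines.yml. A minimal sketch, assuming one config file per table (the ids and paths are placeholders):

```yaml
# pipelines.yml — one entry per table's config file; paths are illustrative.
- pipeline.id: orders
  path.config: "/etc/logstash/conf.d/orders.conf"
- pipeline.id: customers
  path.config: "/etc/logstash/conf.d/customers.conf"
```

Each pipeline has its own inputs, filters, and outputs, so a slow query in one table's pipeline doesn't block the others.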


(Justin Kim) #4

Hi Magnus, thanks for replying.

I have another question though -

I'm reading multiple different files (IIS logs and SQL Server logs) using Filebeat. They pose much the same problem as the JDBC input: they'll end up in different indices and contain different formats (and therefore need different filters).

Should I create multiple pipelines, each with its own beats input on a different port? Or would it be better to have one beats input?

Thanks,

Justin


(Magnus Bäck) #5

Either way works, really. Personally I wouldn't bother splitting them, because I'd want them to share some filters, and eventually I'd want to pick up additional log types, at which point the port allocation becomes unnecessary overhead.
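A sketch of that single-input approach: one beats input, with events routed on a custom field (here called log_type, assumed to be set under fields: in each filebeat.yml input; the grok pattern is purely illustrative, not a complete IIS parser):

```conf
input {
  beats {
    port => 5044
  }
}
filter {
  # Route on a custom field set in filebeat.yml, e.g.
  #   fields:
  #     log_type: iis
  if [fields][log_type] == "iis" {
    grok {
      # Illustrative pattern only; real IIS logs need a fuller match.
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{WORD:method} %{URIPATH:page}" }
    }
  }
}
output {
  if [fields][log_type] == "iis" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "iis-logs"
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "sqlserver-logs"
    }
  }
}
```

Adding a new log type later only means adding a new conditional branch, not opening a new port.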


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.