Best approach to 50+ input sources

Hello

I'd appreciate your thoughts on how to approach the following use case:

  • I have to configure a Logstash pipeline to index a table in SQL Server
  • The thing is, although the table structure is the same for every client, the databases are not: each client has its own database in SQL Server
  • The JDBC input plugin requires a connection string, a username, and a password to make the connection; those would be unique for every instance (client) — see the sketch after this list
  • The statement would be the same for all of them
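For context, a single-client pipeline would look roughly like this. This is a minimal sketch: the driver path, host, database name, table, schedule, and credential names are placeholders, not real values.

```
input {
  jdbc {
    # Per-client values: connection string, user, password
    jdbc_driver_library => "/opt/logstash/drivers/mssql-jdbc.jar"   # placeholder path
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://db-host:1433;databaseName=client_one"
    jdbc_user => "logstash"
    jdbc_password => "${CLIENT_ONE_PASSWORD}"   # e.g. resolved from the environment or the Logstash keystore
    schedule => "*/5 * * * *"                   # run every five minutes
    # Shared value: the statement is identical for every client
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "my_table"
  }
}
```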

So, what would be the best approach to this? Will Logstash scale well?

Two approaches I can think of are:

  1. 50 different pipelines in /etc/logstash/pipelines.yml and 50 different pipeline configs, relying on environment variables as much as possible (ideally only the connection credentials would differ) — see the pipelines.yml sketch below
  2. 1 pipeline in /etc/logstash/pipelines.yml, but with 50 jdbc inputs in the pipeline config — see the multi-input sketch below
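For option 1, pipelines.yml would just enumerate the clients (the client names here are hypothetical). One caveat with environment variables: `${VAR}` substitution is resolved from the single Logstash process environment, so every pipeline in the same instance sees the same values. Anything truly per-client would have to live in the per-pipeline config file, or the 50 files could be generated from one template.

```
# /etc/logstash/pipelines.yml
- pipeline.id: jdbc-client-one
  path.config: "/etc/logstash/conf.d/client_one.conf"
- pipeline.id: jdbc-client-two
  path.config: "/etc/logstash/conf.d/client_two.conf"
# ... one entry per client, 50 in total
```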
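For option 2, a single pipeline config would repeat the jdbc block once per client, with a tag (or add_field) per input so the events stay distinguishable downstream. Again a sketch with hypothetical names; note that if the statement tracks :sql_last_value, each input should get its own last_run_metadata_path so the inputs don't share run state:

```
input {
  jdbc {
    jdbc_driver_library => "/opt/logstash/drivers/mssql-jdbc.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://db-host:1433;databaseName=client_one"
    jdbc_user => "logstash"
    jdbc_password => "${CLIENT_ONE_PASSWORD}"
    schedule => "*/5 * * * *"
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
    last_run_metadata_path => "/var/lib/logstash/jdbc_last_run/client_one"  # per-input state
    tags => ["client_one"]
  }
  jdbc {
    # Identical except for connection string, credentials, state path, and tag
    jdbc_driver_library => "/opt/logstash/drivers/mssql-jdbc.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://db-host:1433;databaseName=client_two"
    jdbc_user => "logstash"
    jdbc_password => "${CLIENT_TWO_PASSWORD}"
    schedule => "*/5 * * * *"
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
    last_run_metadata_path => "/var/lib/logstash/jdbc_last_run/client_two"
    tags => ["client_two"]
  }
  # ... 48 more jdbc blocks
}
```

In both options the shared statement could come from a single environment variable (e.g. a hypothetical ${SQL_STATEMENT}), since it is identical everywhere.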

Do you have any other suggestion?

Thank you very much
