I have a multi-node Logstash tier where the deployment process distributes identical configuration to every node. This works fine where all of the Logstash config is passive and content is pushed in from syslog and Beats sources.
The question here is: what is the approach for using a jdbc input source? We have servers that hold some log information in SQL Server tables, and it's possible to define a jdbc input to retrieve those rows. One presumes that if this input config is distributed to all Logstash nodes, they'll each poll independently and submit the same rows, resulting in duplication. If I distribute the input config to just one node, that node becomes a single point of failure.
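For concreteness, here is a sketch of the kind of jdbc input I mean (all hostnames, paths, table/column names, and the schedule are hypothetical placeholders):

```conf
input {
  jdbc {
    jdbc_driver_library => "/opt/drivers/mssql-jdbc.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://dbhost:1433;databaseName=Logs"
    jdbc_user => "logstash"
    jdbc_password => "secret"
    schedule => "* * * * *"   # poll once a minute
    # track the highest id seen so each run fetches only new rows
    use_column_value => true
    tracking_column => "id"
    statement => "SELECT id, logged_at, message FROM AppLog WHERE id > :sql_last_value ORDER BY id"
  }
}
```

Note that the `:sql_last_value` tracking state is kept locally on each node (in `last_run_metadata_path`), which is why I'd expect every node running this config to poll and ship the same rows independently.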
What's the usual pattern for this type of multi-node Logstash setup where the configuration is actively polling rather than passively receiving?