Logstash stopped due to an error in one of the pipeline configurations

We have 10 conf files running in the pipeline. The problem is that if any one of the pipeline configurations has an error, the entire Logstash instance stops, even though the other pipeline configurations have no errors. How do we make sure that errors in one conf file do not affect the other files in the pipeline? The Logstash version is 7.6.

Are all 10 conf files running in a single pipeline, or does each conf file have its own pipeline defined in pipelines.yml?
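For reference, a single pipeline usually points path.config at a glob that loads every conf file together, whereas separate pipelines each get their own entry in pipelines.yml. A minimal sketch of the two layouts (the ids and paths below are hypothetical examples, not your actual setup):

 # One pipeline loading every conf file together
 -
   pipeline.id: main
   path.config: "/data/logstash-7.6.2/config/versa/*.conf"

 # One pipeline per conf file
 -
   pipeline.id: pipeline-one
   path.config: /data/logstash-7.6.2/config/versa/pipeline-one.conf
 -
   pipeline.id: pipeline-two
   path.config: /data/logstash-7.6.2/config/versa/pipeline-two.conf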

Yes, all config files are running in a single pipeline.

Then try splitting the configurations into different pipelines. If you need to, you can also set up pipeline-to-pipeline communication to pass events between them; a sketch follows.
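For illustration, here is a hypothetical pair of pipelines wired together with the pipeline input and output plugins (the ids, virtual address, port, and Elasticsearch host are made-up examples):

 -
   pipeline.id: upstream
   config.string: |
     input { beats { port => 5044 } }
     output { pipeline { send_to => ["downstream-events"] } }
 -
   pipeline.id: downstream
   config.string: |
     input { pipeline { address => "downstream-events" } }
     output { elasticsearch { hosts => ["http://localhost:9200"] } }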

Sorry, they are already set up as different pipelines.
Below is the pipelines.yml configuration:

 -
   path.config: /data/logstash-7.6.2/config/versa/us33-uptime.conf
   pipeline.id: us33-uptime
   queue.type: persisted
   pipeline.workers: 1

 -
   path.config: /data/logstash-7.6.2/config/versa/us33-system.conf
   pipeline.id: us33-system
   queue.type: persisted
   pipeline.workers: 1

 -
   path.config: /data/logstash-7.6.2/config/versa/us33-service-status.conf
   pipeline.id: us33-service-status
   queue.type: persisted
   pipeline.workers: 1

 -
   path.config: /data/logstash-7.6.2/config/versa/R10-kpi-monitoring.conf
   pipeline.id: R10-kpi-monitoring
   queue.type: persisted
   pipeline.workers: 1

 -
   path.config: /data/logstash-7.6.2/config/versa/R10-kpi-monitoring-alarm.conf
   pipeline.id: R10-kpi-monitoring-alarm
   queue.type: persisted
   pipeline.workers: 1

Could you please remove the pipeline.workers attributes and restart Logstash?
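For example, the first entry would then look like this (restart afterwards, e.g. with systemctl restart logstash if Logstash runs as a systemd service; adjust for your installation):

 -
   path.config: /data/logstash-7.6.2/config/versa/us33-uptime.conf
   pipeline.id: us33-uptime
   queue.type: persisted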
