Multiple pipelines in logstash

I am unable to start multiple pipelines in Logstash 5.6.2.

It's similar to Logstash Multiple Pipelines Doesn't Work, but in my case the file is correctly named pipelines.yml.

Here's the message that repeats in a loop when I start Logstash:

[2018-03-29T16:43:24,357][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-03-29T16:43:24,361][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-03-29T16:43:52,696][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-03-29T16:43:52,700][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-03-29T16:44:06,877][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-03-29T16:44:06,881][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-03-29T16:44:20,335][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-03-29T16:44:20,340][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-03-29T16:44:33,683][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-03-29T16:44:33,687][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-03-29T16:44:47,454][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-03-29T16:44:47,458][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}

Here is my configuration:

$ grep -v -E "#|^$" /etc/logstash/logstash.yml
 pipeline.output.workers: 2
 queue.type: memory
 queue.page_capacity: 1gb
 path.logs: /var/log/logstash

$ grep -v -E "#|^$" /etc/logstash/pipelines.yml
- pipeline.id: primary
  path.data: /var/lib/logstash
  path.config: /etc/logstash/conf.d/logstash_es.conf
  dead_letter_queue.enable: true
  dead_letter_queue.max_bytes: 4g
- pipeline.id: dlq
  path.data: /var/lib/logstash
  path.config: /etc/logstash/conf.d/logstash_es_dlq.conf

$ ps aux | grep logstash
logstash 2807 188 5.3 4521712 437320 ? SNsl 16:48 0:24 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx2g -Xms2g -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

Am I missing anything? Is the pipelines.yml feature not supported in Logstash 5.6.2? Thanks.

It doesn't look like you have debug logging enabled for Logstash. Add log.level: debug to your logstash.yml; it may give you more valuable diagnostic information. What command are you using to start Logstash?
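For example, a minimal sketch of the change (only the log.level line is the addition; the other lines mirror your existing settings):

```yaml
# /etc/logstash/logstash.yml
log.level: debug              # emit verbose diagnostics while troubleshooting
pipeline.output.workers: 2
queue.type: memory
queue.page_capacity: 1gb
path.logs: /var/log/logstash
```

Remember to revert log.level once you have the information you need, since debug output is very chatty.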

I was able to solve this problem by upgrading to Logstash 6.2.3. However, the dead letter queue directory is now being filled with single-byte files with empty content, which breaks my secondary DLQ pipeline.
Under which circumstances does the DLQ get filled with single-byte empty files? Any thoughts?

I was able to get past this problem by passing the correct DLQ directory. It looks like the default directory for the DLQ changed to /var/lib/logstash/queue/main in version 6.2.3 instead of /var/lib/logstash/dead_letter_queue/main. Initially I set **dead_letter_queue.max_bytes** to 10g and it started throwing the error below:

cannot write event to DLQ: reached maxQueueSize

Then I cleaned up the DLQ manually using

rm -f

Then I restarted Logstash with the correct path to the DLQ directory in my secondary pipeline, which reads from the DLQ directory.
But even now I get the same error:

cannot write event to DLQ: reached maxQueueSize

even though no files are present under /var/lib/logstash/queue/main.

Is there a way I can hard-reset the DLQ cache/memory and start from scratch? It looks similar to https://github.com/elastic/logstash/issues/8794.
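For context, my manual cleanup amounted to something like the sketch below (the default path is an assumption based on the stock path.data layout; adjust it to your installation):

```shell
#!/bin/sh
# Hard-reset sketch for the DLQ. Stop Logstash first so no writer holds
# open segment files, e.g.:
#   sudo systemctl stop logstash

dlq_dir="${DLQ_DIR:-/var/lib/logstash/dead_letter_queue/main}"

# Remove every DLQ segment file. The ":?" expansion aborts the script if
# the variable is somehow empty, guarding against an accidental "rm -rf /*".
rm -rf "${dlq_dir:?}"/*

# Then restart Logstash; it recreates the directory on demand:
#   sudo systemctl start logstash
```

This only clears the on-disk segments; if Logstash keeps any in-memory bookkeeping of the queue size, a restart is needed for it to take effect, which is why I am asking about a proper hard reset.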

Also, in Logstash 6.2.3, is there a way to make dead_letter_queue.max_bytes unlimited (bounded only by disk size) rather than a fixed limit (default 1g)?

Current configuration:

$ grep -v -E "#|^$" /tmp/logstash.yml
pipeline.output.workers: 2
path.logs: /var/log/logstash

$ grep -v -E "#|^$" /tmp/pipelines.yml
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/logstash_es.conf"
  dead_letter_queue.enable: true
  dead_letter_queue.max_bytes: 10g
- pipeline.id: dlq
  path.config: "/etc/logstash/conf.d/logstash_es_dlq.conf"

and the input in the dlq pipeline config:

input {
  dead_letter_queue {
    path => "/var/lib/logstash/queue"
    commit_offsets => true
  }
}
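For reference, here is a fuller sketch of how I believe this input is meant to be pointed at the conventional DLQ layout: path names the top-level dead_letter_queue directory (not the per-pipeline subdirectory), and a pipeline_id option selects the writing pipeline's subdirectory. The pipeline_id option is an assumption on my part; I understand it defaults to "main":

```
input {
  dead_letter_queue {
    # Top-level DLQ directory; the plugin appends the pipeline id itself.
    path => "/var/lib/logstash/dead_letter_queue"
    pipeline_id => "main"     # DLQ written by my primary pipeline
    commit_offsets => true    # remember the read position across restarts
  }
}
```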

Please let me know your views. Thanks for the help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.