How to specify pipelines.yml file path

I'm working on Logstash setup in a legacy system with no owners in the organisation.
Using ps, I found the process running the Logstash service:

/usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -XX:HeapDumpPath=/opt/logstash/heapdump.hprof -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log

I know the config file being used by the Logstash service: /etc/logstash/conf.d/ls.conf.

I want to modify the service to use forked pipelines, but what I assume is the home dir, /opt/logstash, doesn't contain any logstash.yml or pipelines.yml.
How do I restart the service to use the pipelines.yml I've written?

Update: /opt/logstash/bin/logstash --version reports v2.2.4.

That is not possible: version 2.2.4 has no support for multiple pipelines (which are configured through pipelines.yml).

Multiple pipelines were only introduced in version 6.0.

The only way to use multiple pipelines is to upgrade to a newer version, which you should do as soon as possible anyway, since the 2.2.x line reached EOL in 2017.
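For reference, once on 6.0 or later, multiple pipelines are declared in pipelines.yml. A minimal sketch, assuming the upgrade is done; the pipeline ids and config paths here are placeholders, not anything from this system:

```yaml
# pipelines.yml (Logstash 6.0+) -- illustrative only
- pipeline.id: rabbitmq-fork
  path.config: "/etc/logstash/conf.d/rabbitmq.conf"
- pipeline.id: kafka-fork
  path.config: "/etc/logstash/conf.d/kafka.conf"
```

Each entry runs as an independent pipeline with its own config file; settings such as pipeline.workers can be overridden per entry.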

Thanks for responding. I'm afraid of all the breaking changes involved in jumping 5+ major versions.

Also, how do I find the .sincedb for 2.2.4?

How should I plan for the upgrade? I'm reading a ../.*txt input using Filebeat + Logstash.

Working with the Elastic stack and being afraid of breaking changes is not a good match :laughing: Elastic have no fear of introducing breaking changes when they think it provides enough benefit. That said, I realize you have been given a task to complete.

What are you trying to do with multiple pipelines? The distributor and collector patterns can be implemented in a single pipeline using if-else blocks in the filter and/or output sections. A forked-path pattern can be implemented using a clone filter plus an if-else based on the type added by the clone filter.
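A sketch of that forked-path idea in a single pipeline, which works even on old versions. The plugin settings below are illustrative, not taken from this system's ls.conf; the clone filter sets the copy's type to the clone name, which the conditionals then branch on:

```conf
input {
  beats { port => 5044 }
}
filter {
  # Creates one copy of each event with type "for_kafka";
  # the original event keeps its original type.
  clone { clones => ["for_kafka"] }
  if [type] == "for_kafka" {
    # Filters applied only to the Kafka copy
    mutate { add_tag => ["kafka"] }
  } else {
    # Filters applied only to the original (RabbitMQ) event
    mutate { add_tag => ["rabbitmq"] }
  }
}
output {
  if [type] == "for_kafka" {
    kafka { topic_id => "example-topic" }
  } else {
    rabbitmq {
      host          => "localhost"
      exchange      => "example-exchange"
      exchange_type => "direct"
    }
  }
}
```

Note that both branches still run in the same pipeline, so a blocked output stalls both forks; that is the output-isolator limitation mentioned below.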

The output-isolator pattern cannot be implemented in old versions.

I'm working with a single input source (file path => "/abc/.*txt") but want to use two different filter plugins for two different outputs: in the future I'm going to downgrade the RabbitMQ output and persist the Kafka output. So I want to implement a forked path.

I should upgrade the system, since we're doing an architecture overhaul of the downstream application anyway. I just don't know what to tick off before upgrading. I thought I should read the sincedb and provide its path to the new installation, but I can't find a sincedb anywhere on the system!

Update: Filebeat (v7.8) is actually reading the input and using 0.0.0.0:5044 as its Logstash output.

If I stop the Logstash service, will I lose events from Filebeat? Or will the last-read checkpoint be persisted and picked up by the new Logstash?
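Since Filebeat is doing the reading here, there is no Logstash sincedb to migrate: Filebeat tracks read offsets in its own on-disk registry under its data path (commonly /var/lib/filebeat/registry on DEB/RPM installs, though your path may differ). While Logstash is down, Filebeat keeps retrying and resumes from the registry once Logstash is back, so events in the files should not be lost. A hedged fragment of the relevant filebeat.yml pieces, with paths as examples only:

```yaml
# filebeat.yml (7.x) -- illustrative fragment, paths are examples
filebeat.inputs:
  - type: log
    paths:
      - /abc/*.txt            # the single input source from this thread (glob form)
output.logstash:
  hosts: ["localhost:5044"]   # whichever host:port your Logstash beats input listens on
# Read offsets are kept in the registry under path.data,
# e.g. /var/lib/filebeat/registry/filebeat/ on package installs.
```

The caveat is log rotation: if a file is rotated away and deleted while Logstash is down for a long time, Filebeat can lose the chance to read the remainder, so keep the outage shorter than your rotation window.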

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.