There's a terrific way to do this.
You should make a duplicate of the /etc/logstash directory somewhere, or at least copy log4j2.properties, logstash.yml, and startup.options to a new directory.
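For example, a minimal sketch of that copy step, assuming a hypothetical new settings directory named /etc/logstash_in:

    # create a settings directory for the second instance (the name is illustrative)
    sudo mkdir -p /etc/logstash_in
    sudo cp /etc/logstash/log4j2.properties /etc/logstash/logstash.yml /etc/logstash/startup.options /etc/logstash_in/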
Edit the configuration in logstash.yml and startup.options so that path.data and path.logs are distinct from those of the original instance; otherwise the two will collide. I use a subdirectory of /var/log/logstash. You also need to define path.config, which is the path to this instance's Logstash pipeline configuration files (like /etc/logstash/conf.d). If you plan on monitoring Logstash, you may want to manually set http.port and/or http.host so you know which instance is which. If you do not set path.data to be different, plugins that persist data there can collide, though that will depend on your configuration.
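As a sketch, the copied logstash.yml might then contain something like the following; the paths and port are illustrative, and 9601 simply avoids the 9600 default used by the first instance:

    # keep data and logs separate from the default instance
    path.data: /var/lib/logstash_in
    path.logs: /var/log/logstash/logstash_in
    # pipeline configuration files for this instance
    path.config: /etc/logstash_in/conf.d
    # pin the monitoring API so you can tell the instances apart
    http.host: "127.0.0.1"
    http.port: 9601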
In startup.options, you at least need to define LS_SETTINGS_DIR to point to the directory where you put logstash.yml and the other files, plus SERVICE_NAME and SERVICE_DESCRIPTION (I use logstash_in for both options on one instance, and logstash_out for both on my other instance). If LS_HOME is the package directory of /usr/share/logstash, then you can either hard-code it or leave it blank.
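Continuing the sketch, the relevant lines in the copied startup.options might look like this (values are illustrative):

    # where this instance's logstash.yml, log4j2.properties, etc. live
    LS_SETTINGS_DIR=/etc/logstash_in
    # systemd service name and description for this instance
    SERVICE_NAME="logstash_in"
    SERVICE_DESCRIPTION="logstash_in"
    # package install location; hard-code it or leave it blank
    LS_HOME=/usr/share/logstash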
Once you've made these edits in the new directory, you can create another systemd service (like the one you already have) by running:
/usr/share/logstash/bin/system-install /path/to/settings/dir
You should then be able to run systemctl status ${SERVICE_NAME}, substituting whatever you used for SERVICE_NAME, and see the status.
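For example, with SERVICE_NAME set to logstash_in as in the sketch above:

    sudo systemctl start logstash_in
    systemctl status logstash_in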
Sorry for the rough instructions. Haven't made the blog post detailing this process yet.
UPDATE: This method can still be used, though its applications may be more limited. The suggested method in 6.x is to use pipelines.yml and have a single instance of Logstash run multiple, separate pipelines in parallel. With this approach, you don't need separate JVMs to have independent pipelines. Of course, if you do want separate JVMs, the description above still works.
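As a rough sketch of that approach, a pipelines.yml with two independent pipelines might look like this (the pipeline IDs and config paths are illustrative):

    # each entry runs as a separate pipeline inside the same JVM
    - pipeline.id: ingest
      path.config: "/etc/logstash/conf.d/ingest/*.conf"
    - pipeline.id: output
      path.config: "/etc/logstash/conf.d/output/*.conf"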