This question actually applies not only to Logstash but to all components of the ELK stack.
Due to some constraints I have, I'm unable to run the plugin-install command for Logstash et al. on each of my individual servers, so that each server would have its own copy of the component plus some plugins.
These servers do have a shared mount, though. Would it be OK to install Logstash et al. with plugins onto this shared mount, and then spin up Logstash et al. on each server from the installation folder on that mount?
Besides configuring Logstash et al. to write logs to a server-specific location, is there anything else I need to worry about when spinning up Logstash from a shared installation location?
This might work. Follow some or all of these instructions, which describe how to run multiple Debian services as separate LS instances from one installation. You won't need the last part, i.e. from SERVICE_NAME and SERVICE_DESCRIPTION onward.
Remember to use local disk for the LS data and log folders. Config can live on the shared mount, but keep a separate config per server.
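Concretely, a per-server launch might look something like this. All paths and the hostname-based config layout are illustrative assumptions, not from the thread; the `--path.settings`, `--path.data` and `--path.logs` flags exist in Logstash 5.x and later.

```shell
# All paths below are examples, not from the original thread.
SHARED=/mnt/shared/logstash   # shared-mount install (can be read-only)
NODE=$(hostname -s)           # used here to pick a per-server config dir

# --path.settings: per-server settings dir (logstash.yml etc.); may sit on the mount
# --path.data:     must be local disk -- sincedb, persistent queue, node state live here
# --path.logs:     local log dir, so servers don't clobber each other's logs
"$SHARED/bin/logstash" \
  --path.settings "$SHARED/config-$NODE" \
  --path.data /var/lib/logstash \
  --path.logs /var/log/logstash
```

The key point is that anything Logstash writes at runtime stays on local disk; only the read-only installation and the per-server config directories live on the shared mount.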
Also, check that bin/logstash.lib.sh detects the shared-mount paths correctly in each server's local context.
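A quick way to check, sketched below with an example mount path: confirm from each server that the wrapper scripts resolve to the same real path on the shared mount, with no stale symlinks or automounter indirection in between.

```shell
# Illustrative sanity check; the mount path is an example, not from the thread.
SHARED=/mnt/shared/logstash

# Resolve symlinks: every server should print the same canonical path.
readlink -f "$SHARED/bin/logstash.lib.sh"

# Confirm the directory really is the shared mount, not a local stand-in.
df -h "$SHARED"
```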
Good luck - to my knowledge this has never been tried.
OTOH - why not use Ansible, Chef or Puppet to install a preloaded copy of LS?
We do use both Ansible and Puppet to set up our infra, and I do want local copies of LS on each server, but bureaucracy puts even simple things out of reach.