Configuring logs and monitoring for different Logstash processes running on the same AWS instance

Hi,

We are using systemctl to run 3 Logstash processes, one for each environment (dev, qa, and prod) on an AWS CentOS box. We are unable to figure out how to configure logging for these 3 Logstash processes, or how to use Logstash's built-in monitoring API to monitor the pipeline, JVM usage, etc. Below are the commands that we are using to run each Logstash instance, along with the log configuration.

sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/dev --path.data /tmp/dev/ --path.logs /var/log/logstash-dev.log --log.level error

sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/qa --path.data /tmp/qa/ --path.logs /var/log/logstash-qa.log --log.level error

sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/prod --path.data /tmp/prod/ --path.logs /var/log/logstash-prod.log --log.level info

Additionally, whenever we recreate multiple indices from scratch in an environment, Logstash crashes pretty quickly, before all the data has been reindexed. We have to reindex each index manually, one by one; once the new index is created, the Logstash process for that environment is stable.

Please help us understand the correct way of handling such a scenario.

Thanks!

Isn't --path.logs supposed to point to a directory rather than a file?

We are unable to figure out how to configure logging for these 3 Logstash processes, or how to use Logstash's built-in monitoring API to monitor the pipeline, JVM usage, etc.

You need to use different logstash.yml files so you can run the monitoring API on different ports (and/or network interfaces).
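One way to sketch this (the file locations and port numbers below are illustrative assumptions, not values from this thread): give each environment its own settings directory, and vary only the monitoring port between them.

```yaml
# /etc/logstash/dev/logstash.yml -- hypothetical per-environment settings file
path.data: /tmp/dev
path.logs: /var/log/logstash/dev   # a directory, not a file
log.level: error
api.http.host: 127.0.0.1
api.http.port: 9601                # qa could use 9602, prod 9603
```

Each process would then be started with `--path.settings /etc/logstash/dev` (and likewise for qa and prod) so it picks up its own file. Note that `api.http.port` is the 8.x setting name; older 7.x releases call it `http.port`.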

Additionally, whenever we recreate multiple indices from scratch in an environment, Logstash crashes pretty quickly, before all the data has been reindexed.

Crashes how? What's the error message?

Isn't --path.logs supposed to point to a directory rather than a file?

As per the documentation (Logging | Logstash Reference [8.11] | Elastic), you can specify the log file location using the --path.logs setting. We would like to keep all the good features that come with Logstash's log4j-based logging. Please give an example of how this should be done, based on the commands I posted above.

You need to use different logstash.yml files so you can run the monitoring API on different ports (and/or network interfaces).

We are using systemctl to run 3 different Logstash processes. Can you give an example of the contents of logstash.yml, as well as a naming convention to distinguish which process should use which logstash.yml file?

Crashes how? What's the error message?

Usually a heap dump file is generated, e.g. java_pid5883.hprof. We do not see any logs, as we are running these Logstash processes in the background without any logging in place, using the unit line below:
ExecStart=/bin/sh -c 'sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/dev --path.data /tmp/dev/ --path.logs /var/log/logstash-dev.log --log.level error &'

As per the documentation (Logging | Logstash Reference [8.11] | Elastic), you can specify the log file location using the --path.logs setting.

Yes, and according to Running Logstash from the Command Line | Logstash Reference [8.11] | Elastic, it should be a directory and not a file.
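Adjusted for that, the dev command from the original post might look like the following (the directory layout is just one possible choice):

```shell
# Create a per-environment log directory; Logstash writes files such as
# logstash-plain.log inside whatever directory --path.logs points at.
sudo mkdir -p /var/log/logstash/dev

sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/dev \
  --path.data /tmp/dev/ \
  --path.logs /var/log/logstash/dev \
  --log.level error
```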

We are using systemctl to run 3 different Logstash processes. Can you give an example of the contents of logstash.yml, as well as a naming convention to distinguish which process should use which logstash.yml file?

You can use the existing file as an example; just adjust the HTTP monitoring settings. Or use the same settings file for all instances and adjust the HTTP monitoring via the command-line options. See the page above.
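As a concrete sketch of the second option (unit names and ports below are illustrative assumptions): run each process in the foreground under systemd — backgrounding with `&` inside `ExecStart`, as in the snippet earlier in the thread, prevents systemd from supervising the process or capturing its output — and override only the monitoring port per instance. Note `--api.http.port` is the 8.x flag name; older 7.x releases use `--http.port`.

```ini
# /etc/systemd/system/logstash-dev.service -- hypothetical unit name
[Unit]
Description=Logstash (dev)
After=network.target

[Service]
# Foreground process: systemd supervises it and captures stdout/stderr
# in the journal, so crash output is no longer lost.
ExecStart=/usr/share/logstash/bin/logstash \
    -f /etc/logstash/conf.d/dev \
    --path.data /tmp/dev \
    --path.logs /var/log/logstash/dev \
    --api.http.port 9601
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With qa on 9602 and prod on 9603, each instance's monitoring API can then be queried independently, e.g. `curl -s http://localhost:9601/_node/stats/jvm`.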

Hi Magnus, thanks for your reply. I followed this link and was able to configure logging separately for each environment in which we run Logstash.

Cheers!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.