Hi,
I have written a bash script that does the following: for each file in the input folder, it feeds the file to Logstash for processing and sends the output to Elasticsearch. Once a file has been processed, it is moved to the processed folder.
#!/bin/bash
if [[ $# -lt 2 ]]; then
    echo "Usage: $0 <PATH_TO_INPUT_DIRECTORY> <PATH_TO_PROCESSED_DIRECTORY>"
    exit 1
fi
INPUT_PATH=$1
OUTPUT_PATH=$2
for filename in "$INPUT_PATH"/*.log; do
    bin/logstash -f config/logstash.conf < "$filename"
    mv "$filename" "$OUTPUT_PATH"
done
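One workaround I have considered is to probe the Elasticsearch endpoint before starting Logstash and skip the run entirely if it is down. This is only a sketch: the `ES_URL` value is a placeholder for my instance, and it assumes `curl` is available on the host:

```shell
# Hypothetical pre-check before invoking logstash from cron.
ES_URL="${ES_URL:-https://localhost:9200}"   # placeholder; set to the real instance

es_reachable() {
    # -s: silent, -k: allow self-signed certs, -f: fail on HTTP errors,
    # -m 5: give up after 5 seconds instead of hanging
    curl -sk -f -m 5 "$1" > /dev/null 2>&1
}

if es_reachable "$ES_URL"; then
    echo "Elasticsearch at $ES_URL is reachable; safe to start logstash."
else
    echo "Elasticsearch at $ES_URL is unreachable; skipping this run." >&2
fi
```

This avoids entering Logstash's reconnection loop in the first place, but it does not help if the instance goes down mid-run, which is why I am asking whether Logstash itself can be made to give up.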
The problem arises when the Elasticsearch instance is down: Logstash gets into an infinite reconnection loop. Is there any way to make Logstash shut down when it fails to reach the Elasticsearch instance?
[WARN ] 2019-11-19 09:45:42.930 [Ruby-0-Thread-4: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://xxxxxx:xxxxxx@xxx.xxx.xxx.xxx:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://xxxxxx:xxxxxx@xxx.xxx.xxx.xxx:9200/][Manticore::ConnectTimeout] connect timed out"}
I must use this method, as it is the only one that allows me to run Logstash on demand via cron. If the ES instance is up, the script runs just fine.
Thanks in advance,
kb_z