I have a TCP input that receives messages, a few filters and a single Elasticsearch output. I'm running a small script that sends messages to the LS TCP port every X milliseconds.
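For illustration, a minimal stand-in for that sender script (the host, port, interval and JSON payload here are placeholders, not my exact setup):

```python
# Hypothetical sketch of the test sender: opens a TCP connection to the
# Logstash tcp input and writes one newline-terminated JSON message
# every X milliseconds. Values below are assumptions, adjust as needed.
import json
import socket
import time

LOGSTASH_HOST = "localhost"   # assumed host
LOGSTASH_PORT = 5000          # assumed tcp input port
INTERVAL_MS = 100             # "every X milliseconds"

def main():
    with socket.create_connection((LOGSTASH_HOST, LOGSTASH_PORT)) as sock:
        seq = 0
        while True:
            event = {"seq": seq, "ts": time.time(), "message": "test event"}
            sock.sendall((json.dumps(event) + "\n").encode("utf-8"))
            seq += 1
            time.sleep(INTERVAL_MS / 1000.0)

if __name__ == "__main__":
    main()
```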
After experimenting a bit, I have a few questions:
- Since Logstash persistent queues do not support TCP inputs, is there any mechanism (other than a message queue) that I can implement to guard myself against a long-lasting ES outage?
- I assumed that if Elasticsearch is down, messages would be stored in memory, but this is not reflected when I curl `http://localhost:9600/_node/stats/pipelines`: the `pipelines.main.events.in` count stops climbing as soon as ES is down. Is there another place I should look to see the in-memory queue count? (See the polling sketch after this list.)
- I've used the Redis output in the past and it has a `congestion_threshold` parameter. Would it theoretically be possible to implement the same in the Elasticsearch output? Does that even block the input, or just the output?
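For the second question, this is roughly how I'm watching the counters, assuming the default pipeline id `main` (standard library only):

```python
# Minimal sketch: poll the Logstash node stats API and print the event
# counters for the "main" pipeline (pipeline id assumed; adjust if yours
# differs).
import json
import time
import urllib.request

STATS_URL = "http://localhost:9600/_node/stats/pipelines"

def main():
    while True:
        with urllib.request.urlopen(STATS_URL) as resp:
            stats = json.load(resp)
        events = stats["pipelines"]["main"]["events"]
        print(f"in={events.get('in')} "
              f"filtered={events.get('filtered')} "
              f"out={events.get('out')}")
        time.sleep(5)

if __name__ == "__main__":
    main()
```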
Thanks!