Input blocking when output is unavailable

I have a TCP input that receives messages, a few filters, and a single Elasticsearch output. I'm running a small script that sends messages to the LS TCP port every X milliseconds.
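For context, a minimal sketch of that kind of sender (the in-process listener below just stands in for the LS tcp input with a line codec; the port, message contents, and interval are placeholders, not my actual values):

```python
import socket
import threading
import time

received = []

def listener(server_sock):
    # Accept one connection and collect newline-delimited messages,
    # mimicking a Logstash tcp input with a line codec.
    conn, _ = server_sock.accept()
    with conn:
        buf = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                received.append(line.decode())

def send_messages(host, port, count, interval_ms):
    # Send `count` messages, one every `interval_ms` milliseconds.
    with socket.create_connection((host, port)) as sock:
        for i in range(count):
            sock.sendall(f"message {i}\n".encode())
            time.sleep(interval_ms / 1000.0)

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port standing in for the LS TCP port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=listener, args=(server,), daemon=True)
t.start()
send_messages("127.0.0.1", port, count=5, interval_ms=10)
t.join(timeout=2)
server.close()
print(received)
```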

After experimenting a bit, I have a few questions:

  1. Since Logstash persistent queues do not support TCP inputs, is there any mechanism (other than a message queue) that I can implement to 'guard' myself against long-lasting ES downtime?

  2. I assumed that if Elasticsearch is down, messages would be buffered in memory. However, this is not reflected when I curl http://localhost:9600/_node/stats/pipelines: the pipelines.main.events.in count stops climbing as soon as ES goes down. Is there another place I should look to see the in-memory queue count?

  3. I've used the redis output in the past, and it has a congestion_threshold parameter. Is it theoretically possible to implement the same in the elasticsearch output? Does that even block the input, or just the output?
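On question 1: if I'm reading the persistent queue docs correctly (and I may not be), the caveat for inputs without a request-response protocol, like tcp, is that Logstash cannot acknowledge receipt back to the sender, not that the queue won't buffer events during ES downtime. So I'm considering just enabling it anyway, something like this (sizes and paths illustrative, not a recommendation):

```yaml
# logstash.yml -- hedged sketch, values are placeholders
queue.type: persisted
queue.max_bytes: 4gb              # disk budget before backpressure kicks in
path.queue: /var/lib/logstash/queue
```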
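On question 2, here's how I've been reading the stats, shown against an abridged, made-up response (the field values and the queue section are hypothetical, just to illustrate where I'm looking):

```python
import json

# Abridged, hypothetical sample of GET /_node/stats/pipelines
# (real responses carry many more fields).
sample = json.loads("""
{
  "pipelines": {
    "main": {
      "events": {"in": 1200, "filtered": 1200, "out": 1150},
      "queue": {"type": "memory", "events_count": 50}
    }
  }
}
""")

events = sample["pipelines"]["main"]["events"]
# in - out approximates what is buffered inside the pipeline:
# the in-memory queue plus events held by filter/output workers.
buffered = events["in"] - events["out"]
print(buffered)  # 50 with the sample numbers above
```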
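And to illustrate what I mean in question 3 by "blocking": a toy model of a small fixed-size buffer sitting between the input and a stalled output. Once the buffer fills, the producer (playing the input) blocks, which is the behavior I think I'm seeing:

```python
import queue

# A tiny bounded buffer standing in for the fixed-size in-memory queue
# between the input and the pipeline workers.
buf = queue.Queue(maxsize=2)

blocked = False

def producer():
    # Mimics the tcp input handing events over while no consumer runs
    # (the "output" is down).
    global blocked
    for i in range(5):
        try:
            buf.put(i, timeout=0.1)
        except queue.Full:
            blocked = True  # backpressure has reached the input
            return

producer()
print(blocked)
```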

Thanks!

bump