Logstash blocks backend application server

Server configuration:
Glassfish -> Log4j socket appender -> Logstash log4j input -> Logstash elasticsearch output
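
For reference, a minimal sketch of such a pipeline config, assuming the default log4j input port (4560) and reusing the Elasticsearch address from the error message below; both are illustrative, not the poster's actual settings:

```
input {
  log4j {
    port => 4560        # the Log4j SocketAppender connects here
  }
}
output {
  elasticsearch {
    hosts => ["https://efi02:9820"]
  }
}
```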

Everything works as expected, but if Elasticsearch goes down, my application server becomes blocked. Nobody can work with Glassfish until Elasticsearch is started again. Moreover, it is pretty difficult to find out why the application server has stopped working.
I see the following messages in the Logstash log file:
{:timestamp=>"2016-08-04T10:48:07.517000-0400", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["https://efi02:9820"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :level=>:error}

Is there any way to ignore incoming messages from the application server, or to redirect them to another output, until Elasticsearch is up again?

Why not put it into a file, or something, instead?

As you have noticed, TCP-based inputs tend not to handle back pressure well, and this can cause problems for the applications sending data. This is why we often recommend introducing some kind of buffering mechanism into the pipeline. It can be as simple as writing to a file, as @warkolm suggests, since one can stop reading the file while processing is blocked and then catch up once the issue has been cleared. We also see various types of message queues used for this, e.g. Redis, RabbitMQ or Kafka, as they can buffer messages and allow the Logstash instance collecting data to continue uninterrupted.
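
A hedged sketch of the file-buffered approach, split across two Logstash instances; the spool path and port are assumptions for illustration. The collector accepts log4j events and only ever writes to local disk, so it keeps accepting events even when Elasticsearch is down; the shipper tails the spool files and simply falls behind instead of blocking Glassfish:

```
# Collector instance: accept log4j events and spool them to disk.
input {
  log4j { port => 4560 }
}
output {
  file {
    path => "/var/spool/logstash/events-%{+YYYY-MM-dd}.json"
    codec => json_lines
  }
}
```

```
# Shipper instance (separate process/config): read the spool files
# and index into Elasticsearch.
input {
  file {
    path => "/var/spool/logstash/events-*.json"
    codec => json_lines
    start_position => "beginning"
  }
}
output {
  elasticsearch { hosts => ["https://efi02:9820"] }
}
```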

+1 for queueing or buffering (file). That'll fix you up.

Thanks a lot for your answers. I understand that buffering to a file should help, but I was hoping there was a better solution without an intermediate file. In my case it would be good enough to just stop sending records to Elasticsearch. That used to be possible via max_retries in the elasticsearch output plugin, but it does not work anymore :(