Elasticsearch output plugin: frequent Connection Reset errors

Hello,

I often get this error in the Logstash logs (every 5-10 minutes):
{:timestamp=>"2016-03-08T10:53:34.880000+0100", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://xxx.xxx.xxx.xxx:9200/\"]', but Elasticsearch appears to be unreachable or down!", :client_config=>{:hosts=>["http://xxx.xxx.xxx.xxx:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>10, :request_timeout=>10, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection reset", :class=>"Manticore::SocketException", :level=>:error}

I'm using Logstash 2.2.2 and ES 2.2.

Could this be caused by HTTP pipelining (which is enabled in my ES config)? There is the http.pipelining.max_events setting, which the documentation describes as: "The maximum number of events to be queued up in memory before a HTTP connection is closed, defaults to 10000." So my guess is that ES closes the HTTP connection once that many events have been queued, and that this is what triggers the exception. Is that right?
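
For reference, these are the pipelining knobs I'm talking about (elasticsearch.yml; the values below are just the documented defaults, not necessarily what I'm running):

```
# elasticsearch.yml -- HTTP pipelining settings (documented defaults in ES 2.x)
http.pipelining: true              # pipelining is enabled by default
http.pipelining.max_events: 10000  # connection is closed after this many queued events
```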

Most importantly, the log shows "retry_on_failure=>false": does that mean the events in the failed bulk request are lost, with no retries attempted?
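
In case it matters: my elasticsearch output is essentially just the following (host redacted; I haven't set any retry- or timeout-related options, so everything in the client_config above should be the plugin's defaults):

```
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200"]
  }
}
```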

Thanks,
MG