Logstash stops sending logs to Elasticsearch

Version: logstash-2.1.1

After Elasticsearch had been down for a while, Logstash generates an error like this:
{:timestamp=>"2015-12-25T17:32:47.451000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://elasticsearch:9200/"]', but Elasticsearch appears to be unreachable or down!", :client_config=>{:hosts=>["http://elasticseach:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection refused: connect", :class=>"Manticore::SocketException", :level=>:error}

And nothing happens afterwards. Such a basic scenario. Very disappointing :frowning:


So... curl http://elasticseach:9200/ worked at 17:32, yet Logstash still wasn't able to connect?

Yep, we restarted the Elasticsearch machine and it was up and running afterwards. In the Logstash log file I saw several attempts to write logs to Elasticsearch, and then silence.
Actually we faced a couple of different situations:

  1. ElasticSearch machine restart
  2. ElasticSearch service restart

Same problem here. Both Elasticsearch instances have been running for three days, but Logstash stops working, on different machines at different times. We have one ES cluster for three environments.
In the actual setup, the application whose logs are captured does not produce logs all the time. As these are dev and test servers, it is possible that there are no logs for hours, though I am not sure whether this is related.

Logstash is able to reconnect after it is killed (-9) and restarted.

Elasticsearch Version: 2.1.1
Logstash Version: logstash-2.1.1-1.noarch (RPM)

Hello,

I think I have the same problem. I use Filebeat as the log provider (installed on a Windows server); Logstash receives these logs, turns them into JSON, and saves them to Elasticsearch. After a few days (2-7) it stops with the error message below (logging is set to debug). ELK is installed on a Red Hat server. Every time I restart Logstash, everything works well again. Any idea about this issue?
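For reference, the pipeline is roughly the usual beats-in / elasticsearch-out setup. This is only a sketch (the port, source field, and credentials are placeholders), but the SSL and host settings match what appears in the error below:

  input {
    beats {
      port => 5044                              # Filebeat on the Windows server ships here
    }
  }
  filter {
    json {
      source => "message"                       # parse the application's log line as JSON
    }
  }
  output {
    elasticsearch {
      hosts => ["https://10.98.25.182:9200"]
      ssl => true
      cacert => "/etc/logstash/cacert.pem"
      ssl_certificate_verification => false
      user => "..."                             # basic auth, redacted
      password => "..."
    }
  }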

{:timestamp=>"2016-02-09T11:37:11.055000+0200", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x5dd2bac @operations_mutex=#<Mutex:0xb1ac567>, @max_size=500, @operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0xf2540dc>, @submit_proc=#<Proc:0x26cb0dff@/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/common.rb:54>, @logger=#<Cabin::Channel:0x7b27c5bc @metrics=#<Cabin::Metrics:0x5e2159a4 @metrics_lock=#<Mutex:0x2a083013>, @metrics={}, @channel=#<Cabin::Channel:0x7b27c5bc ...>>, @subscriber_lock=#<Mutex:0x54d27458>, @level=:debug, @subscribers={12450=>#<Cabin::Outputs::IO:0x4525d6a5 @io=#<File:/var/log/logstash/logstash.log>, @lock=#<Mutex:0x2e46eeba>>}, @data={}>, @last_flush=2016-02-09 11:37:09 +0200, @flush_interval=1, @stopping=#<Concurrent::AtomicBoolean:0x69712f9e>, @buffer=[[\"index\", {:_id=>nil, :_index=>\"logstash-2016.02.09\", :_type=>\"log\", :_routing=>nil}, #<LogStash::Event:0x1acbac8d @metadata_accessors=#<LogStash::Util::Accessors:0x629ca0da @store={\"beat\"=>\"st\", \"type\"=>\"log\"}, @lut={\"[beat]\"=>[{\"beat\"=>\"st\", \"type\"=>\"log\"}, \"beat\"]}>, @cancelled=false, @data={\"LogLevel\"=>\"INFO\", \"StartTime\"=>\"2016-02-09 11:33:59.2459409\", \"ExecutionTime\"=>0, \"CallingMethod\"=>\"InvokeMethod\", \"Correlation\"=>{\"CorrelationId\"=>\"10bd8e73-5975-402f-a0b8-bd5b1fec08fb\", \"SequenceNo\"=>1}, \"Info\"=>{\"Id\"=>1, \"Name\"=>\"GetCurrentInstance\", \"AppGuid\"=>\"93d25489-d02b-4ecc-89e9-6f97e57c3d0c\"}, \"Inputs\"=>\"[]\", \"@version\"=>\"1\", \"@timestamp\"=>\"2016-02-09T09:33:59.459Z\", \"beat\"=>{\"hostname\"=>\"BUHRAPPST01\", \"name\"=>\"BUHRAPPST01\"}, \"count\"=>1, \"fields\"=>nil, \"input_type\"=>\"log\", \"offset\"=>1734196, \"source\"=>\"D:\\\\Logs\\\\Plat_20160209-11.log\", \"type\"=>\"log\"}, @metadata={\"beat\"=>\"st\", \"type\"=>\"log\"}, @accessors=#<LogStash::Util::Accessors:0x395c8202 @store={\"LogLevel\"=>\"INFO\", ...

Part 2 of the error:
{:timestamp=>"2016-02-09T11:37:11.136000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["https://10.98.25.182:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :client_config=>{:hosts=>["https://10.98.25.182:9200/"], :ssl=>{:ca_file=>"/etc/logstash/cacert.pem", :verify=>false}, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{:ca_file=>"/etc/logstash/cacert.pem", :verify=>false}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :headers=>{"Authorization"=>"Basic bG9nc3Rhc2g6bG9nc3Rhc2gxMjMh"}, :level=>:debug, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"SSL peer shut down incorrectly", :error_class=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:35:in initialize'", "org/jruby/RubyProc.java:271:incall'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:70:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:245:incall_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:148:in code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.14/lib/elasticsearch/transport/transport/http/manticore.rb:71:inperform_request'", "org/jruby/RubyProc.java:271:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.14/lib/elasticsearch/transport/transport/base.rb:191:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.14/lib/elasticsearch/transport/transport/http/manticore.rb:54:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.14/lib/elasticsearch/transport/client.rb:119:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.14/lib/elasticsearch/api/actions/bulk.rb:87:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:57:inbulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/common.rb:140:in safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/common.rb:83:insubmit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/common.rb:69:in retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/common.rb:55:insetup_buffer_and_handler'", "org/jruby/RubyProc.java:271:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/buffer.rb:109:inflush_unsafe'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/buffer.rb:93:in interval_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/buffer.rb:82:inspawn_interval_flusher'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/buffer.rb:63:in synchronize'", "org/jruby/ext/thread/Mutex.java:149:insynchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/buffer.rb:63:in synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/buffer.rb:82:inspawn_interval_flusher'", "org/jruby/RubyKernel.java:1479:in loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/buffer.rb:79:inspawn_interval_flusher'"], :level=>:error, :file=>"logstash/outputs/elasticsearch/common.rb", :line=>"159", :method=>"safe_bulk"}
{:timestamp=>"2016-02-09T11:37:11.142000+0200", :message=>"Failed actions for last bad bulk request!", :actions=>[["index", {:_id=>nil, :_index=>"logstash-2016.02.09", :_type=>"log", :_routing=>nil}, #<LogStash::Event:0x1acbac8d @metadata_accessors=#<LogStash::Util::Accessors:0x629ca0da @store={"beat"=>"st", "type"=>"log"}, @lut={"[beat]"=>[{"beat"=>"st", "type"=>"log"}, "beat"]}>, @cancelled=false, @data={"LogLevel"=>"INFO", ...

We have already found out that the connection is closed at TCP level by the firewall after a long period of inactivity.
But we still have no idea why the connection is not re-established. On a Vagrant test Linux box I tried to reproduce it, but there the connection was opened again after the firewall cut it.
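One thing we are going to try (untested, so only a guess) is giving the elasticsearch output an explicit request timeout. The client config in the errors above shows socket_timeout=>0 and request_timeout=>0, i.e. no timeout at all, so a request on a connection the firewall has silently dropped could block forever instead of failing and being retried:

  output {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      timeout => 60    # seconds; fail hung requests instead of waiting indefinitely
    }
  }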

Hi,

I also ran into this issue after Logstash had been running for several hours. My Logstash is 2.3.4 and Elasticsearch is 2.3.5. Everything works well again after restarting Logstash.

Any suggestions on that? Thanks a lot!

I think I'm experiencing this. If I restart ES, Logstash throws

:message=>"Got error to send bulk of actions: Connection refused", :level=>:error}

But when ES comes back, logstash does not resume.

Hi
I've experienced similar behaviour. My findings were documented here:


Hope it helps

//Rickard

Thanks Rickard,

I'm using LS 1.5, so I still have to deal with some of this stuff. I added:

max_retries => "60" # default 3
retry_max_interval => "10"

I guess after a few retries it gives up; raising these values might give ES enough time to restart so Logstash can continue.
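For completeness, this is roughly where those two settings live in the output block on LS 1.5 (the host is a placeholder, the values are the ones above):

  output {
    elasticsearch {
      host => "elasticsearch"
      protocol => "http"
      max_retries => 60          # default 3
      retry_max_interval => 10   # seconds to wait between retries
    }
  }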