Error while performing resurrection after restarting elasticsearch

Hello everyone,

I installed a single-node cluster with version 5.0.0-rc1 of the ELK stack for some testing.

When the Elasticsearch service is restarted, Logstash tries to reconnect:

[ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://~hidden~:~hidden~@127.0.0.1:9200][Manticore::SocketException] Broken pipe", :class=>"LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2016-10-26T11:32:32,134][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::Elasticsearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}

But then, once Elasticsearch itself has started properly and I can access it via Kibana or HTTP, Logstash throws another error:

[2016-10-26T11:33:15,097][WARN ][logstash.outputs.elasticsearch] Error while performing resurrection {:error_message=>"Got response code '403' contact Elasticsearch at URL 'http://~hidden~:~hidden~@127.0.0.1:9200/'", :class=>"LogStash::Outputs::Elasticsearch::HttpClient::Pool::BadResponseCodeError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.1.2-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:48:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.1.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:233:in perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.1.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:201:in resurrect_dead!'", "org/jruby/RubyHash.java:1342:in each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.1.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:196:in resurrect_dead!'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.1.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:189:in start_resurrectionist'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.1.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:117:in until_stopped'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.1.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:188:in start_resurrectionist'"]}
[2016-10-26T11:33:20,100][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x120024b6 URL:http://~hidden~:~hidden~@127.0.0.1:9200>, :healthcheck_path=>"/"}

This error keeps occurring and Logstash won't reconnect.
When I try to restart Logstash itself, the service won't stop and has to be force-killed. (Though I suppose this could be proper behaviour, to avoid losing any data.)
After starting it again, everything works fine and the elasticsearch outputs connect correctly.

Elasticsearch is protected by X-Pack security. The Logstash elasticsearch outputs look like this:

elasticsearch {
  index => "test-index-%{+YYYY.MM.dd}"
  user => "logstash"
  password => "thepassword"
}
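For reference, a fuller version of that output block might look like the sketch below. The hosts and the surrounding output section are assumptions added for completeness (the hosts option defaults to localhost if omitted); they are not taken from the original config:

```
output {
  elasticsearch {
    # hosts defaults to localhost:9200 when omitted; shown here explicitly (assumption)
    hosts    => ["http://127.0.0.1:9200"]
    index    => "test-index-%{+YYYY.MM.dd}"
    user     => "logstash"
    password => "thepassword"
  }
}
```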

Does anyone have an idea what to do?

Additional information:

I upgraded the stack to the 5.0.0 release today and am still encountering the same problem.
The Logstash output won't reconnect.

You are probably using Shield/X-Pack for authentication.
You need to make sure that your Logstash user has sufficient privileges. Try this with curl, e.g.:

curl -u logstash:thepassword http://127.0.0.1:9200

(matching the credentials shown as hidden in the log output).

Most likely you won't receive the ES status JSON, but an error.
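With X-Pack security, such a 403 from the root endpoint typically comes back as a JSON error body roughly along these lines. This is an illustrative sketch, not output captured from this cluster; the exact wording varies by version:

```json
{
  "error": {
    "type": "security_exception",
    "reason": "action [cluster:monitor/main] is unauthorized for user [logstash]"
  },
  "status": 403
}
```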

To solve this, check your roles.yml file for the role assigned to the logstash user and make sure an entry like:

cluster: [ 'monitor' ]

exists. Then retry access with the curl command above.
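For example, a minimal roles.yml entry for a dedicated logstash role might look like the following. The role name, index pattern, and index privileges here are assumptions to adapt to your setup; only the cluster monitor privilege is the part this thread is about:

```yaml
# roles.yml (X-Pack file-based roles, 5.x format)
logstash_writer:
  cluster: [ 'monitor' ]          # required so the output's health check on "/" succeeds
  indices:
    - names: [ 'test-index-*' ]   # assumption: match your output's index pattern
      privileges: [ 'write', 'create_index' ]
```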

Best wishes,
Thorsten


Hey Thorsten,

Thanks, that solved my problem.

I had to add the "monitor" cluster privilege to my logstash role.

Hey Nick,
I have the same problem. How can I apply this solution?

This solved the problem:

Give the user that Logstash uses to send data into Elasticsearch the "monitor" cluster privilege.

Thanks Nick, I found that my Elasticsearch password was incorrect.
Thanks for your response.