Manticore::ResolutionFailure error in Logstash

Hi Community,

When I run my .conf file in Logstash, I get the error

"Address already in use - bind - Address already in use", so I killed the running process and restarted Logstash. Once Logstash was up and running again, I re-ran the .conf file and now I get a "Manticore::ResolutionFailure" error.

How do I solve the "Manticore::ResolutionFailure" error? I am also unable to get any data from Filebeat.

My .conf file is:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://10.***.***.***:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Thanks in anticipation.

Please supply the full error message; I'm sure there's more than just "Manticore::ResolutionFailure error".


Please find the full error message below.

bin]# ./logstash -f sample.conf
Settings: Default filter workers: 2
http: unknown error {:class=>"Manticore::ResolutionFailure", :level=>:error}
Logstash startup completed
^CSIGINT received. Shutting down the pipeline. {:level=>:warn}
Logstash shutdown completed

{:timestamp=>"2016-02-05T16:21:36.776000+0530", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://https://localhost:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :client_config=>{:hosts=>["http://https://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"https: Name or service not known", :error_class=>"Manticore::ResolutionFailure", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.2-java/lib/manticore/response.rb:37:in initialize'", "org/jruby/RubyProc.java:281:incall'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.2-java/lib/manticore/response.rb:79:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.2-java/lib/manticore/response.rb:256:incall_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.2-java/lib/manticore/response.rb:153:in code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:inperform_request'", "org/jruby/RubyProc.java:281:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:innon_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in bulk'", "org/jruby/ext/thread/Mutex.java:149:insynchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/common.rb:160:insafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/common.rb:99:in submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/common.rb:84:inretrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/common.rb:56:in setup_buffer_and_handler'", "org/jruby/RubyProc.java:281:incall'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/buffer.rb:109:in flush_unsafe'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/buffer.rb:93:ininterval_flush'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/buffer.rb:82:in spawn_interval_flusher'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/buffer.rb:63:insynchronize'", "org/jruby/ext/thread/Mutex.java:149:in synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/buffer.rb:63:insynchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/buffer.rb:82:in spawn_interval_flusher'", "org/jruby/RubyKernel.java:1479:inloop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.4.1-java/lib/logstash/outputs/elasticsearch/buffer.rb:79:in `spawn_interval_flusher'"], :level=>:error}

This is the error I got in the logstash.log file:

{:timestamp=>"2016-02-05T16:21:36.776000+0530", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://https://localhost:9200/"]', but an error occurred and it failed!

Don't include "https://" in the hosts option in your elasticsearch output.
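For example, with an Elasticsearch node listening on plain HTTP, the output section would look something like this (localhost:9200 is only an assumption here; substitute your own host and port):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

If Elasticsearch really is served over TLS, enable the plugin's ssl option instead of putting "https://" in the host string.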


Thanks, Magnus! Everything is working fine now.