[ERROR][logstash.agent] Failed to execute action {:id=>:main...}

Hello everyone, I have been experiencing a problem starting the pipeline ever since turning on SSL connections for Elasticsearch, Kibana and Logstash. My configuration is below:

input {
  beats {
    port => 6969
  }
}

output {
  elasticsearch {
    hosts => ["https://node_name:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    ilm_enabled => "true"
    user => "username"
    password => "password"
    ssl => "true"
    cacert => "/etc/logstash/certs/ca.pem"
  }
}
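
As a quick sanity check from the Logstash host, the same CA file can be tested directly with curl (the host name, user and password here are just the placeholders used in the config above):

curl --cacert /etc/logstash/certs/ca.pem -u username:password https://node_name:9200

If that call fails with an SSL error, the problem is in the certificate chain; if it succeeds against one node but not another, the problem is with a specific host rather than with Logstash itself.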

Now, when checking the configuration using
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
the configuration comes back as OK. However, when starting Logstash and looking at the log, the following is displayed:

[2019-09-19T15:11:30,031][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
[2019-09-19T15:11:30,499][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-09-19T15:11:35,424][INFO ][logstash.runner ] Logstash shut down.
[2019-09-19T15:12:09,581][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2019-09-19T15:12:13,623][INFO ][org.reflections.Reflections] Reflections took 157 ms to scan 1 urls, producing 19 keys and 39 values
[2019-09-19T15:12:17,116][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://username:xxxxxx@server_name:9200/, https://username:xxxxxx@server_name:9200/]}}
[2019-09-19T15:12:18,240][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://username:xxxxxx@server_name:9200/"}
[2019-09-19T15:12:18,347][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-19T15:12:18,353][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2019-09-19T15:12:18,465][ERROR][logstash.javapipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Manticore::UnknownException: Unrecognized SSL message, plaintext connection?>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:74:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:245:in `block in healthcheck!'", "org/jruby/RubyHash.java:1419:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241:in `healthcheck!'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:341:in `update_urls'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:71:in `start'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:302:in `build_pool'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:64:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:103:in `create_http_client'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:99:in `build'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch.rb:238:in `build_client'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.1.0-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:106:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:48:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:192:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:191:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:462:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:204:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:146:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:105:in `block in start'"], :thread=>"#<Thread:0x45b1eae7 run>"}
[2019-09-19T15:12:18,500][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
[2019-09-19T15:12:19,054][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-09-19T15:12:23,817][INFO ][logstash.runner ] Logstash shut down.

I know that the certificates are valid, as the same one is used for Kibana with full validation. Any info would be appreciated.
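
For reference, "Unrecognized SSL message, plaintext connection?" usually means the client spoke TLS to a port that is still answering plain HTTP, so it can be caused by a single member of the hosts list rather than by the certificates. A minimal per-host check (the node names are placeholders for the real cluster members; a 401 response is fine, since it still proves the node answers HTTPS):

for host in node01 node02 node03; do
  printf '%s: ' "$host"
  # prints the HTTP status if the TLS handshake succeeds, otherwise reports a failure
  curl -s --cacert /etc/logstash/certs/ca.pem -o /dev/null -w '%{http_code}\n' "https://$host:9200" || echo 'TLS handshake failed'
done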

This can be marked as resolved. I managed to get Logstash to authenticate by removing all the ES hosts except node01 (the primary master node in the cluster) and starting it; after that, I placed all the old nodes back into the Logstash config.
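
For anyone hitting the same error, the output block ended up pointing at the full node list again, roughly like this (node02 and node03 stand in for the real host names):

hosts => ["https://node01:9200", "https://node02:9200", "https://node03:9200"]

Presumably the pipeline only starts cleanly once every host in that list is answering HTTPS on 9200.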
