I am facing this issue while connecting Elasticsearch with Logstash over a certificate.
I have set the logstash.yml path in the startup.options file via LS_HOME and LS_SETTINGS_DIR. It still gives me the following error:
```
Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [https://username:xxxxxx@xyz.uat.abc.com:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
[2019-10-23T12:37:35,525][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2019-10-23T12:38:05,231][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://username:xxxxx@xyz.uat.abc.com:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://username:xxxxxx@crisp.uat.viacom18.com:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
```
Do I need a license for this? I am able to connect Elasticsearch and Kibana over SSL with the CA certificate, but I am facing issues between Logstash and Elasticsearch.
Please don't post unformatted code, logs, or configuration as it's very hard to read.
Instead, paste the text and format it with the </> icon or pairs of triple backticks (```), and check the preview window to make sure it's properly formatted before posting. This makes it more likely that your question will receive a useful answer.
It would be great if you could update your post to solve this.
Based on the error, this looks like an SSL certificate issue. Have you tried using openssl to verify that you can connect successfully to Elasticsearch with the certificate in question? It would be something like:
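(The hostname and CA file name below are taken from elsewhere in this thread; adjust them to your setup.)

```shell
# Connect to the Elasticsearch http endpoint and verify that the
# certificate it presents chains to the CA you give Logstash
openssl s_client -connect xyz.uat.abc.com:9200 -CAfile INDUS1-CA.crt
```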
The Logstash Elasticsearch output plugin has no way of disabling hostname verification, so you need to make sure that:

1. The hostname that you use in the output plugin configuration (`xyz.uat.abc.com`) is included as a SAN in the certificate that Elasticsearch uses for TLS on the http layer.
2. The CA certificate that you reference with `INDUS1-CA.crt` is the actual CA certificate that signed the certificate Elasticsearch uses for TLS on the http layer; you can verify this by running the command @Mike_Place shared above.
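To check the first point, you can inspect the SAN section of the certificate directly. A sketch, using a throwaway self-signed certificate as a stand-in for the one Elasticsearch serves (all file names here are made up for the demo):

```shell
# Generate a throwaway cert with a wildcard CN and an explicit SAN,
# standing in for the certificate Elasticsearch serves on :9200
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 1 -subj "/CN=*.uat.abc.com" \
  -addext "subjectAltName=DNS:xyz.uat.abc.com" 2>/dev/null

# Print the SAN section: the hostname used in the Logstash `hosts`
# setting must be listed here for hostname verification to pass
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```

Against a live server you would pipe the certificate from `openssl s_client -connect xyz.uat.abc.com:9200` into `openssl x509` instead of reading a local file.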
I am running the ELK instances using Docker. I tried running this command inside the container, but it gives me an error saying `openssl: command not found`.
@ikakavas, the certificates are the same in both the Elasticsearch and Logstash configs. I did not understand your first statement about there being no way of disabling hostname verification.
I tried configuring logstash.yml in different ways, but every time I get the same error. Do I need to change any configuration in Elasticsearch? I am a little skeptical about that because, as I said, Elasticsearch and Kibana are working perfectly.
You can run the command outside the container, from anywhere that your Elasticsearch instance is reachable.
> The certificates are the same in both the Elasticsearch and Logstash configs.
Not sure what you mean by this; maybe you can share your Elasticsearch configuration too.
> I tried configuring logstash.yml in different ways, but every time I get the same error.
I can understand the frustration, but this doesn't help us help you much, as we can't know what you tried, what didn't work, or why.
Hostname verification means that when the client connects to a server over SSL, it verifies that the hostname it connects to is actually included in the SSL certificate the server presents, in a section called Subject Alternative Names. Kibana can be configured not to check this, but the Logstash output plugin can't; this is what my statement meant to convey.
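For reference, this is the Kibana setting that relaxes the check; a sketch of the relevant kibana.yml line (shown for illustration, not as a recommendation):

```yaml
# kibana.yml: "certificate" validates the CA chain but skips hostname
# verification; "none" skips certificate validation entirely
elasticsearch.ssl.verificationMode: certificate
```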
If you can share the configuration of elasticsearch and the configuration of kibana that works, then we can probably help you better understand what your issue is and how to overcome it.
Copying the certificate file to a host that does have the openssl binary installed and which can connect to the ES cluster in question would also work.
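For example, something along these lines (the container name is an assumption; the path comes from the config shared earlier in this thread):

```shell
# Copy the CA cert out of the Logstash container to a host that has openssl
docker cp logstash:/usr/share/logstash/config/certificates/INDUS1-CA.crt .
```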
@Mike_Place Yes, I did that and received an error saying:

```
CONNECTED(00000003)
depth=0 CN = *.uat.abc.com
verify error:num=20:unable to get local issuer certificate
verify return:1
...
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
```
```
[logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://username:xxxx@xyz.uat.abc.com:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://username:xxxx@xyz.uat.abc.com:9200/][Manticore::ClientProtocolException] PKIX path validation failed: java.security.cert.CertPathValidatorException: Path does not chain with any of the trust anchors"}
```
with the following configuration in my logstash.yml:
```
output {
  stdout { codec => "json" }
  if [name] == "xyz-api" and [level] == 50 {
    elasticsearch {
      index => "xyz-api-error-logs"
      cacert => '/usr/share/logstash/config/certificates/INDUS1-CA.crt'
      user => username
      password => password
      hosts => ["https://xyz.uat.abc.com:9200"]
    }
  }
}
```
but when I add the SSL options to the elasticsearch output block, I get:
```
[ERROR][logstash.agent ] Failed to execute action
{:action=>LogStash::PipelineAction::Create/pipeline_id:main,
:exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, => at line 31, column 17
(byte 890) after output {\n\nstdout { codec => \"json\" }\n \n
if [name] == \"xyz-api\" and [level] == 50 {\n elasticsearch {\n index => \"xyz-api-error-logs\"\n ssl_certificate_verification => true\n
cacert => '/usr/share/logstash/config/certificates/INDUS1-CA.crt'\n# sniffing => false\n user => username\n password => password\n hosts => [\"https://xyz.uat.abc.com:9200\"]\n
ssl", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:24:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
```
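The parser complains at line 31 ("Expected one of #, =>"), and the quoted config ends with a bare `ssl` token, so it looks like one option was left half-typed. If you want those options, each one needs a `=> value`; a sketch of the output block with them fully written out (values assumed from your earlier config):

```
elasticsearch {
  index => "xyz-api-error-logs"
  ssl => true
  ssl_certificate_verification => true
  cacert => '/usr/share/logstash/config/certificates/INDUS1-CA.crt'
  user => username
  password => password
  hosts => ["https://xyz.uat.abc.com:9200"]
}
```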
While shutting down, it shows that it could find the ES instance:
```
[INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://username:xxxxx@xyz.uat.abc.com:9200/]}}
[WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://username:xxxxx@xyz.uat.abc.com:9200/"}
[INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://xyz.uat.abc.com:9200"]}
[INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x62c9f33e run>"}
[INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"For path: http_address. Map keys: [:stats, :os, :jvm]", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}
[INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>30, "name"=>"[.monitoring-logstash]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:239:in `block in start_workers'"}]}}
[ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[INFO ][logstash.runner ] Logstash shut down.
```
You should not add this. It controls the client key and certificate that Logstash would use to connect to Elasticsearch in order to perform client TLS authentication. That is not required here, Elasticsearch is not set up for it, and it is irrelevant to the problems you are facing.
Yes, this is the subject, but what about the SANs of the certificate? Is the hostname defined as one there too?
Also, does Kibana connect successfully to Elasticsearch with the exact configuration you have shared, or did you comment/uncomment anything? Looking at `#elasticsearch.ssl.verificationMode: none`, is it also commented out while Kibana connects just fine?
Correct, I understood the same. I am not adding those anywhere in my Logstash configuration.
Yes, Kibana and Elasticsearch work fine and are connected to each other with the configurations I mentioned above (kibana.yml and elasticsearch.yml).
My certificate looks like this. This is not the CA cert.