I am facing connection issues from Logstash to my Elastic stack. The very strange thing is that it worked until yesterday. Today (without any modification on my side) I get:
Dec 17 22:00:55 ubuntu logstash-app[25173]: [2021-12-17T22:00:55,819][WARN ]
[logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance,
but got an error. {:url=>"https://user:xxxxxx@elastic.xxx.com:9200/", :error_type=>
LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>
"Elasticsearch Unreachable: [https://user:xxxxxx@elastic.xxx.com:9200/]
[Manticore::ClientProtocolException] PKIX path validation failed:
java.security.cert.CertPathValidatorException: validity check failed"}
My logstash output section is:
output {
  elasticsearch {
    hosts => "https://elastic.xxx.com:9200"
    ssl => true
    ssl_certificate_verification => false # added this to test
    document_id => "..."
    index => "..."
    user => "user"
    password => "password"
    doc_as_upsert => true
    action => "update"
  }
}
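(If the cause turns out to be an expired certificate in the chain, here is a sketch of pointing the plugin at the renewed CA instead of disabling verification. The `cacert` option is the plugin's older name for the CA path setting; the file path below is a placeholder, not from my setup:)

output {
  elasticsearch {
    hosts => "https://elastic.xxx.com:9200"
    ssl => true
    # placeholder path to the CA that signed the server certificate
    cacert => "/etc/logstash/certs/ca.pem"
    user => "user"
    password => "password"
  }
}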
I do not need / want certificate verification for this cert and would like to leave it that way. I have no keystore or truststore, only "simple SSL".
This works from the command line:
curl -X GET https://user:password@elastic.xxx.com:9200 -k
According to this issue, setting :ssl_verify to false does not turn off as much verification as curl -k does. If the problem really did start without any changes on your side, I would look for something expiring in the certificate path.
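A quick way to check the expiry theory is to have openssl print a certificate's validity window. A minimal sketch, using a throwaway self-signed cert as a stand-in for the server cert (the hostname in the final comment is the placeholder from the question, not a real endpoint):

```shell
# Create a throwaway self-signed cert as a stand-in for the server cert
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem -days 1 2>/dev/null

# Print the validity window (notBefore / notAfter)
openssl x509 -in /tmp/demo_cert.pem -noout -dates

# Exits non-zero if the cert is already expired (the "validity check failed" case)
openssl x509 -in /tmp/demo_cert.pem -noout -checkend 0 && echo "cert still valid"

# Against the live server, inspect the presented certificate instead, e.g.:
#   echo | openssl s_client -connect elastic.xxx.com:9200 2>/dev/null \
#     | openssl x509 -noout -dates
```

If notAfter is in the past for any certificate in the chain (server, intermediate, or CA), the Java PKIX validator will fail exactly as in the log above, even though curl -k still connects.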
Now our 2nd Logstash server instance (different host, different Logstash version) is hitting the same issue.
Could this be somehow related to the Log4j vulnerability? The only thing we did was run apt update && apt upgrade on the host of our Elastic server, and Elasticsearch is running in a Docker container anyway.