Issue with output elasticsearch running on ECK "HostUnreachableError"

Hi there!

First of all, thank you for reading my post.

I have already browsed the existing topics on Discuss about this kind of issue, and here is what I have tried, without any success:

  • Setting ssl_certificate_verification to false
  • Using the ca.crt generated by ECK during setup

Whatever I do, I still see this in the Logstash logs:

[2020-03-11T11:19:53,509][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@quickstart-es-http.default.svc.cluster.local:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@quickstart-es-http.default.svc.cluster.local:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}

However, from within the container:

curl -k "https://elastic:${ELASTIC_PASSWORD}@quickstart-es-http.default.svc.cluster.local:9200"
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "-jsLCLX4TeKWHEnRlrpo0g",
  "version" : {
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

So DNS resolution and the network seem to be OK (note that curl -k skips certificate verification, which is why this call succeeds).

So, my questions:

  • Why does ssl_certificate_verification => false not work?
  • Which of the ECK certificates can I use?

Because, indeed, using the cert file from ECK does not work either:

$ curl --cacert /ca.crt "https://elastic:${ELASTIC_PASSWORD}@quickstart-es-http.default.svc.cluster.local:9200/"

curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
$ kubectl get secrets quickstart-es-http-certs-public -o go-template='{{index .data "ca.crt" | base64decode }}' > ca-public.crt

$ openssl x509 -noout -subject -in ca-public.crt 
subject=OU = quickstart, CN = quickstart-http
$ kubectl get secrets quickstart-es-http-certs-public -o go-template='{{index .data "tls.crt" | base64decode }}' > tls-public.crt

$ openssl x509 -noout -subject -in tls-public.crt 
subject=OU = quickstart, CN = quickstart-es-http.default.es.local
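Worth noting: curl (and the JVM that Logstash runs on) match the hostname against the certificate's Subject Alternative Names, not just the Subject CN, which is why the subject checks above are not enough. A quick way to check the SANs, sketched here against a throwaway self-signed cert since the cluster's tls-public.crt is not reproducible (requires OpenSSL 1.1.1+ for -addext/-ext):

```shell
# Generate a throwaway self-signed cert carrying a SAN, as a stand-in
# for the ECK serving cert (DNS name is the in-cluster service name
# used in this thread):
openssl req -x509 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt \
  -days 1 -nodes -subj "/CN=quickstart-es-http" \
  -addext "subjectAltName=DNS:quickstart-es-http.default.svc.cluster.local"

# Show only the Subject Alternative Names; the hostname you connect to
# must match one of these entries for verification to succeed:
openssl x509 -noout -ext subjectAltName -in /tmp/tls.crt
```

Running the same `openssl x509 -noout -ext subjectAltName` against tls-public.crt shows which names the ECK cert actually covers.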

Almost forgot:

$ logstash --version

logstash 7.4.2

This is the official Docker image from Elastic.

Thank you for your help!

Update:

I have updated the ECK certificate by adding subjectAltNames, following the ECK documentation.
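A sketch of the change, assuming the standard ECK self-signed certificate mechanism (field names per the ECK docs; cluster name and node set match the quickstart used in this thread, and depending on your ECK version the apiVersion may be v1beta1 instead of v1):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.4.2
  http:
    tls:
      selfSignedCertificate:
        # Add the in-cluster service DNS name that Logstash connects to,
        # so hostname verification succeeds:
        subjectAltNames:
        - dns: quickstart-es-http.default.svc.cluster.local
  nodeSets:
  - name: default
    count: 1
```

After applying this, ECK reissues the serving certificate with the extra SAN.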

So now, no more errors with curl:

sh-4.2$ curl --cacert /ca.crt "https://elastic:${ELASTIC_PASSWORD}@quickstart-es-http.default.svc.cluster.local:9200/"

{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "-jsLCLX4TeKWHEnRlrpo0g",
  "version" : {
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

But the error is still there in Logstash:

[2020-03-11T14:01:38,232][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@quickstart-es-http.default.svc.cluster.local:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@quickstart-es-http.default.svc.cluster.local:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}

Relevant config:

$ cat pipelines/output/es.conf

output {
      elasticsearch {
        hosts    => [ "${ELASTIC_HOST}" ]
        user     => "${ELASTIC_USERNAME}"
        password => "${ELASTIC_PASSWORD}"
        ssl      => true
        ssl_certificate_verification => true
        cacert   => '/ca.crt'

        index    => "logs-%{[@metadata][index_prefix]}-%{[@metadata][index_name]}-%{+YYYY.MM.dd}"
        id       => "output-elasticsearch"
      }
}
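For completeness, the /ca.crt referenced by cacert above has to come from somewhere; a hypothetical pod spec fragment mounting it from the ECK-generated secret (secret name as inspected with kubectl earlier):

```yaml
# Hypothetical fragment of the Logstash pod spec: mount the CA from the
# quickstart-es-http-certs-public secret at /ca.crt in the container.
spec:
  containers:
  - name: logstash
    volumeMounts:
    - name: es-http-certs
      mountPath: /ca.crt
      subPath: ca.crt
      readOnly: true
  volumes:
  - name: es-http-certs
    secret:
      secretName: quickstart-es-http-certs-public
```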
$ printenv | grep ELASTIC_

ELASTIC_USERNAME=elastic
ELASTIC_SSL_CERTIFICATE_VERIFICATION=false
ELASTIC_PASSWORD=XXXXXX
ELASTIC_HOST=https://quickstart-es-http.default.svc.cluster.local:9200

While setting ssl_certificate_verification => false, I see in the logs:

[2020-03-11T12:30:56,093][WARN ][logstash.outputs.elasticsearch] ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true

...


[2020-03-11T12:30:57,589][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@quickstart-es-http.default.svc.cluster.local:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@quickstart-es-http.default.svc.cluster.local:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}

:cry:

Does nobody have any idea about this SSL issue?

It is so disappointing that SSL seems easy to understand but is always a pain to set up.

I have since stumbled upon a bunch of GitHub issues describing the weird behaviors I noticed in my setup.

But in the end, I managed to get it working as I wanted.

The trick was:

  • I have two outputs of type elasticsearch defined, each with a different index pattern.
  • One was configured properly for TLS, but not the other.
  • And, of course, the one receiving events was the misconfigured one.

Once both elasticsearch output plugins were properly configured, the warning disappeared and events were ingested by ES.
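To illustrate, a sketch of the fixed pipeline with both outputs carrying identical TLS settings (the second index pattern and id are made up for this example):

```conf
output {
  elasticsearch {
    hosts    => [ "${ELASTIC_HOST}" ]
    user     => "${ELASTIC_USERNAME}"
    password => "${ELASTIC_PASSWORD}"
    ssl      => true
    ssl_certificate_verification => true
    cacert   => '/ca.crt'
    index    => "logs-%{[@metadata][index_prefix]}-%{[@metadata][index_name]}-%{+YYYY.MM.dd}"
    id       => "output-elasticsearch"
  }
  # This second output was the misconfigured one: it needs the exact same
  # ssl / ssl_certificate_verification / cacert settings, otherwise the
  # PKIX warning reappears as soon as an event is routed to it.
  elasticsearch {
    hosts    => [ "${ELASTIC_HOST}" ]
    user     => "${ELASTIC_USERNAME}"
    password => "${ELASTIC_PASSWORD}"
    ssl      => true
    ssl_certificate_verification => true
    cacert   => '/ca.crt'
    index    => "metrics-%{[@metadata][index_name]}-%{+YYYY.MM.dd}"
    id       => "output-elasticsearch-metrics"
  }
}
```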

Nevertheless, I still had to add the in-cluster service name to the subjectAltNames of the self-signed certificate generated by ECK to make it work properly, in case someone hits the same problem.