Can't connect from fluentd: NotSslRecordException error

I have deployed Elastic stack on k8s [running with kops on AWS] with almost default configuration [just different namespace].

I have deployed a fluentd daemonset in the same namespace and I am trying to connect to Elastic, but from fluentd I am getting:
[warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. end of file reached (EOFError)

And from elastic instance I am getting:
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record

What am I missing in my configuration? How can I make it work? My fluentd config is fairly default:

<match **>
  @type elasticsearch
  @id out_es
  @log_level "info"
  include_tag_key true
  host "elastic-es-http.elastic-system.svc.cluster.local"
  port 9200
  path ""
  scheme http
  ssl_verify true
  ssl_version TLSv1
  reload_connections false
  reconnect_on_error true
  reload_on_failure true
  log_es_400_reason false
  logstash_prefix "logstash"
  logstash_format true
  index_name "logstash"
  type_name "fluentd"
</match>

Hey @Paul_F,

You should probably configure fluentd to use HTTPS instead of HTTP. I don't know that much about fluentd configuration, but it looks like you have:
scheme http
Which should probably instead be:
scheme https

You also need to either provide the Elasticsearch TLS certificate or CA to fluentd so it can trust the connection, or bypass TLS certificate verification, I guess with:

ssl_verify false
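
For reference, a minimal sketch of how the relevant part of the match block might look with TLS enabled (`ssl_verify false` is only for a quick test; for production you would instead point `ca_file` at the cluster's CA, and the mount path shown is an assumption that depends on how you mount the certificate into the fluentd pod):

```
<match **>
  @type elasticsearch
  @id out_es
  host "elastic-es-http.elastic-system.svc.cluster.local"
  port 9200
  scheme https
  # Quick test only -- skips certificate verification:
  ssl_verify false
  # Preferred: verify against the cluster CA (path is an assumption,
  # it must match wherever you mount the cert secret):
  # ssl_verify true
  # ca_file /etc/fluent/certs/ca.crt
</match>
```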

You were right @sebgl, scheme should be https.

How can I find the right certificate though? My deployment is not "quickstart", but "elastic", so I was trying:

      - name: CLIENT_CERT
        valueFrom:
          secretKeyRef:
            name: elastic-es-http-certs-public
            key: tls.crt

But then I get this error:
"Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: unknown_ca",

Am I referencing the right certificate? Is there something else that has to be added?

Did you fix this?

@Kay_Khan I just got this working in our cluster by using the autogenerated certs stored in the secrets suffixed with http-certs-internal; this also depended on joining the networks between the daemonset and the deployed elastic cluster.

@SeanPlacchetti the secret called $CLUSTERNAME-es-http-certs-public would be preferable, as the one you referenced is meant for internal use of the operator.
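
To make that concrete, a sketch of how the public certs secret could be mounted into the fluentd daemonset so its CA can be used for verification (the container name, volume name, and mount path are assumptions; `elastic` is the cluster name used in this thread, so the secret is `elastic-es-http-certs-public`):

```yaml
# Sketch only: mount the ECK-generated public HTTP certs secret
# into the fluentd daemonset's pod template.
spec:
  containers:
    - name: fluentd              # assumed container name
      volumeMounts:
        - name: es-certs
          mountPath: /etc/fluent/certs   # assumed path
          readOnly: true
  volumes:
    - name: es-certs
      secret:
        secretName: elastic-es-http-certs-public
```

With that in place, the fluentd elasticsearch output can reference the mounted CA via `ca_file /etc/fluent/certs/ca.crt` alongside `ssl_verify true`.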

Good to know, thanks!

@pebrc I went to try and use $CLUSTERNAME-es-http-certs-public (as we've recently updated every ECK-deployed resource to use the 1.2 branch from beta) to redirect our FluentD from the OpenShift 3.11 built-in EFK stack, and found that the above secret does not include a key. Not sure why, but the FluentD setup for OpenShift 3.11 requires one: see "Sending Logs to an External Elasticsearch Instance". If you have any recommendations on how to use $CLUSTERNAME-es-http-certs-public with that, I'd appreciate the guidance. What are the negatives to using $CLUSTERNAME-es-http-certs-internal? Thanks!