SSL issues with the Docker setup

I'm on:

  • macOS 14.2.1 (Sonoma on an M1 MacBook Pro)
  • OpenSSL 3.2.1
  • Docker Desktop has 8GB of RAM allocated to it

I'm following this guide to set up Elasticsearch + Kibana locally for development purposes, but I'm running into a number of issues, the biggest one being a TLS/SSL error when trying to connect to the Elasticsearch cluster.

Following the steps in the guide, when I get to the part where you query the ES container with curl and the --cacert flag, I get the following error:

curl --cacert http_ca.crt -v -u elastic:$ELASTIC_PASSWORD https://localhost:9200/

*   Trying [::1]:9200...
* Connected to localhost (::1) port 9200
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
*  CAfile: http_ca.crt
*  CApath: none
* LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
* Closing connection
curl: (35) LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version

I've deleted and restarted the setup from scratch multiple times, and every time I get stuck here, with this error, and I'm not sure how to proceed.
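
For anyone hitting the same wall: the "tlsv1 alert protocol version" message usually means the two sides couldn't agree on a TLS version at all, so one way to narrow things down is to temporarily ignore the CA and pin the protocol version. This is only a diagnostic sketch; -k disables certificate verification, so it's for debugging only.

# Does any TLS handshake succeed at all, regardless of the cert?
curl -vk -u elastic:$ELASTIC_PASSWORD https://localhost:9200/

# Pin the handshake to TLS 1.2 specifically
curl -v --tlsv1.2 --tls-max 1.2 --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200/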

If I hit http instead of https, it works fine:

curl --cacert http_ca.crt -v -u elastic:$ELASTIC_PASSWORD http://localhost:9200/

{
  "name" : "ctXSsVx",
  "cluster_name" : "elasticsearch_brew",
  "cluster_uuid" : "MFpSeyKBQne-qs09ZoYHlw",
  "version" : {
    "number" : "6.8.23",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "4f67856",
    "build_date" : "2022-01-06T21:30:50.087716Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.3",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
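
One way to rule out the host's port mapping entirely is to run the same curl from inside the ES container, against the cert where it was generated. This is only a sketch: it assumes the container is named es01 (as in the guide) and that the auto-generated CA lives at the default path inside the container.

# Query ES from inside the container, bypassing the host's port 9200 mapping
docker exec es01 curl -s --cacert /usr/share/elasticsearch/config/certs/http_ca.crt \
  -u elastic:$ELASTIC_PASSWORD https://localhost:9200/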

I tried setting the ELASTICSEARCH_HOSTS=http://localhost:9200 environment variable when running Kibana, which actually sort of works. It takes me to the configuration page where I have to input my enrollment token, but as soon as I paste the token in I immediately get a connect ECONNREFUSED 127.0.0.1:9200 error. I also tried the manual setup and pointed it at both https AND http, but the same thing happens regardless.
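
For reference, the environment variable would normally be passed on the Kibana docker run line, roughly like the sketch below; the container name, network name, and image tag here are assumptions rather than values copied from the guide.

# Hypothetical Kibana run with the hosts override; note that inside the
# container, "localhost" refers to the Kibana container itself, not the host
docker run --name kib01 --net elastic -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS=http://localhost:9200 \
  docker.elastic.co/kibana/kibana:8.13.0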

Any help would be greatly appreciated here, I'm quite stuck!

Small update: after following this other guide, which uses 8.13.0, I got Kibana and ES up and running. However, I'm still having the same issue when trying to curl https://localhost:9200 with the http_ca.crt file, which also causes problems elsewhere, like my local Node app not being able to do anything against ES because of this TLS issue.
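
(Side note on the Node app: one low-effort way to hand the same CA to Node without touching the client code is the NODE_EXTRA_CA_CERTS environment variable; the app entry point below is just a placeholder.)

# Trust the Elasticsearch CA for all TLS connections made by this Node process
NODE_EXTRA_CA_CERTS="$PWD/http_ca.crt" node app.js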

Hi @Sensanaty,

Welcome to the Elastic community. The Docker steps worked for me; I followed the same doc.

It seems TLS is not configured properly. Which OS are you using? Can you run the command below?

openssl s_client -connect localhost:9200

Just verify whether it returns a certificate or not.
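
A slightly more targeted variant, if it helps, is to pipe the server's certificate straight into openssl x509 so only the subject, issuer, and validity dates are shown (standard openssl, nothing Elastic-specific):

# Show who issued the certificate that the server on 9200 actually presents
openssl s_client -connect localhost:9200 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates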

Thanks for the reply.

I'm on macOS 14.2.1 (M1 Pro).

Running that gives me the output below, which confuses me even more, because I just copy/pasted the commands from the guide itself, so I have no clue why the crt file would be invalid/wrong. The crt comes straight from the ES Docker container, via docker cp.

openssl s_client -cert http_ca.crt -connect localhost:9200

Could not find client certificate private key from http_ca.crt
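
(For what it's worth, that particular error seems to come from the -cert flag rather than the certificate itself: -cert expects a client certificate plus its private key, whereas a CA used to verify the server goes with -CAfile. A sketch of the verification-oriented invocation:)

# Verify the server's certificate chain against the copied CA, instead of
# presenting http_ca.crt as a client certificate
openssl s_client -CAfile http_ca.crt -connect localhost:9200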

And here's the certificate (it's only used locally so I'm not leaking anything I shouldn't be, don't worry)

cat http_ca.crt
-----BEGIN CERTIFICATE-----
MIIFWTCCA0GgAwIBAgIULb404g4y0dc3lIVL2fXEeTMfSe4wDQYJKoZIhvcNAQEL
BQAwPDE6MDgGA1UEAxMxRWxhc3RpY3NlYXJjaCBzZWN1cml0eSBhdXRvLWNvbmZp
Z3VyYXRpb24gSFRUUCBDQTAeFw0yNDAzMjYxMzU3MTlaFw0yNzAzMjYxMzU3MTla
MDwxOjA4BgNVBAMTMUVsYXN0aWNzZWFyY2ggc2VjdXJpdHkgYXV0by1jb25maWd1
cmF0aW9uIEhUVFAgQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC/
bjVo7TsfT69O0n+RMbGndPr76sA9PoniOuEVQLDoeHNDocMb85C7LMtCAnyyBFo7
cUyPDvcyMPSsqxkoELG98+xpfXtNOYJMMBQEg8Bt5dXpjuef1bTsixPnjTuTrfol
hY5kV2qx2qPZAbZolfunzJHlfEqildVT1RjkN+K7tAaUe7GLUi5f9RHyJQX70pI1
47nUpQ7Cpx8LdzQiNnNkP37w0rYDmjQdvAbhbowDnsRCOu5Tr6uLH8K1WbG5NQ0U
0u50GagNlBWwnL1O8IfDM+fPD40/SzDbvtgHmR2fVvZPUuo/ezFtHrKX3kUdZI6x
SL3hzGjqJ5Vyn/TUNU12gy1u1K6IUFDJm0pHi/nxhWQIAF3A8On3HXROla6xucPp
71Ce17HeqeWE4fEAA1NCLvUaTOR3OY+g1Nc+pdO1aWakvsiLJ8vr9lJpdafjjok3
V+GxwI21SBlX9siB7W7uzH6EEPsJ6JfRQRa5eyWrYKpHYGCLIk+vfI5ElAk3TGr9
JvXRIhDiWcEjBXCLCFxJUXE/H+THQxbogXZ9BAOHnzpb06t1rJNgipnmxXzQh1sU
iA2xI8E/lsqNs6Dlc3udI7E9q2GL+bzzYfVRVsqK59ort2oHG5b7NMR+GTXOMfj8
hGsMgp8dwTIE6dK6LRxNLrnmmLj//PjcDkn9wt+jFQIDAQABo1MwUTAdBgNVHQ4E
FgQUXEVCxl8s13HXX8ayq7eGpJjU5BEwHwYDVR0jBBgwFoAUXEVCxl8s13HXX8ay
q7eGpJjU5BEwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAaGg8
NlcVRaKH1d2kCPHpBXnE0tq9umB/ABOGHemuKZ9XoMTs6ll2M8mA/S/+ctoTEkvC
zDnSm9/UOrjD9KRpZQREJ2078dXlVBlqNgkDsayw8n3QUbNpLU5SsSQRkMD77mUd
9hZZiH/AViDCf76QEkRpT3Ng20roeVipsllzz0NTprSfM086uDggiBQ43CEQH300
1QPo++7p3na2cNnVnzLVzZ/oPRNO99tgXbzV9sF0WXI6+Hq9RWf4IeiNIsX1u4nJ
DSYrPUClla2fkXx2n4OlOjol2Yjvq9OQkrpx2HDqdCjQTZzoaAOa7BNJL4v0Jngl
j8pSXX6R4KypmPML+IpxLlX6h8WLAGszrcaXqYcw2vO+8lhPcYPk6cptMJrCgclx
I99vfft0Gg5NdaE141AzZ7oLkbwxSeVAbAfsR95SyZYyTYDdQB/oDrO2Py3nd9IK
gnjNDgAHXlAIkJqDeyiGixcfvvKM14FvmVvK2vZMvN3ArlRDLd2X2gZBlqayRE8k
F9255KJF7WLVCKfwg5btY13wb8DESijt/3cs4H6Oiv86tdUupgAxRBw4icTGHthv
aI8YIrWAi6+g7FFK9Za2+qAO9E3pL72VF2r5i/ClpzwdVvs6lguAJlDHDlIle6er
MP1yahmDu7Hl4dnFGWXhyAiVSH3nMZ9yde4j2Fg=
-----END CERTIFICATE-----
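
(The copied CA can also be decoded locally to sanity-check its subject and validity window; the SHA-256 fingerprint, lowercased and with the colons stripped, should also correspond to the ca_trusted_fingerprint that Kibana records.)

# Decode the copied CA certificate without contacting any server
openssl x509 -in http_ca.crt -noout -subject -issuer -dates -fingerprint -sha256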

A single Elasticsearch node will only serve http or https, but not both.

If http://localhost:9200/ is working for you, then https://localhost:9200 will not, because you cannot have both.

This suggests that your node is configured to use http instead of https.

If you follow the instructions you linked to, then it should all be configured for https. It's not clear what went wrong here.

Can you check what configuration you have for xpack.security.http.ssl in elasticsearch.yml?
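
(If the container is up, the quickest way to read that section without copying the file out is probably a grep inside the container; the container name and config path below are the defaults from the guide, so adjust as needed.)

# Print the HTTP SSL block from the running container's elasticsearch.yml
docker exec es01 grep -A3 'xpack.security.http.ssl' /usr/share/elasticsearch/config/elasticsearch.yml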

I'm also not entirely sure what went/is going wrong. This is my first time setting up ES & Kibana, it's via Docker, and I'm not deviating from the guide at all :frowning:

These are my kibana.yml and elasticsearch.yml files

Elastic

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 26-03-2024 15:29:35
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
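
(Since this config points the HTTP layer at certs/http.p12, a quick sanity check, again assuming the es01 container name, is to confirm that the keystore actually exists inside the container:)

# List the auto-generated certificate files referenced by elasticsearch.yml
docker exec es01 ls -l /usr/share/elasticsearch/config/certs/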

Kibana

### >>>>>>> BACKUP START: Kibana interactive setup (2024-03-26T15:41:55.465Z)

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
#server.host: "0.0.0.0"
#server.shutdownTimeout: "5s"
#elasticsearch.hosts: [ "http://elasticsearch:9200" ]
#monitoring.ui.container.elasticsearch.enabled: true
### >>>>>>> BACKUP END: Kibana interactive setup (2024-03-26T15:41:55.465Z)

# This section was automatically generated during setup.
server.host: 0.0.0.0
server.shutdownTimeout: 5s
elasticsearch.hosts: ['https://172.24.0.2:9200']
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MTE0Njc3MTQ2Mjg6STBEZ25ZcWlSMVM4a2J4V0lWRWhFQQ
elasticsearch.ssl.certificateAuthorities: [/usr/share/kibana/data/ca_1711467715461.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://172.24.0.2:9200'], ca_trusted_fingerprint: c0655a5bbc633a71dc83b18272d08bcd19697edff549d91bf8203f08f25bbc0d}]
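
(One thing worth double-checking in this file is whether 172.24.0.2 is still the ES container's address on the Docker network, since the IP is written at enrollment time and can change if the containers are recreated. A quick way to compare, assuming the container is named es01:)

# Show the container's current IP on its Docker network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' es01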

Okay I figured it out, and I'm still a bit confused :sweat_smile:

It turns out that this entire time I had a local Elasticsearch instance running on port 9200 which I'd forgotten about. The part that confuses me to no end is how Docker was able to bind port 9200 when it was already being used by a different running process!

So this entire time, all my requests to localhost:9200 were actually hitting my local Elasticsearch setup (which was v6, even), rather than the Docker one.
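
(For anyone who lands here with the same symptom: the cluster_name of elasticsearch_brew and the 6.8.23 version in the earlier http response were the giveaway. A couple of commands that make a stray listener obvious; the lsof flags are macOS/BSD syntax, and the brew lines only apply if the old node was installed via Homebrew:)

# Who is actually listening on port 9200 on the host?
lsof -nP -iTCP:9200 -sTCP:LISTEN

# If it turns out to be a Homebrew-managed Elasticsearch, stop it
# (the formula name may differ, e.g. elasticsearch-full)
brew services list
brew services stop elasticsearch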
