[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. self signed certificate

Hi,

I am facing an issue while integrating Kibana with Elasticsearch. I created my own company rootCA, then generated kibana.crt and kibana.key and signed them with that rootCA. After I run docker compose up, I cannot see the login page when I access localhost:5601, and in the Kibana logs I can see this error:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. self signed certificate

Here is my docker compose file:

services:
  elasticsearch:
    image: elasticsearch-8.14.0:latest
    #image: docker.elastic.co/elasticsearch/elasticsearch:8.17.1
    container_name: elasticsearch
    hostname: elasticsearch
    networks:
      - elastic_net
    environment:
      - discovery.type=single-node   
      - xpack.security.enabled=true # Enable security
      - xpack.security.http.ssl.enabled=true
      - xpack.security.transport.ssl.enabled=true      
    healthcheck:
      test: [ "CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -qE '\"status\":\"(yellow|green)\"'" ]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 10s
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - elastic-data:/elasticsearch/data
      - ../ssl/root/rootCA.crt:/usr/share/elasticsearch-8.14.0/config/root/rootCA.crt
      - ../ssl/elasticsearch/elasticsearch.crt:/usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.crt
      - ../ssl/elasticsearch/elasticsearch.key:/usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.key     
      - ../ssl/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch-8.14.0/config/elasticsearch.yml         

  
  kibana:
    container_name: kibana
    depends_on:
      elasticsearch:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:8.5.1
    ports:
      - "5601:5601"
    environment:
      #- SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=password
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=/usr/share/kibana/config/certs/root/rootCA.crt
      - XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTIONKEY}
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTIONKEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTIONKEY}
      - XPACK_REPORTING_KIBANASERVER_HOSTNAME=localhost
      - SERVER_SSL_ENABLED=true
      - SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/certs/kibana.crt
      - SERVER_SSL_KEY=/usr/share/kibana/config/certs/kibana.key      
      #- SERVER_SSL_CERTIFICATEAUTHORITIES=/usr/share/kibana/config/certs/root/rootCA.crt
    volumes:
      - ../ssl/kibana/kibana.crt:/usr/share/kibana/config/certs/kibana.crt  
      - ../ssl/kibana/kibana.key:/usr/share/kibana/config/certs/kibana.key
     # - ../ssl/kibana/kibana_pkcs8.key:/usr/share/kibana/config/certs/kibana_pkcs8.key
      - ../ssl/root/rootCA.crt:/usr/share/kibana/config/certs/root/rootCA.crt
      - ../ssl/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml    
    networks:
      - elastic_net
networks:
  elastic_net:
    driver: bridge

volumes:
  elastic-data:

Here is my kibana.yml file:
server.host: "0.0.0.0"
elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/config/certs/root/rootCA.crt
elasticsearch.ssl.verificationMode: certificate

Can someone help me figure out what the problem is?

Thanks
Rupa

Hi @Rupavathi Welcome to the community!

Try exec'ing into the Kibana container while it is running, run this, and show the results.

curl -v --cacert /usr/share/kibana/config/certs/root/rootCA.crt -u elastic https://elasticsearch:9200

Welcome to the forum @Rupavathi !

There is a bit of a version salad going on here: elasticsearch 8.14.0 (with 8.17.1 commented out) and kibana 8.5.1? Not your current issue, but be careful with that going forward.

Also recall that data is saved into the indices, which live in the persistent mount elastic-data:/elasticsearch/data if elasticsearch starts correctly. If you keep stopping and starting the containers while trying to get things to work, that does not mean you are starting from a clean slate each time. If you want to start from a clean slate each time, you need to clear out the data volume each time.
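
For example, a clean-slate restart with Docker Compose might look like this (a sketch; docker compose down -v removes the named volumes declared in the compose file, including elastic-data):

docker compose down -v      # stop the containers and remove the named elastic-data volume
docker compose up --build   # start again with an empty data directory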

Obviously, the settings in your mounted elasticsearch.yml and kibana.yml also need to precisely match everything else (e.g. correct paths) in the other settings.
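
For example, any path referenced inside elasticsearch.yml has to be the container-side path from the corresponding volume mount, not the host-side path (a sketch reusing the mounts from your compose file above):

# docker-compose.yml (host path : container path)
- ../ssl/root/rootCA.crt:/usr/share/elasticsearch-8.14.0/config/root/rootCA.crt

# elasticsearch.yml must then reference the container-side path
xpack.security.http.ssl.certificate_authorities: /usr/share/elasticsearch-8.14.0/config/root/rootCA.crt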

1 Like

Hi @stephenb & @RainTown
I tried cleaning up the volumes and ran docker compose again to recreate the containers. When I run the curl command, I hit an SSL issue.

$ curl -v --cacert /usr/share/kibana/config/certs/root/rootCA.crt -u elastic https://elasticsearch:9200
Enter host password for user 'elastic':
*   Trying 172.18.0.2:9200...
* TCP_NODELAY set
* Connected to elasticsearch (172.18.0.2) port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /usr/share/kibana/config/certs/root/rootCA.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: self signed certificate
* Closing connection 0
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

But I am able to get a response from Elasticsearch with the same rootCA. I don't understand why Kibana is throwing an error. Could you please help me? Thanks in advance.

Regards
Rupa

The above curl is the same basic connection method that Kibana uses.

It is failing on the certificate validation.

Show us exactly.

Run the exact same curl command from inside the elasticsearch container and show the command and output.

Also run this from inside the Kibana container; it will test without SSL verification:

curl -v -k -u elastic https://elasticsearch:9200

I tried running the command from the elasticsearch container.

$ curl -v --cacert /usr/share/elasticsearch/config/root/rootCA.crt -u elastic https://elasticsearch:9200
Enter host password for user 'elastic':
* Host elasticsearch:9200 was resolved.
* IPv6: (none)
* IPv4: 172.20.0.2
*   Trying 172.20.0.2:9200...
* Connected to elasticsearch (172.20.0.2) port 9200
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error setting certificate file: /usr/share/elasticsearch/config/root/rootCA.crt
* error setting certificate file: /usr/share/elasticsearch/config/root/rootCA.crt
* closing connection #0
curl: (77) error setting certificate file: /usr/share/elasticsearch/config/root/rootCA.crt

But when I hit the Elasticsearch URL https://localhost:9200 directly, after adding the certificates, I can see the response from Elasticsearch. The same certs are mounted into the docker container.

{
    "name": "elasticsearch",
    "cluster_name": "elasticsearch",
    "cluster_uuid": "HCfx0DAURUuOcXYUYTguoA",
    "version": {
        "number": "8.14.0",
        "build_flavor": "default",
        "build_type": "tar",
        "build_hash": "8d96bbe3bf5fed931f3119733895458eab75dca9",
        "build_date": "2024-06-03T10:05:49.073003402Z",
        "build_snapshot": false,
        "lucene_version": "9.10.0",
        "minimum_wire_compatibility_version": "7.17.0",
        "minimum_index_compatibility_version": "7.0.0"
    },
    "tagline": "You Know, for Search"
}

Apologies, my bad, I gave the wrong folder name for Elasticsearch. Here is the correct response I get when running the curl command from the Elasticsearch container.

curl -v --cacert /usr/share/elasticsearch-8.14.0/config/root/rootCA.crt -u elastic https://elasticsearch:9200
Enter host password for user 'elastic':
* Host elasticsearch:9200 was resolved.
* IPv6: (none)
* IPv4: 172.20.0.2
*   Trying 172.20.0.2:9200...
* Connected to elasticsearch (172.20.0.2) port 9200
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /usr/share/elasticsearch-8.14.0/config/root/rootCA.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: self-signed certificate
* closing connection #0
curl: (60) SSL certificate problem: self-signed certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the webpage mentioned above.

Apologies, I am confused about what worked, what didn't, and why.

In short you have a certs / CA issue.

It would have helped if you had shown the one that worked, with the complete command.

Now that I take a closer look: where did you set all the actual SSL settings for http and transport? In your elasticsearch.yml?

Perhaps share that.. and why are you using a mounted file for that? It might be easier to debug if you put all the settings as environment variables.
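
For example, the SSL settings could be passed as environment variables in the compose file instead of a mounted elasticsearch.yml; a sketch, assuming your custom image behaves like the official one (which accepts settings as environment variables) and reusing the container paths from your current mounts:

    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.certificate=/usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.crt
      - xpack.security.http.ssl.key=/usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.key
      - xpack.security.http.ssl.certificate_authorities=/usr/share/elasticsearch-8.14.0/config/root/rootCA.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.certificate=/usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.crt
      - xpack.security.transport.ssl.key=/usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.key
      - xpack.security.transport.ssl.certificate_authorities=/usr/share/elasticsearch-8.14.0/config/root/rootCA.crt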

You need to get the curls working otherwise nothing will work.

Perhaps you should start / bootstrap from a known working configuration.

Many people start with the reference docker-compose and iterate; you could start with that, take out the setup container, put your own certs / volumes in, and then edit it for your certs.

If you share the full story it will help.

The exact commands you used to generate your certificates
The actual contents of the kibana.yml and elasticsearch.yml
The current state of your docker-compose.yml
The boot up log (output from docker-compose up)

Also, as Stephen said, it's often just easier to start with the supplied config and then tune later. If you are new to elasticsearch, it's not trivial to set up the config files yourself from scratch.

I tried hitting the Elasticsearch URL from the Postman client.

I added the certificates in the Postman settings and I am able to get the response.

The response I am getting is below.

{
    "name": "elasticsearch",
    "cluster_name": "elasticsearch",
    "cluster_uuid": "HCfx0DAURUuOcXYUYTguoA",
    "version": {
        "number": "8.14.0",
        "build_flavor": "default",
        "build_type": "tar",
        "build_hash": "8d96bbe3bf5fed931f3119733895458eab75dca9",
        "build_date": "2024-06-03T10:05:49.073003402Z",
        "build_snapshot": false,
        "lucene_version": "9.10.0",
        "minimum_wire_compatibility_version": "7.17.0",
        "minimum_index_compatibility_version": "7.0.0"
    },
    "tagline": "You Know, for Search"
}

If I had an issue with the certificates, Postman should give the same problem, but it is working in Postman.

Here are the commands that I have used to generate the certs.

Elastic search:

openssl genpkey -algorithm RSA -out elasticsearch.key
openssl req -new -key elasticsearch.key -out elasticsearch.csr -subj "/C=IN/ST=STATE/L=CITY/O=COMPANY/OU=UNIT/CN=COMAPANY.com"
openssl x509 -req -in elasticsearch/elasticsearch.csr -CA root/rootCA.pem -CAkey root/rootCA.key -CAcreateserial -out elasticsearch/elasticsearch.crt -days 365 -sha256 -extfile elasticsearch/extfile_elasticsearch.cnf

Kibana:
openssl genpkey -algorithm RSA -out kibana.key
openssl req -new -key kibana.key -out kibana.csr -subj "/C=IN/ST=STATE/L=CITY/O=COMPANY/OU=UNIT/CN=COMAPANY.com"
openssl x509 -req -in kibana/kibana.csr -CA root/rootCA.pem -CAkey root/rootCA.key -CAcreateserial -out kibana/kibana.crt -days 365 -sha256 -extfile kibana/extfile_kibana.cnf

extfile_elasticsearch.cnf:
subjectAltName = DNS:elasticsearch, DNS:localhost, IP:127.0.0.1

extfile_kibana.cnf
subjectAltName = DNS:kibana, DNS:localhost, IP:127.0.0.1
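
As a quick sanity check on the generated certs, you can confirm the SANs actually made it into the signed certificate with openssl (run from the ssl directory; the expected output is based on the extfile above):

openssl x509 -in elasticsearch/elasticsearch.crt -noout -text | grep -A1 "Subject Alternative Name"
#     X509v3 Subject Alternative Name:
#         DNS:elasticsearch, DNS:localhost, IP Address:127.0.0.1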
Docker compose file:

services:
  elasticsearch:
    image: elasticsearch-8.14.0:latest    
    container_name: elasticsearch
    hostname: elasticsearch
    networks:
      - elastic_net
    environment:
      - discovery.type=single-node   
      - xpack.security.enabled=true # Enable security
      - xpack.security.http.ssl.enabled=true
      - xpack.security.transport.ssl.enabled=true      
    healthcheck:
      test: [ "CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -qE '\"status\":\"(yellow|green)\"'" ]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 10s
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - elastic-data:/elasticsearch/data
      - ../ssl/root/rootCA.crt:/usr/share/elasticsearch-8.14.0/config/root/rootCA.crt
      - ../ssl/elasticsearch/elasticsearch.crt:/usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.crt
      - ../ssl/elasticsearch/elasticsearch.key:/usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.key      
      - ../ssl/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch-8.14.0/config/elasticsearch.yml  
  
  kibana:
    container_name: kibana
    depends_on:
      elasticsearch:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:8.5.1
    ports:
      - "5601:5601"
    environment:
      #- SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=password
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=/usr/share/kibana/config/certs/root/rootCA.crt
      - XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTIONKEY}
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTIONKEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTIONKEY}
      - XPACK_REPORTING_KIBANASERVER_HOSTNAME=localhost
      - SERVER_SSL_ENABLED=true
      - SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/certs/kibana.crt
      - SERVER_SSL_KEY=/usr/share/kibana/config/certs/kibana.key      
      #- SERVER_SSL_CERTIFICATEAUTHORITIES=/usr/share/kibana/config/certs/root/rootCA.crt
    volumes:
      - ../ssl/kibana/kibana.crt:/usr/share/kibana/config/certs/kibana.crt  
      - ../ssl/kibana/kibana.key:/usr/share/kibana/config/certs/kibana.key
     # - ../ssl/kibana/kibana_pkcs8.key:/usr/share/kibana/config/certs/kibana_pkcs8.key
      - ../ssl/root/rootCA.crt:/usr/share/kibana/config/certs/root/rootCA.crt
      - ../ssl/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml    
    networks:
      - elastic_net
networks:
  elastic_net:
    driver: bridge

volumes:
  elastic-data:
.env

ELASTIC_PASSWORD=password
KIBANA_PASSWORD=password
ENCRYPTIONKEY=32 digits code
elasticsearch.yml

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  certificate: /usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.crt
  key: /usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.key
  certificate_authorities: /usr/share/elasticsearch-8.14.0/config/root/rootCA.crt
  keystore.password: "changeit" 
  client_authentication: required

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  certificate: /usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.crt
  key: /usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.key
  certificate_authorities: /usr/share/elasticsearch-8.14.0/config/root/rootCA.crt
  keystore.password: "changeit"
  truststore.password: "changeit"


# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
kibana.yml:
server.host: "0.0.0.0"
elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/config/certs/root/rootCA.crt
elasticsearch.ssl.verificationMode: certificate

Please help me

Thanks

Try commenting out client_authentication: required in your elasticsearch.yml. That setting requires the client to present a certificate, which Kibana is not set up to do.

Then try the curl again from the Kibana container to the elasticsearch container

In Postman you provided client certs... curl does not, and Kibana as it is set up does not either.
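
For comparison, a curl that does present a client certificate (roughly what Postman was doing) would look like this; a sketch, assuming Elasticsearch accepts client certs signed by the same rootCA and using the files already mounted into the Kibana container:

curl -v \
  --cacert /usr/share/kibana/config/certs/root/rootCA.crt \
  --cert /usr/share/kibana/config/certs/kibana.crt \
  --key /usr/share/kibana/config/certs/kibana.key \
  -u elastic https://elasticsearch:9200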

I tried commenting out the line that you suggested.

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  certificate: /usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.crt
  key: /usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.key
  certificate_authorities: /usr/share/elasticsearch-8.14.0/config/root/rootCA.crt
  keystore.password: "changeit"
  #key_passphrase: "changeit"
  #client_authentication: required

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  certificate: /usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.crt
  key: /usr/share/elasticsearch-8.14.0/config/elasticsearch/elasticsearch.key
  certificate_authorities: /usr/share/elasticsearch-8.14.0/config/root/rootCA.crt
  keystore.password: "changeit"
  truststore.password: "changeit"
 # key_passphrase: "changeit"

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

I still see the same issue with the curl.

First, please don't paste pictures of text. Please paste the text next time.

And to confirm, you commented out the line and restarted elasticsearch?

And from inside the Kibana container you can read that CA file?
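
A quick way to confirm that from the host (assuming the container names from your compose file):

docker exec -it kibana ls -l /usr/share/kibana/config/certs/root/rootCA.crt
docker exec -it kibana head -1 /usr/share/kibana/config/certs/root/rootCA.crt   # should print -----BEGIN CERTIFICATE-----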

Hmmmm. Just for debugging, try adding -k to the curl to see if it goes through.

Yes, I commented out client_authentication: required, removed all the images and the elastic data from Docker Desktop, rebuilt with docker compose up --build, and hit the API using curl from inside the kibana container. I can see the curl response when skipping the SSL certificate verification:

curl -k --cacert /usr/share/kibana/config/certs/root/rootCA.crt -u elastic https://elasticsearch:9200
Enter host password for user 'elastic':
{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "q8Zzb1CdRpibOa21sXQMgw",
  "version" : {
    "number" : "8.14.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "8d96bbe3bf5fed931f3119733895458eab75dca9",
    "build_date" : "2024-06-03T10:05:49.073003402Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Accessing http://localhost:5601/ from the browser gives me the error below:
Kibana server is not ready yet.

Kibana logs

2025-03-24 11:48:50 [2025-03-24T06:18:50.353+00:00][INFO ][node] Kibana process configured with roles: [background_tasks, ui]
2025-03-24 11:48:59 [2025-03-24T06:18:59.711+00:00][INFO ][plugins-service] Plugin "cloudExperiments" is disabled.
2025-03-24 11:48:59 [2025-03-24T06:18:59.725+00:00][INFO ][plugins-service] Plugin "profiling" is disabled.
2025-03-24 11:48:59 [2025-03-24T06:18:59.841+00:00][INFO ][http.server.Preboot] http server running at http://0.0.0.0:5601
2025-03-24 11:48:59 [2025-03-24T06:18:59.892+00:00][INFO ][plugins-system.preboot] Setting up [1] plugins: [interactiveSetup]
2025-03-24 11:48:59 [2025-03-24T06:18:59.940+00:00][WARN ][config.deprecation] The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set "xpack.reporting.roles.enabled" to "false" to adopt the future behavior before upgrading.
2025-03-24 11:49:00 [2025-03-24T06:19:00.235+00:00][INFO ][plugins-system.standard] Setting up [125] plugins: [translations,monitoringCollection,licensing,globalSearch,globalSearchProviders,features,mapsEms,licenseApiGuard,usageCollection,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,screenshotMode,banners,newsfeed,guidedOnboarding,fieldFormats,expressions,dataViews,embeddable,uiActionsEnhanced,charts,esUiShared,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,spaces,security,lists,files,encryptedSavedObjects,cloud,snapshotRestore,screenshotting,telemetry,licenseManagement,eventLog,actions,stackConnectors,console,bfetch,data,watcher,reporting,fileUpload,ingestPipelines,alerting,aiops,unifiedSearch,unifiedFieldList,savedSearch,savedObjects,graph,savedObjectsTagging,savedObjectsManagement,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,controls,eventAnnotation,dataViewFieldEditor,triggersActionsUi,transform,stackAlerts,ruleRegistry,discover,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,cloudSecurityPosture,discoverEnhanced,visualizations,canvas,visTypeXy,visTypeVislib,visTypeVega,visTypeTimeseries,rollup,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeHeatmap,visTypeMarkdown,dashboard,dashboardEnhanced,expressionXY,expressionTagcloud,expressionPartitionVis,visTypePie,expressionMetricVis,expressionLegacyMetricVis,expressionHeatmap,expressionGauge,lens,maps,dataVisualizer,cases,timelines,sessionView,kubernetesSecurity,observability,osquery,ml,synthetics,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,visTypeGauge,dataViewManagement]
2025-03-24 11:49:00 [2025-03-24T06:19:00.258+00:00][INFO ][plugins.taskManager] TaskManager is identified by the Kibana UUID: 2bbbd5e7-f6e5-4cec-ae25-a7364971df05
2025-03-24 11:49:00 [2025-03-24T06:19:00.361+00:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2025-03-24 11:49:00 [2025-03-24T06:19:00.362+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
2025-03-24 11:49:00 [2025-03-24T06:19:00.398+00:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2025-03-24 11:49:00 [2025-03-24T06:19:00.398+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
2025-03-24 11:49:00 [2025-03-24T06:19:00.408+00:00][WARN ][plugins.encryptedSavedObjects] Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2025-03-24 11:49:00 [2025-03-24T06:19:00.426+00:00][WARN ][plugins.actions] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2025-03-24 11:49:00 [2025-03-24T06:19:00.534+00:00][WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2025-03-24 11:49:00 [2025-03-24T06:19:00.536+00:00][WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
2025-03-24 11:49:00 [2025-03-24T06:19:00.542+00:00][WARN ][plugins.alerting] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
2025-03-24 11:49:00 [2025-03-24T06:19:00.605+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
2025-03-24 11:49:00 [2025-03-24T06:19:00.653+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
2025-03-24 11:49:01 [2025-03-24T06:19:01.276+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
2025-03-24 11:49:01 [2025-03-24T06:19:01.413+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. self signed certificate
2025-03-24 11:49:01 [2025-03-24T06:19:01.912+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell

But I want to test the API in secure mode. Also, Kibana does not start along with Elasticsearch when I do docker compose up --build; I have to restart Kibana manually from Docker Desktop.
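
One thing worth checking there: the elasticsearch healthcheck in your compose file still probes http://localhost:9200 even though HTTPS is enabled, so it may never report healthy and the depends_on: service_healthy condition for Kibana may never fire. A healthcheck sketch over HTTPS, modeled on the sample compose shown later in this thread and reusing your container paths:

    healthcheck:
      test: [ "CMD-SHELL", "curl -s --cacert /usr/share/elasticsearch-8.14.0/config/root/rootCA.crt https://localhost:9200 | grep -q 'missing authentication credentials'" ]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 10s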

Thanks for sharing the various info.

Just to check some things, right at start you wrote:

I have created my own company rootCA

but you did not share how you did that.

Second, the commands you shared did not actually work as written; probably you missed some "cd" commands in between. If I start from this:

$ find .
./kibana
./kibana/extfile_kibana.cnf
./root
./create-stuff
./elasticsearch
./elasticsearch/extfile_elasticsearch.cnf

and run this

subj="/C=IN/ST=STATE/L=CITY/O=COMPANY/OU=UNIT/CN=COMAPANY.com"
cd root
openssl genrsa -des3 -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 3650 -out rootCA.pem -subj "${subj}"
cd -
openssl genpkey -algorithm RSA -out elasticsearch.key
openssl req -new -key elasticsearch.key -out elasticsearch/elasticsearch.csr -subj "${subj}"
openssl x509 -req -in elasticsearch/elasticsearch.csr -CA root/rootCA.pem -CAkey root/rootCA.key -CAcreateserial -out elasticsearch/elasticsearch.crt -days 365 -sha256 -extfile elasticsearch/extfile_elasticsearch.cnf
openssl genpkey -algorithm RSA -out kibana.key
openssl req -new -key kibana.key -out kibana/kibana.csr -subj "${subj}"
openssl x509 -req -in kibana/kibana.csr -CA root/rootCA.pem -CAkey root/rootCA.key -CAcreateserial -out kibana/kibana.crt -days 365 -sha256 -extfile kibana/extfile_kibana.cnf

After typing the passphrase a bunch of times, I end up with:

$ find .
.
./elasticsearch.key
./kibana
./kibana/kibana.csr
./kibana/extfile_kibana.cnf
./kibana/kibana.crt
./kibana.key
./root
./root/rootCA.srl
./root/rootCA.pem
./root/rootCA.key
./create-stuff
./elasticsearch
./elasticsearch/elasticsearch.csr
./elasticsearch/elasticsearch.crt
./elasticsearch/extfile_elasticsearch.cnf

which seems to match your structure. Does that look about right?

You still didn't explain the reasoning behind the version salad - why are you attracted to elasticsearch-8.14.0 / kibana:8.5.1 ?

You didn't share the "docker-compose up" output from a clean slate.

This is my folder structure

C:.

|
+---docker-compose
|       .env
|       docker-compose - Copy.yml
|       docker-compose.yml
|
+---ssl
|   |
|   +---elasticsearch
|   |   |   elasticsearch.crt
|   |   |   elasticsearch.csr
|   |   |   elasticsearch.key
|   |   |   elasticsearch_pkcs8.key
|   |   |   extfile_elasticsearch.cnf
|   |   |
|   |   +---config
|   |           elasticsearch.yml
|   |
|   +---kibana
|   |   |   extfile_kibana.cnf
|   |   |   kibana.crt
|   |   |   kibana.csr
|   |   |   kibana.key
|   |   |   kibana_pkcs8.key
|   |   |
|   |   +---config
|   |           kibana.yml
|   |
|   +---root
|           rootCA.crt
|           rootCA.key
|           rootCA.pem
|           rootCA.srl

We have done security scanning of elasticsearch-8.14.0. For Kibana I just selected 8.5.1 (no particular reason). Please let me know if there is a compatible version of Kibana for elasticsearch 8.14.0, and I'll try with that version.

Since Kibana is a server as well as a client, do we have to pass the kibana.crt and kibana.key files in elasticsearch.yml or kibana.yml?

Thanks

Hi @Rupavathi

No you do not...

I wanted to see..

curl -v -k --cacert..... <<<< THIS

Seeing the verbose output (-v) WITH the -k will tell us something more.

In the end, there is something not OK with how your CA works with Elasticsearch. Elasticsearch and Kibana really do not do anything special with CAs and certs.

If curl does not work with --cacert, Kibana will not work, and none of the other client connections will work either...

Here is a sample docker compose and .env that works. It has a minimal setup: no manual passing-in of certs and keys, and it uses --cacert to validate, etc.

.env

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=mypassword

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=mypassword

# Version of Elastic products
STACK_VERSION=8.17.3

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200

# Port to expose Kibana to the host
KIBANA_PORT=5601

KB_ENCRYPTIONKEY=1234-5678-9012-3456-9999-8888-1111-0000

# Increase or decrease based on the available host memory (in bytes)
# 1GB
MEM_LIMIT=1073741824 

# For discuss-single
ES_MEM_LIMIT=1073741824 
KB_MEM_LIMIT=1073741824 

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject

docker-compose-single.yml

# version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${KB_ENCRYPTIONKEY}
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local

docker compose -f docker-compose-single.yml up

I don't know exactly where you are going wrong, but I think it's in more than one place. Mixing config between the environment section and the actual elasticsearch.yml file seems like a bad idea, I'm not sure the mounts with the version number in the path are right, and the elasticsearch.yml file is for sure mounted in the wrong place.
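
If you want to verify which config directory actually exists inside the custom image, a quick check from the host (container name taken from the compose file above):

docker exec elasticsearch ls -d /usr/share/elasticsearch/config
docker exec elasticsearch ls -d /usr/share/elasticsearch-8.14.0/config
# whichever of these exists (and contains elasticsearch.yml) is where the mounts need to point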

I tried to simplify things and it worked pretty quickly, just creating a working Elasticsearch instance with the generated keys/CA.

This is my directory tree

$ tree -a .
.
├── create-stuff
├── docker-compose
│   ├── docker-compose.yml
│   └── .env
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   ├── elasticsearch.crt
│   ├── elasticsearch.csr
│   └── elasticsearch.key
├── extfile_elasticsearch.cnf
├── extfile_kibana.cnf
└── root
    ├── rootCA.crt
    ├── rootCA.key
    ├── rootCA.pem
    └── rootCA.srl

I created the keys with these commands from create-stuff

$ cat create-stuff
subj="/C=IN/ST=STATE/L=CITY/O=COMPANY/OU=UNIT/CN=COMAPANY.com"
#
# root
#
mkdir -p root
openssl genpkey -aes256 -algorithm RSA -out root/rootCA.key -pkeyopt rsa_keygen_bits:4096
openssl req -x509 -new -nodes  -key root/rootCA.key -sha256 -days 3650 -out root/rootCA.crt -subj "${subj}"
cat root/rootCA.crt root/rootCA.key > root/rootCA.pem
#
# elasticsearch
#
mkdir -p elasticsearch
openssl genpkey -algorithm RSA -out elasticsearch/elasticsearch.key
openssl req -new -key elasticsearch/elasticsearch.key -out elasticsearch/elasticsearch.csr -subj "${subj}"
openssl x509 -req -in elasticsearch/elasticsearch.csr -CA root/rootCA.pem -CAkey root/rootCA.key -CAcreateserial -out elasticsearch/elasticsearch.crt -days 365 -sha256 -extfile extfile_elasticsearch.cnf

I used a simplified docker-compose file

$ cat docker-compose/docker-compose.yml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.0
    container_name: elasticsearch
    hostname: elasticsearch
    networks:
      - elastic_net
    environment:
      - discovery.type=single-node
      - ELASTIC_PASSWORD=$ELASTIC_PASSWORD
    ports:
      - "9200:9200"
    volumes:
      - /home/kevin/discuss-thread-home/elastic-data:/elasticsearch/data
      - /home/kevin/discuss-thread-home/root/rootCA.crt:/usr/share/elasticsearch/config/root/rootCA.crt
      - /home/kevin/discuss-thread-home/elasticsearch/elasticsearch.crt:/usr/share/elasticsearch/config/elasticsearch/elasticsearch.crt
      - /home/kevin/discuss-thread-home/elasticsearch/elasticsearch.key:/usr/share/elasticsearch/config/elasticsearch/elasticsearch.key
      - /home/kevin/discuss-thread-home/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml

networks:
  elastic_net:
    driver: bridge

volumes:
  elastic-data:

the password is in the .env file

$ cat docker-compose/.env
ELASTIC_PASSWORD=xxx

The elasticsearch.yml file is mounted in the right place and contains:

$ cat elasticsearch/config/elasticsearch.yml
# Enable security features
xpack.security.enabled: true

xpack.security.transport.ssl:
  enabled: true
  certificate: /usr/share/elasticsearch/config/elasticsearch/elasticsearch.crt
  key: /usr/share/elasticsearch/config/elasticsearch/elasticsearch.key
  certificate_authorities: /usr/share/elasticsearch/config/root/rootCA.crt

xpack.security.http.ssl:
  enabled: true
  certificate: /usr/share/elasticsearch/config/elasticsearch/elasticsearch.crt
  key: /usr/share/elasticsearch/config/elasticsearch/elasticsearch.key
  certificate_authorities: /usr/share/elasticsearch/config/root/rootCA.crt

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

and curl now works with the --cacert option (and without needing -k):

$ curl -v --cacert ./root/rootCA.crt -u elastic:xxx https://localhost:9200
* Host localhost:9200 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:9200...
* Connected to localhost (::1) port 9200
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: ./root/rootCA.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
*  subject: C=IN; ST=STATE; L=CITY; O=COMPANY; OU=UNIT; CN=COMAPANY.com
*  start date: Mar 24 16:09:02 2025 GMT
*  expire date: Mar 24 16:09:02 2026 GMT
*  subjectAltName: host "localhost" matched cert's "localhost"
*  issuer: C=IN; ST=STATE; L=CITY; O=COMPANY; OU=UNIT; CN=COMAPANY.com
*  SSL certificate verify ok.
*   Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
*   Certificate level 1: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* using HTTP/1.x
* Server auth using Basic with user 'elastic'
> GET / HTTP/1.1
> Host: localhost:9200
> Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==
> User-Agent: curl/8.5.0
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/1.1 200 OK
< X-elastic-product: Elasticsearch
< content-type: application/json
< content-length: 541
<
{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "CTbrfA0_Q3W0VBtZPMz0mw",
  "version" : {
    "number" : "8.14.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8d96bbe3bf5fed931f3119733895458eab75dca9",
    "build_date" : "2024-06-03T10:05:49.073003402Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
* Connection #0 to host localhost left intact

I leave kibana integration as an exercise for the reader.
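
For completeness, a rough sketch of a matching kibana service (not tested here; it pins Kibana to the same 8.14.0 version, reuses the rootCA mount path style from the compose above, and assumes the kibana_system password has already been set and placed in the .env file as KIBANA_PASSWORD):

  kibana:
    image: docker.elastic.co/kibana/kibana:8.14.0
    container_name: kibana
    depends_on:
      - elasticsearch
    networks:
      - elastic_net
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=https://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=/usr/share/kibana/config/certs/root/rootCA.crt
    volumes:
      - /home/kevin/discuss-thread-home/root/rootCA.crt:/usr/share/kibana/config/certs/root/rootCA.crt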

Selected from where? That's a pretty depressing admission, by the way. You were not prepared to put a bit more effort and research in? That's pretty poor IMO.

1 Like

While creating the elasticsearch.crt/elasticsearch.key files, did you give a passphrase/password? If so, do we have to pass the passphrase/password in the elasticsearch.yml file?

Yes, I used a passphrase when creating my rootCA, and no, you don't have to include the passphrase in the yml file.
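
As an aside, if the node key itself (elasticsearch.key) were passphrase-protected, the usual approach is to store that passphrase in the Elasticsearch keystore rather than in elasticsearch.yml; a sketch, assuming the standard elasticsearch-keystore tool inside the container (not needed for the keys generated above, which are unencrypted):

bin/elasticsearch-keystore add xpack.security.http.ssl.secure_key_passphrase
bin/elasticsearch-keystore add xpack.security.transport.ssl.secure_key_passphrase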