How to properly use a publicly signed certificate in Kibana to communicate with Elasticsearch?

My elasticsearch.yml configuration:

xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/certificates.p12

The following procedure was used to create certificates.p12:

> cat private-key.key certificate.crt > cert_and_key.pem
> openssl pkcs12 -export -in cert_and_key.pem -out certificates.p12
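An alternative sketch, using the same file names as above (the password is a placeholder): passing the CA bundle via `-certfile` stores the intermediate/root certificates in the keystore too, so Elasticsearch can present the full chain. A missing intermediate is a classic cause of "unable to get issuer certificate".

```shell
# -certfile adds the AlphaSSL intermediate/root bundle to the keystore
# so the server can send the complete certificate chain.
# "changeit" is an example password only.
openssl pkcs12 -export \
  -in certificate.crt \
  -inkey private-key.key \
  -certfile alphasslrootcabundle.crt \
  -passout pass:changeit \
  -out certificates.p12
```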

The command below works fine to test the SSL connection to Elasticsearch:

curl --cacert /etc/elasticsearch/certs/certificate.crt -u elastic:+NP2Lg47d42NE

So here we have three files, which have been signed by AlphaSSL.


So I converted alphasslrootcabundle.crt to PEM using:

openssl x509 -in alphasslrootcabundle.crt -out alphasslrootcabundle.crt.pem -outform PEM
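As a sanity check (paths as above), you can confirm that the converted bundle actually validates the leaf certificate:

```shell
# Prints "certificate.crt: OK" when the bundle contains the full
# issuer chain for the leaf certificate.
openssl verify -CAfile alphasslrootcabundle.crt.pem certificate.crt
```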

Below is my kibana.yml configuration:

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
elasticsearch.ssl.certificate: /etc/kibana/certs/certficate.crt
elasticsearch.ssl.key: /etc/kibana/certs/private-key.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/alphasslrootcabundle.pem

But I am still getting the error below:

Oct 14 22:15:21 drsite kibana[92231]: [2023-10-14T22:15:20.999+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. unable to get issuer certificate

I need help understanding which certificate file I should use in which location in Kibana to avoid this error.

Thanks for the help




You normally do not need these unless you're enforcing client authentication on the Elasticsearch side.

What version?

Can you show your entire kibana.yml please?

Also, when you looked at the kibana logs, did you see any other errors?

Hi, I think I typed the wrong thing above about "alphasslrootcabundle.crt.pem".
Please find my actual file below.

# For more configuration options see the configuration guide for Kibana in

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes. "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

xpack.encryptedSavedObjects.encryptionKey: 3a1ee5387ad2822875d90b209414faaa
xpack.reporting.encryptionKey: 4640a4ae2e013a7c5da80f9bdc80b42e d236d83417c02764baf855b57365932c

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: [""]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "elastic"
#elasticsearch.password: "xxxxxx"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
elasticsearch.serviceAccountToken: "eyJ2ZXIiOiI4LjEwLjMiLCJhZHIiOlsiODIuMTY1LjIwNS4xNjA6OTIwMCJdLCJmZ3IiOiJkNGU1NzlmNTk4MjUyOWMxNDBkNmRlNGRkNmExMzVlNDFiYjI3NjI1MTM2YWFlNjFlOTBjM2EyNzA1NDliYzAwIiwia2V5IjoiM2RjMEs0c0JMZHVLQmtuU3loeFA6NnZ6SEZLc3NRQTJiVnRCcnhzRWRldyJ9"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
elasticsearch.ssl.certificate: /etc/kibana/certs/
elasticsearch.ssl.key: /etc/kibana/certs/

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/alphasslrootcabundle.pem

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file

# Logs queries sent to Elasticsearch.
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

Below is my elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
# Please consult the documentation for further information on configuration options:
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
# my-application
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
# node-1
# Add custom attributes to the node:
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
# /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
#bootstrap.memory_lock: true
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.
# ---------------------------------- Network -----------------------------------
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
http.port: 9200
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.seed_hosts: ["host1", "host2"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
#cluster.initial_master_nodes: ["node-1", "node-2"]
# For more information, consult the discovery and cluster formation module documentation.
# ---------------------------------- Various -----------------------------------
# Allow wildcard deletion of indices:
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 13-10-2023 22:26:47
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/certificates.p12
  # Paths to the certificate and private key files

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["drsite"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

I am using public domain, so Certificate has been signed by public authority.

root@drsite:/etc/kibana/certs# curl --cacert /etc/elasticsearch/certs/ -u elastic:xxxxxx

{
  "name" : "drsite",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "YDr9BOEFTFiIugCUz4YtPQ",
  "version" : {
    "number" : "8.10.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "c63272efed16b5a1c25f3ce500715b7fddf9a9fb",
    "build_date" : "2023-10-05T10:15:55.152563867Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Comment these out in kibana.yml

What version?

Did you see any other errors?

Try setting that exactly the same as your curl... It should be the same

Hi, thanks.
I tried both combinations below; still the same error.

root@drsite:/etc/kibana/certs# cat /etc/kibana/kibana.yml | grep elasticsearch.ssl.certificateAuthorities
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/
root@drsite:/etc/kibana/certs# cat /etc/kibana/kibana.yml | grep elasticsearch.ssl.certificateAuthorities
elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/

Thanks for the help

If I look at the documentation:

elasticsearch.ssl.certificateAuthorities: $KBN_PATH_CONF/elasticsearch-ca.pem

That is for a self-signed certificate; when using a publicly signed certificate, which one are we supposed to use?

What version please.... I am asking for a specific reason.

Your public one... Assuming you use the public certificates on the elasticsearch HTTP SSL setup.

Does the curl work without the cacert?

You should use the same one that worked with your curl command.

Also, are you running the curl from the Kibana server to make sure there's connectivity to the elasticsearch server...

You can try the curl from the Kibana Server with the elastic user also put -v for verbose ...
If you're using 8.x, you have to use the kibana_system user....
That's one of the reasons I'm asking for the version.

Run that and share the results.

Then use the same username and credentials in kibana.yml.

Also, have you seen any other errors in the logs? It might not be the certificates.

If it's truly signed by a trusted public authority like Let's Encrypt, you don't even need to include the certificate authorities.

Does the curl run with no CA?

If not then you do have to include it.

I ask because when I create my public cert from, say, Let's Encrypt, I don't need to include the CAs since they are signed by a publicly trusted authority.

What version please.... I am asking for a specific reason.
My Elasticsearch version is "number" : "8.10.3".

"Does the curl work without the cacert?"
No, I need to pass the cert; see the output below.
With --cacert it works:

root@drsite:~# curl --cacert /etc/elasticsearch/certs/ -u elastic:+xxxxxxx*xsloiL
{
  "name" : "drsite",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "YDr9BOEFTFiIugCUz4YtPQ",
  "version" : {
    "number" : "8.10.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "c63272efed16b5a1c25f3ce500715b7fddf9a9fb",
    "build_date" : "2023-10-05T10:15:55.152563867Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Below is without --cacert:

root@drsite:~# curl -u elastic:+xxxx*xsloiL
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: curl - SSL CA Certificates

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

"Does the curl run with no CA?"
No, it's not working.

I created a CSR, then signed it with a public authority for a wildcard domain.

Output with "You can try the curl from the Kibana server with the elastic user, also put -v for verbose":

root@drsite:~#   curl --cacert /etc/elasticsearch/certs/ -u elastic:+NP2Lg4xxxxxxL -v
*   Trying
* Connected to ( port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/elasticsearch/certs/
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=*
*  start date: Nov  5 10:26:10 2022 GMT
*  expire date: Dec  7 10:26:09 2023 GMT
*  subjectAltName: host "" matched cert's "*"
*  issuer: C=BE; O=GlobalSign nv-sa; CN=AlphaSSL CA - SHA256 - G2
*  SSL certificate verify ok.
* Server auth using Basic with user 'elastic'
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/1.1
> Host:
> Authorization: Basic ZWxhc3RpYzorTlAyTGc0N2Q0Mk5FKnhzbG9pTA==
> User-Agent: curl/7.81.0
> Accept: */*
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-elastic-product: Elasticsearch
< content-type: application/json
< content-length: 530
{
  "name" : "drsite",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "YDr9BOEFTFiIugCUz4YtPQ",
  "version" : {
    "number" : "8.10.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "c63272efed16b5a1c25f3ce500715b7fddf9a9fb",
    "build_date" : "2023-10-05T10:15:55.152563867Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
* Connection #0 to host left intact

Hi @Fosiul_Alam

Good info thanks

Let's try another test,

Do the curl -v from the kibana server but use the -u kibana_system:laskdjfhasldkfjh user.

If you do not have the kibana_system user password, reset it with the following command from the Elasticsearch server:

bin/elasticsearch-reset-password -u kibana_system

Then in the kibana.yml use

elasticsearch.username: "kibana_system"
elasticsearch.password: "xxxxxx"

Comment out

# elasticsearch.serviceAccountToken: "eyJ2ZXI...

I noticed you had

#elasticsearch.username: "elastic"
#elasticsearch.password: "xxxxxx"

This means you may have tried the elastic user. That will not work: you cannot use the elastic user to connect Kibana to Elasticsearch; that is NOT valid in 8.x.

You’ll configure Kibana to use the built-in kibana_system user and the password that you created earlier. Kibana performs some background tasks that require use of the kibana_system user.

The default in kibana.yml is the following, so you need to use kibana_system to connect:

#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

So try the curl, then try kibana_system and the password, and comment out the service token.

And finally, did you look earlier in the Kibana logs when you started it?

Unable to retrieve version information from Elasticsearch nodes. unable to get issuer certificate

This error usually comes later in the logs; earlier in the log you should see a failed connection attempt or an authentication failure. Those other error messages would be very useful.

Look for other errors and report them.

ALSO in kibana.yml

Controls the verification of the server certificate that Kibana receives when making an outbound SSL/TLS connection to Elasticsearch. Valid values are "full", "certificate", and "none". Using "full" performs hostname verification, using "certificate" skips hostname verification, and using "none" skips verification entirely. Default: "full"

so you can try

elasticsearch.ssl.verificationMode: none

That should tell us if it is a cert issue or authentication issue...

Then you can work from that.
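If it does turn out to be a cert rather than an auth issue, a minimal sketch of the relevant kibana.yml block for a publicly signed server certificate would look like the following. With a public CA you normally only need the CA bundle; the client certificate/key settings are for mutual TLS. Host, password, and paths here are placeholders:

```yaml
elasticsearch.hosts: ["https://your-es-host.example.com:9200"]   # placeholder
elasticsearch.username: "kibana_system"
elasticsearch.password: "xxxxxx"
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/alphasslrootcabundle.pem"]
#elasticsearch.ssl.verificationMode: full   # the default
```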

Hi @stephenb, thanks for the advice. This is odd; here is what I get when I execute the command:

root@drsite:/etc/kibana/certs# /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system

08:27:22.670 [main] WARN  org.elasticsearch.common.ssl.DiagnosticTrustManager - failed to establish trust with server at [82.xxxxx.160]; the server provided a certificate with subject name [CN=*], fingerprint [5ecff58b770b927dc095a95f6b145f5a53e5f023], keyUsage [digitalSignature, keyEncipherment] and extendedKeyUsage [serverAuth, clientAuth]; the certificate is valid between [2022-11-05T10:26:10Z] and [2023-12-07T10:26:09Z] (current time is [2023-10-16T08:27:22.666701885Z], certificate dates are valid); the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [DNS:*,]; the certificate is issued by [CN=AlphaSSL CA - SHA256 - G2,O=GlobalSign nv-sa,C=BE] but the server did not provide a copy of the issuing certificate in the certificate chain; this ssl context ([ (with trust configuration: Composite-Trust{JDK-trusted-certs,StoreTrustConfig{path=certs/certificates.p12, password=<empty>, type=PKCS12, algorithm=PKIX}})]) is not configured to trust that issuer but trusts [97] other issuers No subject alternative names matching IP address found
        at org.elasticsearch.common.ssl.DiagnosticTrustManager.checkServerTrusted( ~[?:?]
        at$T13CertificateConsumer.checkServerCerts( ~[?:?]
        at$T13CertificateConsumer.onConsumeCertificate( ~[?:?]
        at$T13CertificateConsumer.consume( ~[?:?]
        ...
        at org.elasticsearch.xpack.core.common.socket.SocketAccess.lambda$doPrivileged$0( ~[?:?]
        at org.elasticsearch.xpack.core.common.socket.SocketAccess.doPrivileged( ~[?:?]
        ...
        at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute( ~[elasticsearch-8.10.3.jar:8.10.3]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling( ~[elasticsearch-cli-8.10.3.jar:8.10.3]
        at org.elasticsearch.cli.Command.main( ~[elasticsearch-cli-8.10.3.jar:8.10.3]
        at org.elasticsearch.launcher.CliToolLauncher.main( ~[cli-launcher-8.10.3.jar:8.10.3]
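The telling part of that warning is "the server did not provide a copy of the issuing certificate in the certificate chain". You can check what the keystore actually contains; if the AlphaSSL intermediate is missing from it, Elasticsearch can only serve the leaf. The path and password below are assumptions:

```shell
# List the certificate subjects stored in the keystore; both the leaf
# and the issuing CA(s) should appear in this output.
openssl pkcs12 -info -in /etc/elasticsearch/certs/certificates.p12 \
  -nokeys -passin pass:changeit | grep subject
```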

per the docs

/usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system --url

Specifies the base URL (hostname and port of the local node) that the tool uses to submit API requests to Elasticsearch. The default value is determined from the settings in your elasticsearch.yml file. If xpack.security.http.ssl.enabled is set to true, you must specify an HTTPS URL.

Also did you look for more errors in the logs? There should be some

Also, did you try

elasticsearch.ssl.verificationMode: none

Thanks @stephenb ,

Actually, this command was the key one:

/usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system --url

and I have used this:

elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/http_ca.crt" ]

So far, what I have done is:

a) reinstalled Elastic and Kibana
b) I am using the CA below, which was created at the time of the Elasticsearch install
c) ran the command below:

/usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system

d) in kibana.yml:

elasticsearch.username: "kibana_system"
elasticsearch.password: "AcV+U=33333+ZQ"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/http_ca.crt" ]

e) Created a proxy server
ProxyPass / http://82.1xxxx160:5601/
ProxyPassReverse / http://82.xxxx.160:5601/

f) now Kibana is opening via the proxy perfectly

But the only problem is: when I create a client like Metricbeat,
I will have to use
but with this... I will still have to use https://myip:9200.
Do you think that is secure enough? Or should I retry and use my domain instead of the IP?

Hi @Fosiul_Alam,

It is unclear to me why you need a proxy to open Kibana. I am not an expert with proxies, but you should not need one unless something is preventing Kibana from opening.

Kibana can be bound to the network interface and can be accessed directly, but if it works in your environment, great.

Metricbeat should be able to connect directly to Elasticsearch using the Elasticsearch HTTPS endpoint, the same http_ca.crt, and some form of authentication (username/password, API key, etc.).

Metricbeat will want to connect to Kibana to load dashboards; it is unclear to me what effect the proxy will have.

If you are using authentication + HTTPS + certificate validation, then yes, that should be secure, as those are the normal security best practices.

What effect putting a proxy in has on security, I cannot comment on.

Hi @stephenb

Thanks, I just re-created Elastic and Kibana from scratch, and I finally made it work.

/usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system --url


elasticsearch.ssl.verificationMode: none

So I guess the issue is in the certificate, but at least it is working now.

The reason I am planning to use a proxy is that so far I don't know how to use SSL in Kibana.

I am not finding a good document explaining how to enable SSL on the Kibana link so that when I open it, it becomes HTTPS.

Thanks for your great help
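For reference, enabling HTTPS on Kibana itself uses the server.ssl.* settings that appear commented out earlier in kibana.yml; a minimal sketch with the same publicly signed pair (paths and host are assumptions):

```yaml
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/certificate.crt
server.ssl.key: /etc/kibana/certs/private-key.key
server.publicBaseUrl: "https://your-kibana-host.example.com:5601"   # placeholder
```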


Hi @stephenb
Actually, you are right, I don't need the proxy.
I just configured Kibana with SSL, so I can open it without the proxy.

So for now I am good.
Thanks for all the help !!


Did you turn that back on to see if it works?
It should... If it does not, you should see a connection error in the Kibana log.

One thought: I have seen problems when people use an /etc/hosts file to resolve instead of "real" DNS. Are you using /etc/hosts to resolve? The reason is that Kibana then connects via IP (not via FQDN, because the IP is substituted before the connection is established), which then fails the subject test for the cert... but I do not think this is it, because your curl works.