Convert .p12 to .pem for Kibana

Hi,
I have a question so I can continue my work.

I'm using ES 7.1 and Kibana 7.1.

I want to encrypt Kibana's communication using HTTPS.

I'm using a .p12 (PKCS#12) file in Elasticsearch, but Kibana doesn't support that format, so I need a .pem file. However, I don't know how to convert .p12 to .pem.

How can I convert it?
I'm sorry, my English is not good.
Thanks


You need to export the CA certificate from the PKCS#12 file in order to use it in Kibana's elasticsearch.ssl.certificateAuthorities.

Assuming your PKCS#12 is named elastic-certificates.p12, you can use

```sh
openssl pkcs12 -in elastic-certificates.p12 -cacerts -nokeys -out elastic-ca.pem
```

and use elastic-ca.pem in Kibana.
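
For reference, the relevant kibana.yml settings would then look roughly like this (a sketch; `your-es-host` is a placeholder for your Elasticsearch address, and the PEM path is an assumption):

```yaml
# Point Kibana at the HTTPS endpoint of Elasticsearch
elasticsearch.hosts: ["https://your-es-host:9200"]
# Trust the CA that signed Elasticsearch's certificate
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/elastic-ca.pem"]
```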


Thanks, I tried that, but I got an error: Kibana can't find Elasticsearch.

Working process:

```sh
openssl pkcs12 -in elastic-certificates.p12 -cacerts -nokeys -out elastic-ca.pem
```

/etc/elasticsearch/ (screenshot of the directory listing)

/etc/kibana/ (screenshot of the directory listing)

In kibana.yml (screenshot of the configuration)

This is the error message (screenshot of the error)

If you need more info, just tell me, please.

Please don't post images of text, as they are hard to read, may not display correctly for everyone, and are not searchable.

Instead, paste the text and format it with the </> icon, and check the preview window to make sure it's properly formatted before posting it. This makes it more likely that your question will receive a useful answer.

What is your configuration in elasticsearch.yml?
Is your Elasticsearch node running? Are there any logs in elasticsearch.log that indicate an error?

It would be great if you could update your post to solve this.

Sorry, I didn't know that.
This is my elasticsearch.yml; Elasticsearch was still running when I changed kibana.yml.

```yaml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
http.host: 192.168.0.92
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.keystore.password: "qlalfqjsgh12!"
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.password: "qlalfqjsgh12!"

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: elastic-certificates.p12
xpack.security.http.ssl.keystore.password: "qlalfqjsgh12!"
xpack.security.http.ssl.truststore.path: elastic-certificates.p12
xpack.security.http.ssl.truststore.password: "qlalfqjsgh12!"
```
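
(As an aside: instead of keeping these passwords in plain text in elasticsearch.yml, Elasticsearch can store them in its secure keystore via the secure_password variants of these settings. A sketch, assuming the default package install path:)

```sh
# Each command prompts for the password and stores it in the Elasticsearch keystore
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password
```

The plain *.password lines can then be removed from elasticsearch.yml.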

And in my elasticsearch.log there is nothing about it; as far as I can see, it's just a warning:

```
[2019-05-29T15:34:40,088][WARN ][o.e.h.AbstractHttpServerTransport] [localhost.localdomain] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/192.168.0.92:9200, remoteAddress=/192.168.0.92:44374}
java.io.IOException
```
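
(For reference: this warning typically appears when a client speaks plain HTTP to the TLS-enabled port or fails the TLS handshake. One way to confirm that the HTTPS endpoint itself works is a curl check like the following; a sketch, assuming the elastic user's password set earlier with elasticsearch-setup-passwords:)

```sh
# Should print the cluster banner if TLS and the CA file are correct;
# curl prompts for the elastic user's password
curl --cacert /etc/kibana/elastic-ca.pem -u elastic https://192.168.0.92:9200
```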

Please use the </> button or backticks (```) to format your post when it contains long snippets of code or configuration, as it's really hard to read otherwise 🙂 You can check the preview window on the right when you write your post to see how it looks.

Please verify that the certificate has been exported correctly, and share the output (again, use the </> button or backticks) of:

```sh
cat /etc/kibana/elastic-ca.pem
ls -l /etc/kibana/elastic-ca.pem
openssl x509 -in elastic-ca.pem -text -noout
```
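
A correctly exported file should contain at least one certificate block delimited like this:

```
-----BEGIN CERTIFICATE-----
(base64-encoded certificate data)
-----END CERTIFICATE-----
```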

If the file (elastic-ca.pem) looks empty, please try to export it again using

```sh
openssl pkcs12 -in elastic-certificates.p12 -clcerts -nokeys -out elastic-ca.pem
```

(this is slightly different from the one I shared earlier; it uses -clcerts instead of -cacerts) and try again.
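
If you are unsure what the PKCS#12 file actually contains, you can also list its contents without extracting any keys (you will be prompted for the keystore password):

```sh
openssl pkcs12 -in elastic-certificates.p12 -info -nokeys
```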

When I tried the third command,

```sh
openssl x509 -in elastic-ca.pem -text -noout
```

the file is not empty; it has a serial number and a public key in it.

Ah... I have a password on my elastic-certificates.p12 file. Could that cause trouble?

No, Kibana doesn't read the .p12 file either way.

Are you sure there is nothing more in the Kibana logs? Try setting logging.verbose: true in kibana.yml and restart Kibana. Then please share the Kibana logs.

I'm so sorry... you mean

in kibana.yml:

```yaml
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
logging.verbose: true
```

and restart Kibana.

Kibana logs? Are they different from /var/log/elasticsearch/elasticsearch.log?

I restarted 3 times, but no kibana.log shows up in the /var/log folder.

By default Kibana only logs to stdout, but for troubleshooting it will be easier to get all the logs in a file, so set

```yaml
logging.dest: /var/log/kibana.log
```

in kibana.yml and restart Kibana.
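
(One caveat: the user the Kibana service runs as needs write access to that file. If Kibana fails to start after this change, pre-creating the file with the right owner is a likely fix; a sketch, assuming the service runs as the kibana user:)

```sh
touch /var/log/kibana.log
chown kibana:kibana /var/log/kibana.log
```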

When I changed it to

```yaml
logging.dest: /var/log/kibana.log
```

it caused an error:

```
[root@localhost kibana]# systemctl status kibana -l
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2019-05-29 17:07:25 KST; 10s ago
Process: 11936 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 11936 (code=exited, status=1/FAILURE)

May 29 17:07:25 localhost.localdomain systemd[1]: Unit kibana.service entered failed state.
May 29 17:07:25 localhost.localdomain systemd[1]: kibana.service failed.
May 29 17:07:25 localhost.localdomain systemd[1]: kibana.service holdoff time over, scheduling restart.
May 29 17:07:25 localhost.localdomain systemd[1]: Stopped Kibana.
May 29 17:07:25 localhost.localdomain systemd[1]: start request repeated too quickly for kibana.service
May 29 17:07:25 localhost.localdomain systemd[1]: Failed to start Kibana.
May 29 17:07:25 localhost.localdomain systemd[1]: Unit kibana.service entered failed state.
May 29 17:07:25 localhost.localdomain systemd[1]: kibana.service failed.
```

I'm so confused. It doesn't work for me. What's wrong with my setup... haha

Let's focus on the original problem. Remove

```yaml
logging.dest: /var/log/kibana.log
```

and then start Kibana and check for errors.

Okay, I removed it and restarted Kibana,
and then the status looks like this:

```
[root@localhost kibana]# systemctl status kibana -l
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-05-29 17:14:17 KST; 8s ago
Main PID: 12061 (node)
Tasks: 11
CGroup: /system.slice/kibana.service
└─12061 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

May 29 17:14:23 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:23Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"No living connections"}
May 29 17:14:23 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:23Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"Unable to revive connection: https://192.168.0.92:9200/"}
May 29 17:14:23 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:23Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"No living connections"}
May 29 17:14:23 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:23Z","tags":["reporting","warning"],"pid":12061,"message":"Could not retrieve cluster settings, because of No Living connections"}
May 29 17:14:23 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:23Z","tags":["warning","task_manager"],"pid":12061,"message":"PollError No Living connections"}
May 29 17:14:23 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:23Z","tags":["warning","maps"],"pid":12061,"message":"Error scheduling telemetry task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized!"}
May 29 17:14:23 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:23Z","tags":["warning","telemetry"],"pid":12061,"message":"Error scheduling task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized!"}
May 29 17:14:24 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:24Z","tags":["debug","legacy-proxy"],"pid":12061,"message":"\"getConnections\" has been called."}
May 29 17:14:25 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:25Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"Unable to revive connection: https://192.168.0.92:9200/"}
May 29 17:14:25 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:14:25Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"No living connections"}
```
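
(One thing worth checking at this point: Kibana's elasticsearch.ssl.verificationMode defaults to full, which verifies the hostname in the certificate as well as the chain. Certificates generated by elasticsearch-certutil without --dns or --ip options don't include the host, so "Unable to revive connection" can be caused by hostname verification. A possible workaround, a sketch for kibana.yml:)

```yaml
# Verify the certificate chain but skip hostname verification
elasticsearch.ssl.verificationMode: certificate
```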

I have a question about this: why does the machine use the admin user?

```
["warning","elasticsearch","admin"]
```

Before, I was using this:

```sh
$ /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
```

And I was following this eLearning course:
Fundamentals of Securing Elasticsearch

That's not what it means. admin here is just a log tag used for clarity; it's not a user.

Try

```sh
journalctl -u kibana.service -b
```

It should give you all the logs.
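
If you want to share the output, you can redirect it to a file, for example:

```sh
journalctl -u kibana.service -b > /tmp/kibana-journal.log
```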

Okay, I tried

```sh
journalctl -u kibana.service -b
```

So... how can I show you?
Can't it write the output to a file?

```
May 29 17:35:30 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:30Z","tags":["debug","legacy-proxy"],"pid":12061,"message":"\"getConnections\" has been called."}
May 29 17:35:31 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:31Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"Unable to revive connection: https://192.168.0.92:9200/"}
May 29 17:35:31 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:31Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"No living connections"}
May 29 17:35:32 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:32Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"Unable to revive connection: https://192.168.0.92:9200/"}
May 29 17:35:32 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:32Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"No living connections"}
May 29 17:35:32 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:32Z","tags":["warning","task_manager"],"pid":12061,"message":"PollError No Living connections"}
May 29 17:35:33 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:33Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"Unable to revive connection: https://192.168.0.92:9200/"}
May 29 17:35:33 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:33Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"No living connections"}
May 29 17:35:35 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:35Z","tags":["debug","legacy-proxy"],"pid":12061,"message":"\"getConnections\" has been called."}
May 29 17:35:35 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:35Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"Unable to revive connection: https://192.168.0.92:9200/"}
May 29 17:35:35 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:35Z","tags":["warning","elasticsearch","admin"],"pid":12061,"message":"No living connections"}
May 29 17:35:35 localhost.localdomain kibana[12061]: {"type":"log","@timestamp":"2019-05-29T08:35:35Z","tags":["warning","task_manager"],"pid":12061,"message":"PollError No Living connections"}
```

When I made the CA files, I set a password; does that matter?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.