Hello, thank you for your reply.
Let's simulate the situation in a clean environment.
I have an Ubuntu Desktop 22.04 machine with IP address 192.168.17.180.
I installed Elasticsearch from elasticsearch-8.14.0-amd64.deb, which I downloaded a few days ago, following the instructions here:
sudo dpkg -i elasticsearch-8.14.0-amd64.deb
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
curl https://192.168.17.180:9200 returns:
{
  "name" : "ubuntulvm",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "cYRpPxFGQ4mS_YOqNN2y5A",
  "version" : {
    "number" : "8.14.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "8d96bbe3bf5fed931f3119733895458eab75dca9",
    "build_date" : "2024-06-03T10:05:49.073003402Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Then I edited elasticsearch.yml. (Note: every time I installed Elasticsearch and edited elasticsearch.yml before its first start, I got the error below:)
{"error":{"root_cause":[{"type":"status_exception","reason":"Cluster state has not been recovered yet, cannot write to the [null] index"}],"type":"authentication_processing_error","reason":"failed to promote the auto-configured elastic password hash","caused_by":{"type":"status_exception","reason":"Cluster state has not been recovered yet, cannot write to the [null] index"}},"status":503}
But when Elasticsearch had already started once and I then edited the yml file, it was OK. The changes:
cluster.name: my-application
node.name: node-1
network.host: 192.168.17.180
then:
sudo systemctl restart elasticsearch
curl https://192.168.17.180:9200 returns:
{
  "name" : "node-1",
  "cluster_name" : "my-application",
  "cluster_uuid" : "cYRpPxFGQ4mS_YOqNN2y5A",
  "version" : {
    "number" : "8.14.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "8d96bbe3bf5fed931f3119733895458eab75dca9",
    "build_date" : "2024-06-03T10:05:49.073003402Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Now, installing Kibana from kibana-8.14.0-amd64.deb, following the instructions here:
sudo dpkg -i kibana-8.14.0-amd64.deb
sudo systemctl daemon-reload
sudo systemctl enable kibana.service
Then I edited kibana.yml:
server.host: "0.0.0.0"
then:
sudo systemctl start kibana.service
sudo systemctl status kibana.service
It says: Go to http://0.0.0.0:5601/?code=829206 to get started. (0.0.0.0 is just the bind address from server.host, so I open the page via the machine's actual IP.)
Based on the instructions, I generated a Kibana enrollment token:
bin/elasticsearch-create-enrollment-token -s kibana
At this state, Elasticsearch and Kibana are up and working properly.
After some time I will need to move this PC or VM to another network, where the IP address will change.
Let's say I have just moved the machine and changed the IP to (for example) 192.168.17.170.
I changed elasticsearch.yml:
network.host: 192.168.17.170
then
sudo systemctl start elasticsearch
curl https://192.168.17.170:9200 returns:
{
  "name" : "node-1",
  "cluster_name" : "my-application",
  "cluster_uuid" : "cYRpPxFGQ4mS_YOqNN2y5A",
  "version" : {
    "number" : "8.14.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "8d96bbe3bf5fed931f3119733895458eab75dca9",
    "build_date" : "2024-06-03T10:05:49.073003402Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Elasticsearch is OK.
Now let's start Kibana.
Before that, I commented out the last part of kibana.yml:
#elasticsearch.hosts: ['https://192.168.17.180:9200']
#elasticsearch.serviceAccountToken: xxxxxxxxx
#elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1718709552933.crt]
#xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, ho>
Then I started Kibana; the web page brings up the token configuration page:
Go to http://0.0.0.0:5601/?code=077168 to get started.
So I try to generate a new token, and at this point I get an error:
15:03:50.251 [main] WARN org.elasticsearch.common.ssl.DiagnosticTrustManager - failed to establish trust with server at [192.168.17.170]; the server provided a certificate with subject name [CN=ubuntulvm], fingerprint [87bb70919bd177646d9d397f37b3d2b8d9c8604c], no keyUsage and extendedKeyUsage [serverAuth]; the certificate is valid between [2024-06-18T11:02:15Z] and [2026-06-18T11:02:15Z] (current time is [2024-06-18T11:33:50.247651775Z], certificate dates are valid); the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [IP:192.168.17.180,DNS:ubuntulvm,IP:fe80:0:0:0:78b3:783b:b92e:14a3,IP:0:0:0:0:0:0:0:1,IP:127.0.0.1,DNS:localhost]; the certificate is issued by [CN=Elasticsearch security auto-configuration HTTP CA]; the certificate is signed by (subject [CN=Elasticsearch security auto-configuration HTTP CA] fingerprint [d90c5426c909818c197c7270ef00998ff74966b5] {trusted issuer}) which is self-issued; the [CN=Elasticsearch security auto-configuration HTTP CA] certificate is trusted in this ssl context ([xpack.security.http.ssl (with trust configuration: Composite-Trust{JDK-trusted-certs,StoreTrustConfig{path=certs/http.p12, password=<non-empty>, type=PKCS12, algorithm=PKIX}})])
java.security.cert.CertificateException: No subject alternative names matching IP address 192.168.17.170 found
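If it helps, this exception is a plain SAN mismatch: the auto-generated HTTP certificate only lists the old address (see the subject alternative names in the log above), so a client verifying the new IP has to reject it. A throwaway reproduction with plain openssl (all file names are made up for the demo; openssl assumed installed):

```shell
# Create a cert whose SANs contain only the OLD IP, like the auto-config cert:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=ubuntulvm" \
  -addext "subjectAltName=IP:192.168.17.180,DNS:ubuntulvm,IP:127.0.0.1,DNS:localhost" \
  -keyout /tmp/san-demo-key.pem -out /tmp/san-demo-cert.pem

# Print the SANs: 192.168.17.170 is nowhere in the list, which is exactly
# why TLS verification of the new address fails.
openssl x509 -in /tmp/san-demo-cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```

So after an IP change, the HTTP certificate has to be regenerated with the new IP in its SANs.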
So now I use the link you gave, specifically the part "Basic security (Elasticsearch + Kibana)".
First, I generated a new CA with the default name (elastic-stack-ca.p12) and no password:
./bin/elasticsearch-certutil ca
The previous configuration in elasticsearch.yml is:
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["ubuntulvm"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
So I generated the transport keystore with the same name, "transport.p12":
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
Please enter the desired output file [elastic-certificates.p12]: transport.p12
with no password
sudo mv transport.p12 /etc/elasticsearch/certs/
sudo cp elastic-stack-ca.p12 /etc/elasticsearch/certs/
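To convince myself that a certificate signed this way really chains back to the new CA, I tried the same idea on throwaway files with plain openssl (roughly analogous to what certutil does; all names below are made up for the demo):

```shell
# Throwaway CA, analogous to elastic-stack-ca.p12:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem

# CSR for the node, then sign it with the CA (analogous to transport.p12):
openssl req -newkey rsa:2048 -nodes -subj "/CN=node-1" \
  -keyout /tmp/demo-node-key.pem -out /tmp/demo-node.csr
openssl x509 -req -in /tmp/demo-node.csr -CA /tmp/demo-ca.pem \
  -CAkey /tmp/demo-ca-key.pem -CAcreateserial -days 1 -out /tmp/demo-node.pem

# The node cert must verify against the CA:
openssl verify -CAfile /tmp/demo-ca.pem /tmp/demo-node.pem
```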
I chowned and chmodded both files to match the ownership and permissions of the previous ones.
Then I removed the passwords from the elasticsearch-keystore, because I generated the CA and transport keystores without passwords:
sudo ./bin/elasticsearch-keystore remove xpack.security.transport.ssl.truststore.secure_password
sudo ./bin/elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password
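As a sanity check that the new keystores really have no password: a password-less PKCS12 should open with an empty passphrase. A throwaway demo (not the real files; names are made up):

```shell
# Build a demo PKCS12 without a password:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout /tmp/p12-demo-key.pem -out /tmp/p12-demo-cert.pem
openssl pkcs12 -export -inkey /tmp/p12-demo-key.pem -in /tmp/p12-demo-cert.pem \
  -out /tmp/p12-demo.p12 -passout pass:

# Reading it back with an empty passphrase should succeed:
openssl pkcs12 -in /tmp/p12-demo.p12 -passin pass: -nokeys >/dev/null && echo "opens with empty passphrase"
```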
systemctl stop kibana.service
sudo systemctl restart elasticsearch.service
Elasticsearch doesn't start!
Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xeu elasticsearch.service" for details.
Reviewing the log file, it says:
failed to load SSL configuration [xpack.security.http.ssl] - cannot read configured [PKCS12] keystore (as a truststore) [/etc/elasticsearch/certs/http.p12] because the file does not exist
That is because I had moved the old certs folder to certs-old, created a new certs directory, and placed only elastic-stack-ca.p12 and transport.p12 inside it, so http.p12 no longer exists.
So, based on part two of the link you provided, "Basic security plus secured HTTPS traffic (Elastic Stack)":
sudo ./bin/elasticsearch-certutil http
Generate a CSR? [y/N] N
Use an existing CA? [y/N] y
CA Path: /etc/elasticsearch/certs/elastic-stack-ca.p12
CA password: (none)
Certificate validity: 5y
Generate a certificate per node? [y/N] N (I have just one node)
Enter all the hostnames that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.
node-1
Enter all the IP addresses that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.
192.168.17.170
no password for "http.p12"
What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip]
then
sudo mv elasticsearch-ssl-http.zip /etc/elasticsearch/certs/
cd /etc/elasticsearch/certs
unzip elasticsearch-ssl-http.zip
cd elasticsearch
mv http.p12 ../
chmod 660 http.p12
Also, because I didn't give a password to http.p12:
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.http.ssl.keystore.secure_password
Now starting elasticsearch.service.
It started properly and is working.
So let's start Kibana and generate an enrollment token.
Kibana started; it says go to http://0.0.0.0:5601/?code=888567
generating enrollment token:
bin/elasticsearch-create-enrollment-token -s kibana
Unable to create enrollment token for scope [kibana]
ERROR: Unable to create an enrollment token. Elasticsearch node HTTP layer SSL configuration Keystore doesn't contain any PrivateKey entries where the associated certificate is a CA certificate, with exit code 73
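One thing I noticed: the elasticsearch-ssl-http.zip also produced a kibana folder containing an elasticsearch-ca.pem. Would configuring Kibana manually, instead of the enrollment flow, be the right way forward? Something along these lines (the token and paths below are placeholders, not values I actually have):

```yaml
# kibana.yml: hypothetical manual setup, bypassing enrollment
elasticsearch.hosts: ["https://192.168.17.170:9200"]
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/elasticsearch-ca.pem"]
# a service account token, e.g. created with:
#   bin/elasticsearch-service-tokens create elastic/kibana kibana-token
elasticsearch.serviceAccountToken: "<placeholder-token>"
```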
I don't know what to do after this.
With the link you gave I could generate the CA, transport, and http certificates and start Elasticsearch (before that I had problems there too), but now I'm stuck at the enrollment token.
Please help me complete this process.
Thank you