Enable xpack on a running Elastic cluster

I deployed Elasticsearch using the Helm chart (7.13) without xpack enabled. The ES and Kibana pods are running fine. I logged in to one of the master pods, ran "elasticsearch-certutil" to generate the certs, and then created a secret from the certs. Then I enabled xpack for TLS-encrypted internode communication by setting the following in the values.yaml file. But after running "helm upgrade", one master pod is in a not-ready state. I have two master pods: one is not ready, and the other is ready but was not restarted. If I manually restart the other pod, both pods end up not ready. Is this a bug in Elastic, or did I miss anything?

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.keystore.password:
xpack.security.transport.ssl.truststore.password:
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username

secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
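For reference, the chart picks these elasticsearch.yml settings up through its esConfig value; a minimal sketch of how the xpack block sits in values.yaml (security settings only):

esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12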

Thanks!

What do the Elasticsearch logs show?

Below are the logs from the ES master pods. BTW, I only added the xpack settings in the values.yaml file for the master nodes. Do I also need to add them in the data.yaml file for the ES data nodes?

{"type": "server", "timestamp": "2021-09-21T14:50:30,188Z", "level": "WARN", "component": "o.e.x.c.s.t.n.SecurityNetty4Transport", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "received plaintext traffic on an encrypted channel, closing connection Netty4TcpChannel{localAddress=/10.0.8.224:9300, remoteAddress=/10.0.9.24:48224, profile=default}", "cluster.uuid": "3vMs0RouRtWdD4tiT_wAWQ", "node.id": "n8bkpWAEQJm1LIKFx8rhwQ" }

kubectl describe pod elasticsearch-master-0 -n elk
Name: elasticsearch-master-0
Namespace: elk
Priority: 0
PriorityClassName:
Node: aks-elkmstrpool-22444732-vmss000002/10.0.8.223
Start Time: Tue, 21 Sep 2021 09:42:24 -0500
Labels: app=elasticsearch-master
chart=elasticsearch
controller-revision-hash=elasticsearch-master-7f84d46f56
release=elasticsearch
statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations: configchecksum=a9664c8bf0fb650cba59aa2de53d188e5f0ca9b3cb2dd1de073468cca66cebe
Status: Running
IP: 10.0.8.224
Controlled By: StatefulSet/elasticsearch-master
Init Containers:
configure-sysctl:
Container ID: containerd://6e89728b8dd3bde7ce0d8d4bf5f0cbfcbd72b8350420caf753ae1287a81a3a53
Image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
Image ID: docker.elastic.co/elasticsearch/elasticsearch@sha256:1cecc2c7419a4f917a88c83180335bd491d623f28ac43ca7e0e69b4eca25fbd5
Port:
Host Port:
Command:
sysctl
-w
vm.max_map_count=262144
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 21 Sep 2021 09:42:25 -0500
Finished: Tue, 21 Sep 2021 09:42:25 -0500
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4qcbv (ro)
Containers:
elasticsearch:
Container ID: containerd://0b76bc77dee4ec447b73864312d689bbe9e83ae4c50e0992810c75206e534f84
Image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
Image ID: docker.elastic.co/elasticsearch/elasticsearch@sha256:1cecc2c7419a4f917a88c83180335bd491d623f28ac43ca7e0e69b4eca25fbd5
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 21 Sep 2021 09:42:26 -0500
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 2Gi
Requests:
cpu: 100m
memory: 500Mi
Readiness: exec [sh -c #!/usr/bin/env bash -e

# If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
# Once it has started only check that the node itself is responding

START_FILE=/tmp/.es_start_file

# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no

http () {
  local path="${1}"
  local args="${2}"
  set -- -XGET -s

  if [ "$args" != "" ]; then
    set -- "$@" $args
  fi

  if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
    set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
  fi

  curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
}

if [ -f "${START_FILE}" ]; then
  echo 'Elasticsearch is already running, lets check the node is healthy'
  HTTP_CODE=$(http "/" "-w %{http_code}")
  RC=$?
  if [[ ${RC} -ne 0 ]]; then
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' ${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
    exit ${RC}
  fi

  # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
  if [[ ${HTTP_CODE} == "200" ]]; then
    exit 0
  elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
    exit 0
  else
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' ${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
    exit 1
  fi

else
  echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
  if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
    touch ${START_FILE}
    exit 0
  else
    echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
    exit 1
  fi
fi
] delay=10s timeout=5s period=10s #success=3 #failure=3
Environment:
node.name: elasticsearch-master-0 (v1:metadata.name)
cluster.initial_master_nodes: elasticsearch-master-0,elasticsearch-master-1,
discovery.seed_hosts: elasticsearch-master-headless
cluster.name: elasticsearch
network.host: 0.0.0.0
node.data: false
node.ingest: true
node.master: true
node.ml: true
node.remote_cluster_client: true
ELASTIC_PASSWORD: <set to the key 'password' in secret 'elastic-credentials'> Optional: false
ELASTIC_USERNAME: <set to the key 'username' in secret 'elastic-credentials'> Optional: false
Mounts:
/usr/share/elasticsearch/config/certs from elastic-certificates (rw)
/usr/share/elasticsearch/config/elasticsearch.yml from esconfig (rw)
/usr/share/elasticsearch/data from elasticsearch-master (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4qcbv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
elasticsearch-master:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-master-elasticsearch-master-0
ReadOnly: false
elastic-certificates:
Type: Secret (a volume populated by a Secret)
SecretName: elastic-certificates
Optional: false
esconfig:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: elasticsearch-master-config
Optional: false
default-token-4qcbv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4qcbv
Optional: false
QoS Class: Burstable
Node-Selectors: nodepool=elkmaster
Tolerations: dedicated=elkmaster:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled Successfully assigned elk/elasticsearch-master-0 to aks-elkmstrpool-22444732-vmss000002
Normal Pulled 11m kubelet, aks-elkmstrpool-22444732-vmss000002 Container image "docker.elastic.co/elasticsearch/elasticsearch:7.13.2" already present on machine
Normal Created 11m kubelet, aks-elkmstrpool-22444732-vmss000002 Created container configure-sysctl
Normal Started 11m kubelet, aks-elkmstrpool-22444732-vmss000002 Started container configure-sysctl
Normal Pulled 11m kubelet, aks-elkmstrpool-22444732-vmss000002 Container image "docker.elastic.co/elasticsearch/elasticsearch:7.13.2" already present on machine
Normal Created 11m kubelet, aks-elkmstrpool-22444732-vmss000002 Created container elasticsearch
Normal Started 11m kubelet, aks-elkmstrpool-22444732-vmss000002 Started container elasticsearch
Warning Unhealthy 1m (x60 over 11m) kubelet, aks-elkmstrpool-22444732-vmss000002 Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )

Yes, it needs to be done on all nodes. The "received plaintext traffic on an encrypted channel" warning means your TLS-enabled masters are being contacted on the transport port by nodes that are still speaking plaintext, so the cluster cannot form and the readiness probe (which waits for green) keeps failing.
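Assuming you run separate releases (or at least separate values files) of the chart for the master and data node groups, the same esConfig security block and secretMounts entry need to go into both before upgrading; a sketch with illustrative release and file names:

helm upgrade elasticsearch-master elastic/elasticsearch -n elk -f master-values.yaml
helm upgrade elasticsearch-data elastic/elasticsearch -n elk -f data-values.yaml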

Please also format your code/logs/config using the </> button, or markdown style back ticks. It helps to make things easy to read, which helps us help you :slight_smile:

Thank you Mark for the quick response! After I added the xpack settings to the data nodes' values file and redeployed, both the master pods and the data pods are running and in a ready state.

I have another question. As I mentioned, I first deployed the ES Helm chart without xpack enabled, and I was able to log in to one of the master pods and run elasticsearch-certutil to generate the certs. What if I want to enable xpack from the start? Is there a way to generate the certs without logging in to a master pod? I sketched below the kind of thing I have in mind.
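For example: run certutil in the official image locally, copy the bundle out, and create the secret from it before the first install (an untested sketch; the container and secret names are just placeholders):

# generate a CA plus node certificate bundle inside a throwaway container
docker run --name es-certs docker.elastic.co/elasticsearch/elasticsearch:7.13.2 \
  bin/elasticsearch-certutil cert --out elastic-certificates.p12 --pass ''
# copy the bundle out of the stopped container and clean up
docker cp es-certs:/usr/share/elasticsearch/elastic-certificates.p12 .
docker rm es-certs
# load it into the namespace the chart deploys into
kubectl create secret generic elastic-certificates -n elk \
  --from-file=elastic-certificates.p12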

Thanks!

I am not sure how to do that with helm sorry :frowning:

Thank you again for the quick response! As you can see in the values.yaml file, I have the following lines:
xpack.security.transport.ssl.keystore.password: <passwd>
xpack.security.transport.ssl.truststore.password: <passwd>

This is not secure, so I logged in to the ES pods and ran the following commands to store the keystore and truststore passwords:

./elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password

Then I reran the helm deployment, and afterwards the ES pods all crashed. I think the reason is that rerunning helm wiped out the passwords stored in the elasticsearch keystore. So the question is: how can I persist the keystore/truststore passwords? Maybe create a secret and use a secret mount in the values.yaml file?

Here are the errors from the ES log:

ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]; nested: ElasticsearchException[failed to initialize SSL TrustManager]; nested: IOException[keystore password was incorrect]; nested: UnrecoverableKeyException[failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.];
Likely root cause: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.

I ran the command "./elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password", then
I created a secret called "eskeystore" from the elasticsearch.keystore file and updated the values.yaml file with the following:

keystore:
  - secretName: eskeystore

After restarting the ES nodes, I see the following errors:

java.lang.IllegalArgumentException: unknown secure setting [elasticsearch.keystore] please check that any required plugins are installed, or check the breaking changes documentation for removed settings

The ES pod describe output shows the following:

Mounts:
/usr/share/elasticsearch/config/elasticsearch.keystore from keystore (rw)
.....

Volumes:
keystore:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
keystore-eskeystore:
Type: Secret (a volume populated by a Secret)
SecretName: eskeystore
Optional: false
.....

So why is the "keystore" volume an emptyDir? Did I miss anything? My reading of the chart's templates is sketched below.
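If I'm reading the chart right, the emptyDir itself is expected: an init container builds a fresh elasticsearch.keystore in that emptyDir from the mounted secrets, and the ES container mounts the result at config/elasticsearch.keystore. Each key in the Secret becomes the name of a keystore setting, which would explain the error above, since my secret's only key was the raw elasticsearch.keystore file. If that's correct, the secret should look something like this instead (passwords redacted):

kubectl create secret generic eskeystore -n elk \
  --from-literal=xpack.security.transport.ssl.keystore.secure_password='<passwd>' \
  --from-literal=xpack.security.transport.ssl.truststore.secure_password='<passwd>'

with the values.yaml entry unchanged:

keystore:
  - secretName: eskeystore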

Please format your code/logs/config using the </> button, or markdown style back ticks. It helps to make things easy to read, which helps us help you :slight_smile:

It'd also be good to show your full commands (with redactions on passwords), as well as the output of those, and the entire Elasticsearch log.

Hi Mark,

I posted another topic that needs to be solved first; then we can get back to this one.
Could you please take a look at the following topic?

Thanks!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.