Below are the logs from the ES master pods. Note that I only added the xpack settings to the values.yaml file for the master nodes. Do I also need to add the xpack settings to the data.yaml file for the ES data nodes?
{"type": "server", "timestamp": "2021-09-21T14:50:30,188Z", "level": "WARN", "component": "o.e.x.c.s.t.n.SecurityNetty4Transport", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "received plaintext traffic on an encrypted channel, closing connection Netty4TcpChannel{localAddress=/10.0.8.224:9300, remoteAddress=/10.0.9.24:48224, profile=default}", "cluster.uuid": "3vMs0RouRtWdD4tiT_wAWQ", "node.id": "n8bkpWAEQJm1LIKFx8rhwQ"  }
kubectl describe pod elasticsearch-master-0 -n elk
Name:               elasticsearch-master-0
Namespace:          elk
Priority:           0
PriorityClassName:  
Node:               aks-elkmstrpool-22444732-vmss000002/10.0.8.223
Start Time:         Tue, 21 Sep 2021 09:42:24 -0500
Labels:             app=elasticsearch-master
chart=elasticsearch
controller-revision-hash=elasticsearch-master-7f84d46f56
release=elasticsearch
statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations:        configchecksum=a9664c8bf0fb650cba59aa2de53d188e5f0ca9b3cb2dd1de073468cca66cebe
Status:             Running
IP:                 10.0.8.224
Controlled By:      StatefulSet/elasticsearch-master
Init Containers:
configure-sysctl:
Container ID:  containerd://6e89728b8dd3bde7ce0d8d4bf5f0cbfcbd72b8350420caf753ae1287a81a3a53
Image:         docker.elastic.co/elasticsearch/elasticsearch:7.13.2
Image ID:      docker.elastic.co/elasticsearch/elasticsearch@sha256:1cecc2c7419a4f917a88c83180335bd491d623f28ac43ca7e0e69b4eca25fbd5
Port:          
Host Port:     
Command:
sysctl
-w
vm.max_map_count=262144
State:          Terminated
Reason:       Completed
Exit Code:    0
Started:      Tue, 21 Sep 2021 09:42:25 -0500
Finished:     Tue, 21 Sep 2021 09:42:25 -0500
Ready:          True
Restart Count:  0
Environment:    
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4qcbv (ro)
Containers:
elasticsearch:
Container ID:   containerd://0b76bc77dee4ec447b73864312d689bbe9e83ae4c50e0992810c75206e534f84
Image:          docker.elastic.co/elasticsearch/elasticsearch:7.13.2
Image ID:       docker.elastic.co/elasticsearch/elasticsearch@sha256:1cecc2c7419a4f917a88c83180335bd491d623f28ac43ca7e0e69b4eca25fbd5
Ports:          9200/TCP, 9300/TCP
Host Ports:     0/TCP, 0/TCP
State:          Running
Started:      Tue, 21 Sep 2021 09:42:26 -0500
Ready:          False
Restart Count:  0
Limits:
cpu:     1
memory:  2Gi
Requests:
cpu:      100m
memory:   500Mi
Readiness:  exec [sh -c #!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no
http () {
local path="${1}"
local args="${2}"
set -- -XGET -s
if [ "$args" != "" ]; then
set -- "$@" $args
fi
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
fi
curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
HTTP_CODE=$(http "/" "-w %{http_code}")
RC=$?
if [[ ${RC} -ne 0 ]]; then
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' ${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
exit ${RC}
fi
# ready if HTTP code 200, 503 is tolerable if ES version is 6.x
if [[ ${HTTP_CODE} == "200" ]]; then
exit 0
elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
exit 0
else
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' ${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
exit 1
fi
else
echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
exit 1
fi
fi
] delay=10s timeout=5s period=10s #success=3 #failure=3
Environment:
node.name:                     elasticsearch-master-0 (v1:metadata.name)
cluster.initial_master_nodes:  elasticsearch-master-0,elasticsearch-master-1,
discovery.seed_hosts:          elasticsearch-master-headless
cluster.name:                  elasticsearch
network.host:                  0.0.0.0
node.data:                     false
node.ingest:                   true
node.master:                   true
node.ml:                       true
node.remote_cluster_client:    true
ELASTIC_PASSWORD:              <set to the key 'password' in secret 'elastic-credentials'>  Optional: false
ELASTIC_USERNAME:              <set to the key 'username' in secret 'elastic-credentials'>  Optional: false
Mounts:
/usr/share/elasticsearch/config/certs from elastic-certificates (rw)
/usr/share/elasticsearch/config/elasticsearch.yml from esconfig (rw)
/usr/share/elasticsearch/data from elasticsearch-master (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4qcbv (ro)
Conditions:
Type              Status
Initialized       True
Ready             False
ContainersReady   False
PodScheduled      True
Volumes:
elasticsearch-master:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  elasticsearch-master-elasticsearch-master-0
ReadOnly:   false
elastic-certificates:
Type:        Secret (a volume populated by a Secret)
SecretName:  elastic-certificates
Optional:    false
esconfig:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      elasticsearch-master-config
Optional:  false
default-token-4qcbv:
Type:        Secret (a volume populated by a Secret)
SecretName:  default-token-4qcbv
Optional:    false
QoS Class:       Burstable
Node-Selectors:  nodepool=elkmaster
Tolerations:     dedicated=elkmaster:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason     Age                From                                          Message
Normal   Scheduled                                                          Successfully assigned elk/elasticsearch-master-0 to aks-elkmstrpool-22444732-vmss000002
Normal   Pulled     11m                kubelet, aks-elkmstrpool-22444732-vmss000002  Container image "docker.elastic.co/elasticsearch/elasticsearch:7.13.2" already present on machine
Normal   Created    11m                kubelet, aks-elkmstrpool-22444732-vmss000002  Created container configure-sysctl
Normal   Started    11m                kubelet, aks-elkmstrpool-22444732-vmss000002  Started container configure-sysctl
Normal   Pulled     11m                kubelet, aks-elkmstrpool-22444732-vmss000002  Container image "docker.elastic.co/elasticsearch/elasticsearch:7.13.2" already present on machine
Normal   Created    11m                kubelet, aks-elkmstrpool-22444732-vmss000002  Created container elasticsearch
Normal   Started    11m                kubelet, aks-elkmstrpool-22444732-vmss000002  Started container elasticsearch
Warning  Unhealthy  1m (x60 over 11m)  kubelet, aks-elkmstrpool-22444732-vmss000002  Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
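For the data values file I assume the cert mount and credential env vars shown in the describe output above would also need mirroring, something like this (a sketch; the secret names are copied from the master pod spec above, and the protocol line only applies if xpack HTTP TLS is enabled too, since the readiness probe curls http://127.0.0.1:9200):

secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
extraEnvs:
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
# only if xpack.security.http.ssl.enabled is also set in esConfig:
# protocol: https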