Readiness probe failed: Error: Got HTTP code 503 but expected a 200

Hello,
I have an issue where Kibana is not coming up because its readiness check fails. I'm using the Helm charts for Elasticsearch and Kibana with some changes, reflected in the values below.
There was a similar issue a while ago (Readiness probe failed: Error: Got HTTP code 503 but expected a 200 · Issue #780 · elastic/helm-charts · GitHub), which was solved by setting /api/status as the healthCheckPath in Kibana's Helm values. That, however, does not work in my case.
Does anyone have an idea what could cause this issue?

Details below:

kubectl version

Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:24:08Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

kibana values.yaml

elasticsearchHosts: "https://redacted"

extraEnvs:
  - name: "NODE_OPTIONS"
    value: "--max-old-space-size=1800"
  - name: 'ELASTICSEARCH_USERNAME'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: 'ELASTICSEARCH_PASSWORD'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: 'KIBANA_ENCRYPTION_KEY'
    valueFrom:
      secretKeyRef:
        name: kibana
        key: encryptionkey



secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/kibana/config/certs-gen/


kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs-gen/privkey2.pem
      certificate: /usr/share/kibana/config/certs-gen/fullchain2.pem
    xpack.reporting.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.encryptedSavedObjects.encryptionKey: ${KIBANA_ENCRYPTION_KEY}

protocol: https

service:
  type: NodePort
  loadBalancerIP: ""
  port: 5601
  nodePort: 30002
  labels: {}
  annotations: {}
  loadBalancerSourceRanges: []
  httpPortName: http

healthCheckPath: /api/status # also checked /app/kibana and default

kubectl get pv,pvc,nodes,pods,svc

NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS   REASON   AGE
persistentvolume/elk-data   30Gi       RWO            Retain           Bound    default/elasticsearch-master-elasticsearch-master-0                           51m

NAME                                                                STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0   Bound    elk-data   30Gi       RWO                           51m

NAME               STATUS   ROLES                  AGE   VERSION
node/disposable1   Ready    control-plane,master   54m   v1.23.3

NAME                                    READY   STATUS    RESTARTS   AGE
pod/elasticsearch-master-0              1/1     Running   0          34m
pod/kibana-kibana-79544d8d54-x4smn      0/1     Running   0          67s
pod/nginx-deployment-55784d5d88-mc4tt   1/1     Running   0          51m

NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/elasticsearch-master            NodePort    10.97.220.72    <none>        9200:30001/TCP,9300:30786/TCP   34m
service/elasticsearch-master-headless   ClusterIP   None            <none>        9200/TCP,9300/TCP               34m
service/kibana-kibana                   NodePort    10.96.230.182   <none>        5601:30002/TCP                  67s
service/kubernetes                      ClusterIP   10.96.0.1       <none>        443/TCP                         54m
service/nginx-service                   NodePort    10.108.60.203   <none>        80:30000/TCP                    51m

kubectl describe pod/kibana-kibana-79544d8d54-x4smn

Name:         kibana-kibana-79544d8d54-x4smn
Namespace:    default
Priority:     0
Node:         disposable1/redacted
Start Time:   Thu, 17 Feb 2022 10:46:50 +0100
Labels:       app=kibana
              pod-template-hash=79544d8d54
              release=kibana
Annotations:  cni.projectcalico.org/containerID: f5011b7ee549f8b4983e09735bae0fad6584c662e14b558df5cd1bc6ce064839
              cni.projectcalico.org/podIP: 192.168.47.16/32
              cni.projectcalico.org/podIPs: 192.168.47.16/32
              configchecksum: 7ce114df53c5a41b1c4386587d8c9a3b5aebf96f5137051574760a6a72d488e
Status:       Running
IP:           192.168.47.16
IPs:
  IP:           192.168.47.16
Controlled By:  ReplicaSet/kibana-kibana-79544d8d54
Containers:
  kibana:
    Container ID:   containerd://5130c671a35f1f8fbcd1ccd06a6d8a4ae9c047c29e42ee883b246940118b1179
    Image:          docker.elastic.co/kibana/kibana:7.16.3
    Image ID:       docker.elastic.co/kibana/kibana@sha256:6c9867bd8e91737db8fa73ca6f522b2836ed1300bcc31dee96e62dc1e6413191
    Port:           5601/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 17 Feb 2022 10:46:51 +0100
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Readiness:  exec [sh -c #!/usr/bin/env bash -e

# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Kibana Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no

http () {
    local path="${1}"
    set -- -XGET -s --fail -L

    if [ -n "${ELASTICSEARCH_USERNAME}" ] && [ -n "${ELASTICSEARCH_PASSWORD}" ]; then
      set -- "$@" -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}"
    fi

    STATUS=$(curl --output /dev/null --write-out "%{http_code}" -k "$@" "https://localhost:5601${path}")
    if [[ "${STATUS}" -eq 200 ]]; then
      exit 0
    fi

    echo "Error: Got HTTP code ${STATUS} but expected a 200"
    exit 1
}

http "/api/status" # also checked /app/kibana and default
] delay=10s timeout=5s period=10s #success=3 #failure=3
    Environment:
      ELASTICSEARCH_HOSTS:     https://redacted:30001
      SERVER_HOST:             0.0.0.0
      NODE_OPTIONS:            --max-old-space-size=1800
      ELASTICSEARCH_USERNAME:  <set to the key 'username' in secret 'elastic-credentials'>  Optional: false
      ELASTICSEARCH_PASSWORD:  <set to the key 'password' in secret 'elastic-credentials'>  Optional: false
      KIBANA_ENCRYPTION_KEY:   <set to the key 'encryptionkey' in secret 'kibana'>          Optional: false
    Mounts:
      /usr/share/kibana/config/certs-gen/ from elastic-certificates (rw)
      /usr/share/kibana/config/kibana.yml from kibanaconfig (rw,path="kibana.yml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6p92j (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  elastic-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elastic-certificates
    Optional:    false
  kibanaconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kibana-kibana-config
    Optional:  false
  kube-api-access-6p92j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  112s               default-scheduler  Successfully assigned default/kibana-kibana-79544d8d54-x4smn to disposable1
  Normal   Pulled     111s               kubelet            Container image "docker.elastic.co/kibana/kibana:7.16.3" already present on machine
  Normal   Created    111s               kubelet            Created container kibana
  Normal   Started    111s               kubelet            Started container kibana
  Warning  Unhealthy  2s (x11 over 92s)  kubelet            Readiness probe failed: Error: Got HTTP code 503 but expected a 200

kubectl describe pod/elasticsearch-master-0

Name:         elasticsearch-master-0
Namespace:    default
Priority:     0
Node:         disposable1/redacted
Start Time:   Thu, 17 Feb 2022 10:13:08 +0100
Labels:       app=elasticsearch-master
              chart=elasticsearch
              controller-revision-hash=elasticsearch-master-75677f4c46
              release=elasticsearch
              statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations:  cni.projectcalico.org/containerID: ab8958d4440b27eb0948c90b3697fbb95f20faf8a3bc20969ce988f5b9e3408c
              cni.projectcalico.org/podIP: 192.168.47.13/32
              cni.projectcalico.org/podIPs: 192.168.47.13/32
              configchecksum: 490c089a5be33d334507cb4fe55645f1b2bbae7a8167caf4a57710ff4a85fc2
Status:       Running
IP:           192.168.47.13
IPs:
  IP:           192.168.47.13
Controlled By:  StatefulSet/elasticsearch-master
Init Containers:
  configure-sysctl:
    Container ID:  containerd://04b549844c8198b1ee87504fbfae2f33725320af56902a640652198248dcc5b8
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    Image ID:      docker.elastic.co/elasticsearch/elasticsearch@sha256:0efc3a054ae97ad00cccc33b9ef79ec022970b2a9949893db4ef199edcdca2ce
    Port:          <none>
    Host Port:     <none>
    Command:
      sysctl
      -w
      vm.max_map_count=262144
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 17 Feb 2022 10:13:09 +0100
      Finished:     Thu, 17 Feb 2022 10:13:09 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5qlm (ro)
Containers:
  elasticsearch:
    Container ID:   containerd://d13da1566f45f7806a0c04c14c5ed7548a8550aa491967124d03b4bc4e61d8b0
    Image:          docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    Image ID:       docker.elastic.co/elasticsearch/elasticsearch@sha256:0efc3a054ae97ad00cccc33b9ef79ec022970b2a9949893db4ef199edcdca2ce
    Ports:          9200/TCP, 9300/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 17 Feb 2022 10:13:10 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Readiness:  exec [bash -c set -e
# If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=yellow&timeout=1s" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file

# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no

http () {
  local path="${1}"
  local args="${2}"
  set -- -XGET -s

  if [ "$args" != "" ]; then
    set -- "$@" $args
  fi

  if [ -n "${ELASTIC_PASSWORD}" ]; then
    set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
  fi

  curl --output /dev/null -k "$@" "https://127.0.0.1:9200${path}"
}

if [ -f "${START_FILE}" ]; then
  echo 'Elasticsearch is already running, lets check the node is healthy'
  HTTP_CODE=$(http "/" "-w %{http_code}")
  RC=$?
  if [[ ${RC} -ne 0 ]]; then
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} https://127.0.0.1:9200/ failed with RC ${RC}"
    exit ${RC}
  fi
  # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
  if [[ ${HTTP_CODE} == "200" ]]; then
    exit 0
  elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
    exit 0
  else
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} https://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
    exit 1
  fi

else
  echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=yellow&timeout=1s" )'
  if http "/_cluster/health?wait_for_status=yellow&timeout=1s" "--fail" ; then
    touch ${START_FILE}
    exit 0
  else
    echo 'Cluster is not yet ready (request params: "wait_for_status=yellow&timeout=1s" )'
    exit 1
  fi
fi
] delay=10s timeout=5s period=10s #success=3 #failure=3
    Environment:
      node.name:                             elasticsearch-master-0 (v1:metadata.name)
      cluster.initial_master_nodes:          elasticsearch-master-0,
      discovery.seed_hosts:                  elasticsearch-master-headless
      cluster.name:                          elasticsearch
      network.host:                          0.0.0.0
      cluster.deprecation_indexing.enabled:  false
      node.data:                             true
      node.ingest:                           true
      node.master:                           true
      node.ml:                               true
      node.remote_cluster_client:            true
      ELASTIC_PASSWORD:                      <set to the key 'password' in secret 'elastic-credentials'>  Optional: false
      ELASTIC_USERNAME:                      <set to the key 'username' in secret 'elastic-credentials'>  Optional: false
    Mounts:
      /usr/share/elasticsearch/config/certs-gen/ from elastic-certificates (rw)
      /usr/share/elasticsearch/config/elasticsearch.yml from esconfig (rw,path="elasticsearch.yml")
      /usr/share/elasticsearch/data from elasticsearch-master (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5qlm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  elasticsearch-master:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elasticsearch-master-elasticsearch-master-0
    ReadOnly:   false
  elastic-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elastic-certificates
    Optional:    false
  esconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      elasticsearch-master-config
    Optional:  false
  kube-api-access-x5qlm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  43m                default-scheduler  Successfully assigned default/elasticsearch-master-0 to disposable1
  Normal   Pulled     43m                kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.16.3" already present on machine
  Normal   Created    43m                kubelet            Created container configure-sysctl
  Normal   Started    43m                kubelet            Started container configure-sysctl
  Normal   Pulled     43m                kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.16.3" already present on machine
  Normal   Created    43m                kubelet            Created container elasticsearch
  Normal   Started    43m                kubelet            Started container elasticsearch
  Warning  Unhealthy  43m (x2 over 43m)  kubelet            Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=yellow&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=yellow&timeout=1s" )

Also: when connecting to either https://redacted:30002/app/kibana or https://redacted:30002/api/status, no web service answers at all.

Kibana fails with the following log entry:

{"type":"log","@timestamp":"2022-02-21T08:47:48+00:00","tags":["error","elasticsearch-service"],"pid":7,"message":"Unable to retrieve version information from Elasticsearch nodes. unable to verify the first certificate"}
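"unable to verify the first certificate" usually means the PEM handed to the client does not carry the intermediate certificate(s), so the chain cannot be completed. A quick sanity check is to count the certificate blocks in each file — a Let's Encrypt `fullchain2.pem` should contain at least two (leaf plus intermediate), while `cert2.pem` contains only the leaf. This sketch uses throwaway files in place of the real ones under `/usr/share/kibana/config/certs-gen/`:

```shell
# Count PEM certificate blocks in a file. A "full chain" file should
# contain the leaf plus at least one intermediate, i.e. >= 2 blocks.
count_certs() {
    grep -c 'BEGIN CERTIFICATE' "$1"
}

# Throwaway stand-ins for the real cert2.pem / fullchain2.pem
# (the real paths are /usr/share/kibana/config/certs-gen/*.pem):
printf -- '-----BEGIN CERTIFICATE-----\nleaf\n-----END CERTIFICATE-----\n' > cert2.pem
cat cert2.pem cert2.pem > fullchain2.pem

count_certs cert2.pem       # -> 1 (leaf only)
count_certs fullchain2.pem  # -> 2 (leaf + intermediate)
```

If the file configured as `certificateAuthorities` reports only one block, the intermediate is missing and verification fails exactly as in the log above.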

I use Let's Encrypt certificates for Elasticsearch and Kibana. The relevant parts of my configuration are:
kibana_values.yaml

....
extraEnvs:
  - name: "NODE_OPTIONS"
    value: "--max-old-space-size=1800"
  - name: 'ELASTICSEARCH_USERNAME'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: 'ELASTICSEARCH_PASSWORD'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: 'KIBANA_ENCRYPTION_KEY'
    valueFrom:
      secretKeyRef:
        name: kibana
        key: encryptionkey



secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/kibana/config/certs-gen/


kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs-gen/privkey2.pem
      certificate: /usr/share/kibana/config/certs-gen/fullchain2.pem
    xpack.reporting.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.encryptedSavedObjects.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    elasticsearch.ssl:
      certificateAuthorities: /usr/share/kibana/config/certs-gen/fullchain2.pem
      verificationMode: certificate

protocol: https
....

elastic_values.yaml

....
esConfig:
   elasticsearch.yml: |
     xpack.security.transport.ssl.enabled: true
     xpack.security.transport.ssl.verification_mode: certificate
     xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs-gen/keystore.pkcs12
     xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs-gen/keystore.pkcs12
     xpack.security.http.ssl.enabled: true
     xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs-gen/keystore.pkcs12
     xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs-gen/keystore.pkcs12
     xpack.security.enabled: true

extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs-gen/
protocol: https
....

The solution is to switch both Kibana and Elasticsearch from the individual PEM files to a PKCS#12 keystore, i.e. to change:

kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs-gen/privkey2.pem
      certificate: /usr/share/kibana/config/certs-gen/cert2.pem
    xpack.reporting.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.encryptedSavedObjects.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    elasticsearch.ssl:
      certificateAuthorities: /usr/share/kibana/config/certs-gen/fullchain2.pem
      verificationMode: certificate

to

kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      keystore.path: /usr/share/kibana/config/certs-gen/keystore.pkcs12
      truststore.path: /usr/share/kibana/config/certs-gen/keystore.pkcs12
      keystore.password: ""
      truststore.password: ""
    xpack.reporting.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
    xpack.encryptedSavedObjects.encryptionKey: ${KIBANA_ENCRYPTION_KEY}

and

esConfig:
   elasticsearch.yml: |
     xpack.security.enabled: true
     xpack.security.transport.ssl.enabled: true
     xpack.security.transport.ssl.verification_mode: certificate
     xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/certs-gen/privkey2.pem
     xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/certs-gen/cert2.pem
     xpack.security.transport.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/certs-gen/fullchain2.pem" ]
     xpack.security.http.ssl.enabled: true
     xpack.security.http.ssl.verification_mode: certificate
     xpack.security.http.ssl.key:  /usr/share/elasticsearch/config/certs-gen/privkey2.pem
     xpack.security.http.ssl.certificate:  /usr/share/elasticsearch/config/certs-gen/cert2.pem
     xpack.security.http.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/certs-gen/fullchain2.pem" ]

to

esConfig:
   elasticsearch.yml: |
     xpack.security.transport.ssl.enabled: true
     xpack.security.transport.ssl.verification_mode: certificate
     xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs-gen/keystore.pkcs12
     xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs-gen/keystore.pkcs12
     xpack.security.http.ssl.enabled: true
     xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs-gen/keystore.pkcs12
     xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs-gen/keystore.pkcs12
     xpack.security.enabled: true

Both stores (keystore and truststore point to the same file) were generated as follows:

cat privkey2.pem > store.pem
cat cert2.pem >> store.pem
openssl pkcs12 -export -in store.pem -out keystore.pkcs12
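Note that `openssl pkcs12 -export` as written above prompts interactively for an export password. As a self-contained sketch, the same steps can be reproduced end to end with a throwaway self-signed certificate standing in for the Let's Encrypt files, and `-passout pass:` (my addition) producing the empty export password that matches `keystore.password: ""` in the Kibana config above:

```shell
# Throwaway self-signed key + cert standing in for privkey2.pem / cert2.pem.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout privkey2.pem -out cert2.pem \
  -days 1 -subj "/CN=kibana-test"

# Same concatenation as above: private key first, then the certificate(s).
cat privkey2.pem cert2.pem > store.pem

# Build the PKCS#12 store non-interactively with an empty export password
# (when -inkey is not given, the key is read from the -in file).
openssl pkcs12 -export -in store.pem -out keystore.pkcs12 -passout pass:

# Read the store back to confirm it is well-formed (empty password again).
openssl pkcs12 -in keystore.pkcs12 -info -nokeys -noout -passin pass:
```

If the read-back succeeds, the store is usable by both Kibana (`server.ssl.keystore.path`) and Elasticsearch (`xpack.security.http.ssl.keystore.path`) with an empty password.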
