Hello,

I am running Elasticsearch and Kibana in minikube. Both pods show a Running status, but the Kibana pod reports 0/1 ready, and when I describe the Kibana pod I get the following error:

Readiness probe failed: Error: Got HTTP code 503 but expected a 200

This is the output of kubectl get pods:

NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-master-0                  1/1     Running   0          15m
kibana-test24-kibana-6cbd58b6fc-nmhk2   0/1     Running   0          4m16s
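To see what is behind the 503, the status endpoint that the probe checks can be queried directly from inside the pod (pod name taken from the output above; this assumes curl is available in the Kibana image):

# Call Kibana's status API the way the readiness check does;
# -k skips TLS verification because of the self-signed certificates
kubectl exec kibana-test24-kibana-6cbd58b6fc-nmhk2 -- curl -sk https://localhost:5601/api/status

A 503 from /api/status usually means Kibana itself is up but cannot reach or authenticate to Elasticsearch; the JSON body says which part is unavailable.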
Below is the Kibana values.yaml:
---
#elasticsearchURL: "" # "http://elasticsearch-master:9200"
elasticsearchHosts: "https://elasticsearch-master:9200"
#elasticsearchHosts: "elasticsearch.local"
#elasticsearch:
#  hosts: "https://elasticsearch-master:9200"
#  serviceAccountTokenSecret: "kibana-service-token" # Reference the secret

ssl:
  enabled: false
#extraEnvs:
#  - name: ELASTICSEARCH_SERVICE_TOKEN
#    valueFrom:
#      secretKeyRef:
#        name: kibana-service-token
#        key: service-token
#elasticsearch.username: "elastic"
#elasticsearch.password: "7oFrF3WhbVsWXH51"
elasticsearch.ssl.verificationMode: "none"
replicas: 1
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
#extraEnvs: []
extraEnvs:
  - name: NODE_TLS_REJECT_UNAUTHORIZED
    value: "0"
  # - name: ELASTICSEARCH_USERNAME
  #   valueFrom:
  #     secretKeyRef:
  #       name: my-secret
  #       key: elastic
  # - name: ELASTICSEARCH_PASSWORD
  #   valueFrom:
  #     secretKeyRef:
  #       name: my-secret
  #       key: password
  - name: ELASTICSEARCH_SERVICE_TOKEN
    valueFrom:
      secretKeyRef:
        name: kibana-service-token
        key: service_token
  # - name: MY_ENVIRONMENT_VAR
  #   value: the_value_goes_here
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
#  - name: kibana-keystore
#    secretName: kibana-keystore
#    path: /usr/share/kibana/data/kibana.keystore
#    subPath: kibana.keystore # optional
image: "docker.elastic.co/kibana/kibana"
imageTag: "8.5.1"
imagePullPolicy: "IfNotPresent"
# additional labels
labels: {}

podAnnotations: {}
#  iam.amazonaws.com/role: es-cluster
resources:
  requests:
    cpu: "100m"
    memory: "500Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"
protocol: https
serverHost: "0.0.0.0"
healthCheckPath: /api/status
# Allows you to add any config files in /usr/share/kibana/config/
# such as kibana.yml
kibanaConfig:
  kibana.yml: |
    server.name: kibana
    server.host: "0.0.0.0"
    elasticsearch.hosts: ["https://elasticsearch-master:9200"]
    elasticsearch.serviceAccountToken: "${ELASTICSEARCH_SERVICE_TOKEN}"
    elasticsearch.ssl.verificationMode: "none"
#  kibana.yml: |
#    key:
#      nestedkey: value
# If Pod Security Policy in use it may be required to specify security context as well as service account
podSecurityContext:
  fsGroup: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000
serviceAccount: ""
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
httpPort: 5601
extraContainers: ""
# - name: dummy-init
#   image: busybox
#   command: ['echo', 'hey']

extraInitContainers: ""
# - name: dummy-init
#   image: busybox
#   command: ['echo', 'hey']
updateStrategy:
  type: "Recreate"

service:
  type: ClusterIP
  port: 5601
  # nodePort: ""
  # labels: {}
  # annotations: {}
  #   cloud.google.com/load-balancer-type: "Internal"
  #   service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  #   service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  #   service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
  #   service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
  # loadBalancerSourceRanges: []
  #   0.0.0.0/0
ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/secure-backends: "true"
  className: "nginx"
  pathtype: ImplementationSpecific
  hosts:
    - host: kibana.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: my-secret
      hosts:
        - kibana.local
#ingress:
#  enabled: true
#  className: "nginx"
#  annotations:
#    nginx.ingress.kubernetes.io/rewrite-target: /
#  hosts:
#    - host: kibana.local
#      paths:
#        - path: /
#          pathType: Prefix
#  tls: []
#  annotations:
#    kubernetes.io/ingress.class: nginx
#    kubernetes.io/tls-acme: "true"
#  path: /
#  hosts:
#    - chart-example.local
#  tls: []
#    - secretName: chart-example-tls
#      hosts:
#        - chart-example.local
#readinessProbe:
#  failureThreshold: 3
#  initialDelaySeconds: 10
#  periodSeconds: 10
#  successThreshold: 3
#  timeoutSeconds: 5

readinessProbe:
  httpGet:
    scheme: HTTPS
    path: /status
    port: 5601
    insecureSkipTLSVerify: true
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3
  successThreshold: 1 # 1 is enough for readiness
  timeoutSeconds: 5
imagePullSecrets: []
nodeSelector: {}
tolerations: []
affinity: {}
nameOverride: ""
fullnameOverride: ""
lifecycle: {}
#  preStop:
#    exec:
#      command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
#  postStart:
#    exec:
#      command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
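For reference, the secret referenced in extraEnvs above can be verified like this (names taken from the values file; note that the commented-out variant uses the key "service-token" while the active one uses "service_token"):

# Print the token stored under the key "service_token" in the referenced secret;
# an error here would mean the secret or key name does not match
kubectl get secret kibana-service-token -o jsonpath='{.data.service_token}' | base64 -d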
Below is the Elasticsearch values.yaml:
---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.roles=master
# https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#node-roles
roles:
  - master
  - data
  - data_content
  - data_hot
  - data_warm
  - data_cold
  - ingest
  - ml
  - remote_cluster_client
  - transform
replicas: 1
minimumMasterNodes: 1
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
#  elasticsearch.yml: |
#    key:
#      nestedkey: value
#  log4j2.properties: |
#    key = value
createCert: true
esJvmOptions: {}
#  processors.options: |
#    -XX:ActiveProcessorCount=3
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
#  - secretRef:
#      name: env-secret
#  - configMapRef:
#      name: config-map
# Disable it to use your own elastic-credential Secret.
secret:
  enabled: true
  password: "" # generated randomly if not defined
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
#  - name: elastic-certificates
#    secretName: elastic-certificates
#    path: /usr/share/elasticsearch/config/certs
#    defaultMode: 0755

hostAliases: []
#  - ip: "127.0.0.1"
#    hostnames:
#      - "foo.local"
#      - "bar.local"
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "8.5.1"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
#  iam.amazonaws.com/role: es-cluster

# additional labels
labels: {}
#esJavaOpts: "" # example: "-Xmx1g -Xms1g"
esJavaOpts: "-Xms1g -Xmx1g"
resources:
  requests:
    cpu: "1000m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"

initResources: {}
#  limits:
#    cpu: "25m"
#    memory: "128Mi"
#  requests:
#    cpu: "25m"
#    memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
  automountToken: true
podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir
persistence:
  enabled: true
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  annotations: {}
extraVolumes: []
#  - name: extras
#    emptyDir: {}

extraVolumeMounts: []
#  - name: extras
#    mountPath: /usr/share/extras
#    readOnly: true

extraContainers: []
#  - name: do-something
#    image: busybox
#    command: ['do', 'something']

extraInitContainers: []
#  - name: do-something
#    image: busybox
#    command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true
protocol: https
httpPort: 9200
transportPort: 9300
service:
  enabled: true
  labels: {}
  labelsHeadless: {}
  type: ClusterIP
  # Consider that all endpoints are considered "ready" even if the Pods themselves are not
  # https://kubernetes.io/docs/reference/kubernetes-api/service-resources/service-v1/#ServiceSpec
  publishNotReadyAddresses: false
  # nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
#readinessProbe:
#  failureThreshold: 3
#  initialDelaySeconds: 10
#  periodSeconds: 10
#  successThreshold: 3
#  timeoutSeconds: 5

readinessProbe:
  httpGet:
    path: /_cluster/health
    port: 9200
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3
  successThreshold: 1 # 1 is enough for readiness
  timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/secure-backends: "true"
  className: "nginx"
  pathtype: ImplementationSpecific
  hosts:
    - host: elasticsearch.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: my-secret
      hosts:
        - elasticsearch.local
#ingress:
#  enabled: true
#  annotations:
#    ingress.kubernetes.io/ssl-passthrough: "true"
#    kubernetes.io/ingress.class: nginx
#    nginx.ingress.kubernetes.io/secure-backends: "true"
#    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
#  ingressClassName: "nginx"
#  pathtype: ImplementationSpecific
#  hosts:
#    - host: elasticsearch.local
#      paths:
#        - path: /
#  tls:
#    - secretName: my-secret
#      hosts:
#        - "elasticsearch.local"
lifecycle: {}
#  preStop:
#    exec:
#      command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
#  postStart:
#    exec:
#      command:
#        - bash
#        - -c
#        - |
#          #!/bin/bash
#          # Add a template to adjust number of shards/replicas
#          TEMPLATE_NAME=my_template
#          INDEX_PATTERN="logstash-*"
#          SHARD_COUNT=8
#          REPLICA_COUNT=1
#          ES_URL=http://localhost:9200
#          while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
#          curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
sysctlInitContainer:
  enabled: true
keystore: []
networkPolicy:
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## In order for a Pod to access Elasticsearch, it needs to have the following label:
  ## {{ template "uname" . }}-client: "true"
  ## Example for default configuration to access HTTP port:
  ## elasticsearch-master-http-client: "true"
  ## Example for default configuration to access transport port:
  ## elasticsearch-master-transport-client: "true"
  http:
    enabled: false
    ## if explicitNamespacesSelector is not set or set to {}, only client Pods being in the networkPolicy's namespace
    ## and matching all criteria can reach the DB.
    ## But sometimes, we want the Pods to be accessible to clients from other namespaces, in this case, we can use this
    ## parameter to select these namespaces
    ##
    # explicitNamespacesSelector:
    #   # Accept from namespaces with all those different rules (only from whitelisted Pods)
    #   matchLabels:
    #     role: frontend
    #   matchExpressions:
    #     - {key: role, operator: In, values: [frontend]}
    ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
    ##
    # additionalRules:
    #   - podSelector:
    #       matchLabels:
    #         role: frontend
    #   - podSelector:
    #       matchExpressions:
    #         - key: role
    #           operator: In
    #           values:
    #             - frontend
  transport:
    ## Note that all Elasticsearch Pods can talk to themselves using transport port even if enabled.
    enabled: false
    # explicitNamespacesSelector:
    #   matchLabels:
    #     role: frontend
    #   matchExpressions:
    #     - {key: role, operator: In, values: [frontend]}
    # additionalRules:
    #   - podSelector:
    #       matchLabels:
    #         role: frontend
    #   - podSelector:
    #       matchExpressions:
    #         - key: role
    #           operator: In
    #           values:
    #             - frontend

tests:
  enabled: true
Even for Elasticsearch, when I describe the pod I get this warning:

Warning  Unhealthy  22m (x2 over 22m)  kubelet  Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
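For reference, the cluster health that this probe waits on can be queried directly (the credentials secret name below is what the 8.5.1 chart created in my setup; adjust if yours differs):

# Fetch the generated elastic password, then ask Elasticsearch for cluster health;
# a single-node cluster with replica shards typically stays yellow, so a probe
# waiting for wait_for_status=green may never succeed
PASSWORD=$(kubectl get secret elasticsearch-master-credentials -o jsonpath='{.data.password}' | base64 -d)
kubectl exec elasticsearch-master-0 -- curl -sk -u "elastic:$PASSWORD" "https://localhost:9200/_cluster/health?pretty"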