More Kibana pods getting created than the given replica count

We deployed ECK (1.3.0) in a GKE cluster with 2 data-ingest pods, 3 master pods, and 1 Kibana pod (7.9.0). The Kibana configuration was deployed with a replica count of 1, but many Kibana pods keep getting created and terminated, while only one pod stays stable.

kubectl -n <ns> get po
NAME                                                     READY   STATUS        RESTARTS   AGE
elastic-operator-0                                       1/1     Running       0          8h
elasticsearch-config-es-data-ingest-0                    1/1     Running       0          6h32m
elasticsearch-config-es-data-ingest-1                    1/1     Running       0          6h32m
elasticsearch-config-es-master-0                         1/1     Running       0          6h32m
elasticsearch-config-es-master-1                         1/1     Running       0          6h32m
elasticsearch-config-es-master-2                         1/1     Running       0          6h32m
kibana-config-kb-794684f946-42c98                        0/1     Terminating   0          17s
kibana-config-kb-794684f946-fbdlk                        0/1     Terminating   0          29s
kibana-config-kb-794684f946-flddp                        0/1     Terminating   0          22s
kibana-config-kb-794684f946-j6vfp                        0/1     Terminating   0          40s
kibana-config-kb-794684f946-jxzfn                        1/1     Terminating   0          52s
kibana-config-kb-794684f946-p2gnd                        1/1     Terminating   0          60s
kibana-config-kb-794684f946-q9s8h                        0/1     Terminating   0          7s
kibana-config-kb-794684f946-v97cw                        0/1     Terminating   0          2s
kibana-config-kb-84588c8576-crrhj                        1/1     Running       0          8h
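
The listing shows two different ReplicaSet hashes for the Kibana Deployment (794684f946 and 84588c8576), which usually means the Deployment's pod template is being rewritten repeatedly, triggering new rollouts. If it helps, something like the following can confirm this (the label selector is the standard label ECK puts on Kibana resources, and the Deployment created by ECK should be named kibana-config-kb; adjust the namespace to yours):

kubectl -n <ns> get rs -l kibana.k8s.elastic.co/name=kibana-config
kubectl -n <ns> rollout history deployment/kibana-config-kb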

Below are the operator pod logs related to Kibana:

{"log.level":"error","@timestamp":"2021-06-22T04:13:54.213Z","log.logger":"controller","message":"Reconciler error","service.version":"1.3.0+6db1914b","service.type":"eck","ecs.version":"1.4.0","controller":"kibana-controller","name":"quickstart","namespace":"do-es-kib","error":"Operation cannot be fulfilled on kibanas.kibana.k8s.elastic.co \"quickstart\": the object has been modified; please apply your changes to the latest version and try again","error.stack_trace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:246\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:90"}
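
This particular error ("the object has been modified; please apply your changes to the latest version and try again") is an optimistic-concurrency conflict that the operator normally retries on the next reconciliation, so on its own it may not explain the churn. To see whether the operator is repeatedly reconciling and updating the Kibana Deployment, its logs can be filtered for the kibana-controller, for example (assuming the operator pod is elastic-operator-0 in the same namespace, as in the pod listing above):

kubectl -n <ns> logs elastic-operator-0 | grep kibana-controller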

Please let me know if any additional information is needed for the investigation.

If you could share the YAML manifests you used, that would be helpful. I would also look at the Kibana logs. We have some guidance on how to extract application logs here: Troubleshooting methods | Elastic Cloud on Kubernetes [1.6] | Elastic
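
For pods that terminate within seconds, something like the following may still capture useful output (the label selector is the standard ECK label on Kibana pods; adjust the namespace and names as needed):

kubectl -n <ns> logs -l kibana.k8s.elastic.co/name=kibana-config --tail=200
kubectl -n <ns> logs <terminating-pod-name> --previous
kubectl -n <ns> get events --sort-by=.lastTimestamp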

kibana.yaml:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-config
spec:
  version: "{{ .Values.appTag }}"
  image: "{{ .Values.image.repository }}/kibana:{{ .Values.appTag }}"
  count: 1
  elasticsearchRef:
    name: "elasticsearch-config"
  # this shows how to customize the Kibana pods
  # with labels and resource limits
  podTemplate:
    metadata:
      labels:
        kibana: node
    spec:
      containers:
      - name: kibana
        resources:
          requests:
            memory: {{ .Values.resources.requestmemory }}
            cpu: {{ .Values.resources.requestcpu }}
          limits:
            memory: {{ .Values.resources.limitmemory }}
            cpu: {{ .Values.resources.limitcpu }}

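For reference, the template above assumes Helm values roughly along these lines; the key names mirror what the template references, but the values shown here are only illustrative, not taken from the actual chart:

# values.yaml (illustrative only)
appTag: "7.9.0"
image:
  repository: "my-registry.example.com/elastic"
resources:
  requestmemory: "1Gi"
  requestcpu: "500m"
  limitmemory: "1Gi"
  limitcpu: "1"
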
I was unable to capture the logs of the Kibana pods, as they were continuously being created and terminated. Below are the logs of the stable Kibana pod:

{"type":"response","@timestamp":"2021-06-22T14:05:09Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"240.0.16.228:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.99.0.51","userAgent":"10.99.0.51"},"res":{"statusCode":200,"responseTime":29,"contentLength":9},"message":"GET /login 200 29ms - 9.0B"}
{"type":"response","@timestamp":"2021-06-22T14:05:19Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"240.0.16.228:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.99.0.51","userAgent":"10.99.0.51"},"res":{"statusCode":200,"responseTime":29,"contentLength":9},"message":"GET /login 200 29ms - 9.0B"}
{"type":"response","@timestamp":"2021-06-22T14:05:29Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"240.0.16.228:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.99.0.51","userAgent":"10.99.0.51"},"res":{"statusCode":200,"responseTime":27,"contentLength":9},"message":"GET /login 200 27ms - 9.0B"}
{"type":"response","@timestamp":"2021-06-22T14:05:39Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"240.0.16.228:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.99.0.51","userAgent":"10.99.0.51"},"res":{"statusCode":200,"responseTime":29,"contentLength":9},"message":"GET /login 200 29ms - 9.0B"}
{"type":"response","@timestamp":"2021-06-22T14:05:49Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"240.0.16.228:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.99.0.51","userAgent":"10.99.0.51"},"res":{"statusCode":200,"responseTime":25,"contentLength":9},"message":"GET /login 200 25ms - 9.0B"}
{"type":"response","@timestamp":"2021-06-22T14:05:59Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"240.0.16.228:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.99.0.51","userAgent":"10.99.0.51"},"res":{"statusCode":200,"responseTime":32,"contentLength":9},"message":"GET /login 200 32ms - 9.0B"}
{"type":"response","@timestamp":"2021-06-22T14:06:09Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"240.0.16.228:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.99.0.51","userAgent":"10.99.0.51"},"res":{"statusCode":200,"responseTime":27,"contentLength":9},"message":"GET /login 200 27ms - 9.0B"}
{"type":"response","@timestamp":"2021-06-22T14:06:19Z","tags":[],"pid":7,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"240.0.16.228:5601","user-agent":"kube-probe/1.18+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.99.0.51","userAgent":"10.99.0.51"},"res":{"statusCode":200,"responseTime":25,"contentLength":9},"message":"GET /login 200 25ms - 9.0B"}
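
The stable pod only shows successful readiness-probe requests against /login, so Kibana itself appears healthy; the churn seems to be happening at the Deployment level. One way to find the field that keeps changing is to diff the pod templates of the two ReplicaSets visible in the pod listing (the ReplicaSet names below are derived from the pod names above):

kubectl -n <ns> get rs kibana-config-kb-794684f946 -o yaml > rs-new.yaml
kubectl -n <ns> get rs kibana-config-kb-84588c8576 -o yaml > rs-old.yaml
diff rs-old.yaml rs-new.yaml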

@pebrc: Could you please advise?
