7.2.0 - kubernetes - why is it trying to start as root?

I have a problem running kibana 7.2.0 in kubernetes.

Here is my yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: poc-kibana
  labels:
    cluster: poc
spec:
  selector:
    matchLabels:
      app: kibana
      cluster: poc
  replicas: 1
  template:
    metadata:
      labels:
        app: kibana
        cluster: poc
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        ports:
        - containerPort: 5601
          name: http

The container keeps failing; the logs show:
Kibana should not be run as root. Use --allow-root to continue.
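To confirm which user the container actually starts as, a throwaway pod with id as the command works (the crash-looping pod dies too fast to exec into; kibana-debug is just a scratch name):

kubectl run kibana-debug --rm -it --restart=Never \
  --image=docker.elastic.co/kibana/kibana:7.2.0 --command -- id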

When I just use image tag 7.1.1 it works fine, but I want to update this cluster to 7.2.0.

Description of the pod:

kubectl describe pod poc-kibana-5c7bf758c9-7jjlc
Name:               poc-kibana-5c7bf758c9-7jjlc
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               server/x.x.x.x
Start Time:         Mon, 15 Jul 2019 09:53:44 +0200
Labels:             app=kibana
                    cluster=poc
                    pod-template-hash=5c7bf758c9
Annotations:        cni.projectcalico.org/podIP: 192.168.179.159/32
Status:             Running
IP:                 192.168.179.159
Controlled By:      ReplicaSet/poc-kibana-5c7bf758c9
Containers:
  kibana:
    Container ID:   docker://7d517c8a77a17cbde3b3238c577c51379613ef3a4b6d220f484974b4b54b7ff9
    Image:          docker.elastic.co/kibana/kibana:7.2.0
    Image ID:       docker-pullable://docker.elastic.co/kibana/kibana@sha256:1579f95db4242327cf0637a79cbbe095fcae11a772e324482adf4fe0e0b3ac82
    Port:           5601/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 15 Jul 2019 09:56:52 +0200
      Finished:     Mon, 15 Jul 2019 09:56:52 +0200
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g6n94 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-g6n94:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-g6n94
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Pulled     5m55s (x5 over 7m32s)   kubelet, server    Container image "docker.elastic.co/kibana/kibana:7.2.0" already present on machine
  Normal   Created    5m55s (x5 over 7m32s)   kubelet, server    Created container kibana
  Normal   Started    5m55s (x5 over 7m32s)   kubelet, server    Started container kibana
  Normal   Scheduled  5m38s                   default-scheduler  Successfully assigned default/poc-kibana-5c7bf758c9-7jjlc to server
  Warning  BackOff    2m22s (x25 over 7m30s)  kubelet, server    Back-off restarting failed container

I have also installed Elastic Cloud on Kubernetes, but I am not using it here. I mention it only in case it is related to this problem.

When I start the container directly with Docker (sudo docker run -it --name=debug --rm docker.elastic.co/kibana/kibana:7.2.0), it starts correctly.
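Checking the user baked into the image, just to be sure (I'd expect this to print the non-root user the image ships with):

sudo docker inspect --format '{{.Config.User}}' docker.elastic.co/kibana/kibana:7.2.0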

Any ideas are welcome.
Regards, Andreas

@asp in the latest Elastic Helm charts a change was introduced to make sure Kibana runs as non-root. Could you use the latest Helm charts or apply the same configuration? https://github.com/elastic/helm-charts/pull/172

Thanks for the reply. Unfortunately I am currently unfamiliar with Helm (syntax, etc.), so it is quite difficult for me to find the configuration lines needed to run Kibana as non-root.

Also, I don't understand why the container works fine in plain Docker (docker run) but has problems when running as a pod in Kubernetes.

Could you please point me to the configuration lines I need to use?

In the future I will also give Helm a try, but I have to wait until the Helm charts are GA, because my stack needs to go to production soon.

Thanks, Andreas

ahhhh....

spec:
  securityContext:
    # capabilities.drop: [ALL]
    runAsNonRoot: true
    runAsUser: 1000

this did the trick.
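For completeness, here is where it sits in the full Deployment (the pod-level securityContext goes under spec.template.spec):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: poc-kibana
  labels:
    cluster: poc
spec:
  selector:
    matchLabels:
      app: kibana
      cluster: poc
  replicas: 1
  template:
    metadata:
      labels:
        app: kibana
        cluster: poc
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        ports:
        - containerPort: 5601
          name: http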

But just for my understanding: does the container detect whether it is running in Kubernetes or in plain Docker? I don't understand why it works in Docker but not in Kubernetes without this config.

@asp our Docker image is always configured to run as a non-root user: https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/dockerfile.template.js#L81
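Roughly, that part of the template ends up in the generated Dockerfile as something like this (a paraphrased sketch, not copied verbatim from the template):

# create a kibana user with uid/gid 1000, then drop root privileges
RUN groupadd --gid 1000 kibana && \
    useradd --uid 1000 --gid 1000 --home-dir /usr/share/kibana --no-create-home kibana
USER kibana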

I think that's why it always works when running as a plain Docker container.

I'm not sure, but my thinking is that your Kubernetes configuration, or the default securityContext (when you don't provide one), may end up running things as root, which ultimately triggers the problem.
