Kibana Ingress in GCP showing backend service as unhealthy (ECK 1.1.0)

Hi Team,

I patched my ECK operator from 1.0.1 to 1.1.0 and added a readiness probe to the Kibana CR. Everything was fine, but recently I enabled the file realm along with the native realm in the Elasticsearch CR like this:

 xpack.security.authc.realms.file.file1.order: 0
 xpack.security.authc.realms.native.native1.order: 1
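For context, those settings sit under nodeSets[].config in the Elasticsearch CR. Trimmed to the relevant part, it looks roughly like this (the nodeSet name is a placeholder):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-config
spec:
  version: 7.4.0
  nodeSets:
  - name: default          # nodeSet name assumed, not from the thread
    count: 1
    config:
      # file realm first, native realm second
      xpack.security.authc.realms.file.file1.order: 0
      xpack.security.authc.realms.native.native1.order: 1
```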

Now the Kibana ingress is not working; it shows the backend service as unhealthy. My Kibana CR readiness probe is below:

 readinessProbe:
   httpGet:
     scheme: HTTP
     path: "/login"
     port: 5601
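For context, that probe is set on the kibana container via the pod template in the Kibana CR, roughly like this (a sketch of the relevant part only):

```yaml
podTemplate:
  spec:
    containers:
    - name: kibana
      # custom readiness probe over plain HTTP, since self-signed TLS is disabled
      readinessProbe:
        httpGet:
          scheme: HTTP
          path: /login
          port: 5601
```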

The Kibana pod itself is running fine and I don't see any issue in the logs. When I request the Kibana status endpoint with curl I get a 302, which is the expected response according to other threads discussed here. This is the output:

HTTP/1.1 302 Found
location: /login?next=%2Fstatus
kbn-name: kibana
kbn-xpack-sig: cb89bbb4fcc97ac9262b1b2bc96554f1
cache-control: no-cache
content-length: 0
Date: Wed, 03 Jun 2020 10:09:09 GMT
Connection: keep-alive

kibana logs:

{"type":"response","@timestamp":"2020-06-03T10:10:07Z","tags":[],"pid":6,"method":"get","statusCode":200,"req":{"url":"/login","method":"get","headers":{"host":"10.124.12.42:5601","user-agent":"kube-probe/1.16+","accept-encoding":"gzip","connection":"close"},"remoteAddress":"10.90.0.172","userAgent":"10.90.0.172"},"res":{"statusCode":200,"responseTime":26,"contentLength":9},"message":"GET /login 200 26ms - 9.0B"}

Let me know if you need any other information.

You can upgrade to ECK 1.1.2 to get the readiness probe fix.

Also, by default you can use both the file realm and the native realm; it should not be necessary to customize xpack.security.authc.realms.*

Okay, thanks Michael for the quick reply.

I will upgrade ECK and keep you posted on whether that fixes the issue.

Hey Michael,

I updated the ECK image from 1.1.0 to 1.1.2, but still no luck; my GCP ingress keeps failing.

I can't see any errors in the pod logs or the operator logs. I am using an external load balancer with the ingress.

 apiVersion: extensions/v1beta1
 kind: Ingress
 metadata:
   name: kibana-ingress
 spec:
   rules:
   - http:
       paths:
       - path: /*
         backend:
           serviceName: kibana-config-kb-http
           servicePort: 5601

This is my ingress specification, and my Kibana CR is as follows:

 apiVersion: kibana.k8s.elastic.co/v1
 kind: Kibana
 metadata:
   name: kibana-config
 spec:
   version: 7.4.0
   count: 1
   elasticsearchRef:
     name: "elasticsearch-config"
   http:
     service:
       spec:
         type: LoadBalancer
     tls:
       selfSignedCertificate:
         disabled: true
   # this shows how to customize the Kibana pods
   # with labels and resource limits
   podTemplate:
     metadata:
       labels:
         kibana: node
     spec:
       containers:
       - name: kibana
         resources:
           limits:
             memory: 1Gi
             cpu: 1

Please let me know what I am missing now.

Using ECK 1.1.2 I had the ingress successfully running with the following manifest:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-config
spec:
  version: 7.4.0
  count: 1
  elasticsearchRef:
    name: "elasticsearch-config"
  http:
    service:
      spec:
        type: LoadBalancer
    tls:
      selfSignedCertificate:
        disabled: true
  # this shows how to customize the Kibana pods
  # with labels and resource limits
  podTemplate:
    metadata:
      labels:
        kibana: node
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: 1
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
spec:
  backend:
    serviceName: kibana-config-kb-http
    servicePort: 5601

Could you get a kubectl describe of the ingress ?
You should see something like this:

> kubectl describe ingress.extensions/kibana-ingress 
Name:             kibana-ingress
Namespace:        default
Address:          34.XX.XX.XX
Default backend:  kibana-config-kb-http:5601 (10.28.33.54:5601)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     kibana-config-kb-http:5601 (10.28.33.54:5601)
Annotations:
  ingress.kubernetes.io/backends:                    {"k8s-be-30662--5c0660606c074c9":"Unknown"}
  ingress.kubernetes.io/forwarding-rule:             k8s-fw-default-kibana-ingress--5c0660606c074c9
  ingress.kubernetes.io/target-proxy:                k8s-tp-default-kibana-ingress--5c0660606c074c9
  ingress.kubernetes.io/url-map:                     k8s-um-default-kibana-ingress--5c0660606c074c9
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kibana-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"kibana-config-kb-http","servicePort":5601}}}

Events:
  Type     Reason     Age                  From                     Message
  ----     ------     ----                 ----                     -------
  Normal   ADD        9m2s                 loadbalancer-controller  default/kibana-ingress
  Warning  Translate  9m2s (x4 over 9m2s)  loadbalancer-controller  error while evaluating the ingress spec: could not find service "default/kibana-config-kb-http"
  Normal   CREATE     8m9s                 loadbalancer-controller  ip: 34.XX.XX.XX

Sure Michael,

Here is the output:

C:\Program Files (x86)\Google\Cloud SDK>kubectl describe ingress kibana-ingress -n test-elk-demo1
Name:             kibana-ingress
Namespace:        test-elk-demo1
Address:          <external-address>
Default backend:  default-http-backend:80 (10.124.14.14:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /*   kibana-config-kb-http:5601 (10.124.13.217:5601)
Annotations:
  ingress.kubernetes.io/backends:                    {"k8s-be-30171--b34c3dc7e9014c61":"UNHEALTHY","k8s-be-30869--b34c3dc7e9014c61":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule:             k8s2-fr-ry20l2z5-test-elk-demo1-kibana-ingress-3g346tsu
  ingress.kubernetes.io/target-proxy:                k8s2-tp-ry20l2z5-test-elk-demo1-kibana-ingress-3g346tsu
  ingress.kubernetes.io/url-map:                     k8s2-um-ry20l2z5-test-elk-demo1-kibana-ingress-3g346tsu
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kibana-ingress","namespace":"test-elk-demo1"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"kibana-config-kb-http","servicePort":5601},"path":"/*"}]}}]}}

It seems that a healthy backend is detected: "k8s-be-30869--b34c3dc7e9014c61":"HEALTHY"

Could you check the connectivity again and see if there is any other message in the Cloud Console?

Sure Michael, let me check with the team if this is some networking issue from the cluster perspective.

Thanks for your help.

There were some unwanted network policies that were causing this issue. Thanks Michael for your help; the ingress is working fine now.
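For anyone who hits the same symptom: GCE ingress health checks come from Google's documented probe ranges (130.211.0.0/22 and 35.191.0.0/16), so if NetworkPolicies restrict traffic to the Kibana pods, something along these lines is needed to let the health checks through (a sketch only; the policy name is made up, and the namespace and pod label are taken from the manifests above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gclb-health-checks   # name assumed
  namespace: test-elk-demo1
spec:
  podSelector:
    matchLabels:
      kibana: node                 # label from the Kibana podTemplate above
  ingress:
  - from:
    # Google Cloud load balancer / health check source ranges
    - ipBlock:
        cidr: 130.211.0.0/22
    - ipBlock:
        cidr: 35.191.0.0/16
    ports:
    - protocol: TCP
      port: 5601
```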