GCP health check not working on a secured Kibana with X-Pack


I'm configuring a secured ELK cluster on GKE using the free X-Pack security that comes with the Basic license.

I've built a Kubernetes StatefulSet manifest for Elasticsearch with xpack.security.enabled set to true and so on. My Kibana deployment has a readinessProbe pointing to '/api/status' with an Authorization header containing the correct base64-encoded user:password.
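
Roughly, the security-relevant part of that Elasticsearch StatefulSet looks like this (a trimmed, illustrative sketch rather than the real manifest; the image tag, discovery settings and the rest of the spec are assumptions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
        env:
        # Turn on the security features of the free Basic license
        - name: xpack.security.enabled
          value: "true"
        - name: discovery.type
          value: single-node
        ports:
        - name: http
          containerPort: 9200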

Here is my Kibana deployment and the associated Ingress:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.1.1
        livenessProbe:
          httpGet:
            path: /api/status
            port: 5601
            httpHeaders:
            - name: Authorization
              value: Basic blabla==
          initialDelaySeconds: 40
          timeoutSeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/status
            port: 5601
            httpHeaders:
              - name: Authorization
                value: Basic blabla==
          initialDelaySeconds: 40
          failureThreshold: 3
          timeoutSeconds: 5
          periodSeconds: 10
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch:9200
          - name: ELASTICSEARCH_USERNAME
            value: kibana
          - name: ELASTICSEARCH_PASSWORD
            value: blabla
        ports:
        - name: kibana
          containerPort: 5601
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging
spec:
  backend:
    serviceName: kibana
    servicePort: 5601

When I apply the Ingress, GCP automatically creates an HTTP health check for the load balancer on the path '/' and expects a 200 status code. But Kibana expects an Authorization header in order to respond with a 200.

If I manually change the load balancer's HTTP health check to a TCP one, everything is fine, but GCP automatically reverts my change and my Kibana deployment becomes inaccessible again.

@manuBriot Looking at https://github.com/kubernetes/ingress-gce/issues/42, it seems like this is a known issue with the GCE ingress controller. As far as I can see, Kibana itself is working as expected in this case.

Reading through it, it sounds like the health check is supposed to match the readiness probe, but that has caveats and it defaults to checking the root path with no headers. It sounds like they intend to allow it to be configured via a BackendConfig custom resource definition at some point: https://github.com/kubernetes/ingress-gce/issues/42#issuecomment-405386857
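
For completeness, here is a sketch of what that BackendConfig-based configuration looks like on GKE versions where it is available (the resource name is mine; note that BackendConfig health checks can't send request headers, so requestPath has to be a path Kibana serves without authentication, e.g. /login on a security-enabled 7.x Kibana, which is worth verifying on your version):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: kibana-healthcheck
  namespace: kube-logging
spec:
  healthCheck:
    type: HTTP
    # No custom headers are possible here, so use an unauthenticated path
    requestPath: /login
    port: 5601

The Service that the Ingress points at then needs the annotation cloud.google.com/backend-config: '{"default": "kibana-healthcheck"}' so the controller picks it up.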

Offhand I can't see a way to work around this and still use the GCE ingress controller. It might be worth exploring the nginx ingress controller as an option?
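
If you go that route, the Ingress itself would only need the class annotation, roughly like this (a sketch, assuming an nginx ingress controller is already running in the cluster and the kibana Service exists):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging
  annotations:
    # Tell the nginx controller (not the GCE one) to handle this Ingress
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: kibana
    servicePort: 5601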

Hello Peter,

The workaround I chose was to expose Kibana via a Kubernetes ClusterIP Service.
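
Something like this (a minimal sketch matching the Deployment above; being ClusterIP it is only reachable from inside the cluster, e.g. via kubectl port-forward):

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
spec:
  type: ClusterIP
  selector:
    app: kibana
  ports:
  - name: kibana
    port: 5601
    targetPort: 5601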

The nginx ingress controller could be fine, but I think we can't mix different types of ingress controllers inside a K8s cluster (really not sure about this).

Thank you anyway
