Problems with Traefik/Keycloak in front of Kibana

Hi!

I want to use Keycloak as a standard way of authenticating users to applications running in our Kubernetes clusters. One of the clusters runs the Elastic ECK operator (v1.1.1), which we use to deploy Elasticsearch clusters and Kibana as a frontend. To keep things as simple as possible, I’ve done the following.

Deployed Kibana

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: {{ .Values.kibana.name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
spec:
  version: {{ .Values.kibana.version }}
  count: {{ .Values.kibana.instances }}
  elasticsearchRef:
    name: {{ .Values.kibana.elasticCluster }}
    namespace: {{ .Release.Namespace }}
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: SERVER_BASEPATH
          value: {{ .Values.kibana.serverBasePath }}
        resources:
          requests:
            {{- if not .Values.kibana.cpu.enableBurstableQoS }}
            cpu: {{ .Values.kibana.cpu.requests }}
            {{- end }}
            memory: {{ .Values.kibana.memory.requests }}Gi
          limits:
            {{- if not .Values.kibana.cpu.enableBurstableQoS }}
            cpu: {{ .Values.kibana.cpu.limits }}
            {{- end }}
            memory: {{ .Values.kibana.memory.limits }}Gi
  http:
    tls:
      selfSignedCertificate:
        disabled: true
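
For reference, the values this template pulls in are roughly of the following shape. The field names are inferred from the template references above; I've replaced the real values with placeholders except where they appear elsewhere in this post (the name, base path and port):

kibana:
  name: cap-logging                        # matches the label selector used further down
  version: <kibana version>
  instances: 1
  elasticCluster: <elasticsearch cluster name>
  serverBasePath: /service/logging/kibana  # the path used in the URLs below
  port: 5601                               # Kibana's default HTTP port
  cpu:
    enableBurstableQoS: false
    requests: 1                            # example figures only
    limits: 2
  memory:
    requests: 1                            # the template appends "Gi"
    limits: 2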

Created Ingress

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: kibana-{{ .Values.kibana.name }}-stripprefix
  namespace: {{ .Release.Namespace }}
spec:
  stripPrefix:
    prefixes: 
      - {{ .Values.kibana.serverBasePath }}
    forceSlash: true

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.kibana.name }}-ingress
  namespace: {{ .Release.Namespace }}
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: http
    traefik.ingress.kubernetes.io/router.middlewares: {{ .Release.Namespace }}-kibana-{{ .Values.kibana.name }}-stripprefix@kubernetescrd
spec:
  rules:
  - http:
      paths:
      - path: {{ .Values.kibana.serverBasePath }}
        backend:
          serviceName: {{ .Values.kibana.name }}-kb-http
          servicePort: {{ .Values.kibana.port }}
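
Side note: this uses the networking.k8s.io/v1beta1 Ingress API, which is removed from Kubernetes 1.22 onwards. For reference only, a roughly equivalent networking.k8s.io/v1 form would be (the pathType field and the nested backend.service block are the main differences):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.kibana.name }}-ingress
  namespace: {{ .Release.Namespace }}
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: http
    traefik.ingress.kubernetes.io/router.middlewares: {{ .Release.Namespace }}-kibana-{{ .Values.kibana.name }}-stripprefix@kubernetescrd
spec:
  rules:
  - http:
      paths:
      - path: {{ .Values.kibana.serverBasePath }}
        pathType: Prefix
        backend:
          service:
            name: {{ .Values.kibana.name }}-kb-http
            port:
              number: {{ .Values.kibana.port }}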

Result
Deploying the above works perfectly fine. I’m able to reach the Kibana UI through the external IP exposed by our MetalLB component: I simply enter http://external IP/service/logging/kibana, I’m presented with the Kibana login screen, and I can log in using the “built-in” authentication process.

Adding the Keycloak Gatekeeper
Now, if I add the following to the Kibana manifest, effectively adding the Keycloak Gatekeeper sidecar to the Kibana Pod:

  - name: {{ .Values.kibana.name }}-gatekeeper
    image: "{{ .Values.kibana.keycloak.gatekeeper.repository }}/docker-r/keycloak/keycloak-gatekeeper:{{ .Values.kibana.keycloak.gatekeeper.version }}"
    args:
      - --config=/etc/keycloak-gatekeeper.conf
    ports:
      - containerPort: 3000
        name: proxyport
    volumeMounts:
    - name: gatekeeper-config
      mountPath: /etc/keycloak-gatekeeper.conf
      subPath: keycloak-gatekeeper.conf
  volumes:
    - name: gatekeeper-config
      configMap:
        name: {{ .Release.Name }}-gatekeeper-config

with the following ConfigMap, which is mounted into the container:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-gatekeeper-config 
  namespace: {{ .Release.Namespace }}
data: 
  keycloak-gatekeeper.conf: |+
    redirection-url: {{ .Values.kibana.keycloak.gatekeeper.redirectionUrl }}
    discovery-url: https://.../auth/realms/{{ .Values.kibana.keycloak.gatekeeper.realm }}
    skip-openid-provider-tls-verify: true
    client-id: kibana
    client-secret: {{ .Values.kibana.keycloak.gatekeeper.clientSecret }}
    enable-refresh-tokens: true
    encryption-key: ...
    listen: :3000
    tls-cert:
    tls-private-key:
    secure-cookie: false
    upstream-url: {{ .Values.kibana.keycloak.gatekeeper.upstreamUrl }}
    resources:
    - uri: /*
    groups:
    - kibana

The upstream-url points to http://127.0.0.1:5601

and add an intermediary service:
In order to explicitly address the Gatekeeper proxy, I added another service, “keycloak-proxy”, as follows:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.kibana.name }}-keycloak-proxy
  namespace: {{ .Release.Namespace }}
spec:
  type: ClusterIP
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: cap-logging
  ports:
    - name: http
      protocol: TCP
      port: 8888
      targetPort: proxyport
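
One thing I should double-check myself: the selector above hardcodes cap-logging while the rest of the chart is templated on {{ .Values.kibana.name }}, so the service would select no pods if the chart were ever installed under a different name. Assuming the ECK-applied kibana.k8s.elastic.co/name label always equals the Kibana resource name (as the other manifests here assume), the templated form would be:

  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: {{ .Values.kibana.name }}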

and change the backend definition in the Ingress manifest to:

servicePort: 8888
serviceName: {{ .Values.kibana.name }}-keycloak-proxy

and then issue the same URL as above, http://external IP/service/logging/kibana, I’m redirected to http://external IP/oauth/authorize?state=0db97b79-b980-4cdc-adbe-707a5e37df1b and get a “404 Page not found” error.

If I reconfigure the “keycloak-proxy” service into a NodePort, expose it on, say, port 32767 and browse to http://host IP:32767, I’m presented with the Keycloak login screen on the Keycloak server!
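
My working theory for the 404 is a routing gap rather than a Gatekeeper failure (which would also explain why the NodePort test works): Gatekeeper redirects the browser to /oauth/authorize at the root of the external IP, but the Ingress above only has a rule for {{ .Values.kibana.serverBasePath }}, so Traefik has nothing to match /oauth against. Purely as a sketch (the resource name is made up, and the strip-prefix middleware is deliberately left off because Gatekeeper serves /oauth/* itself), a second Ingress routing the Gatekeeper endpoints to the same proxy service would look like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.kibana.name }}-oauth-ingress
  namespace: {{ .Release.Namespace }}
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: http
spec:
  rules:
  - http:
      paths:
      - path: /oauth
        backend:
          serviceName: {{ .Values.kibana.name }}-keycloak-proxy
          servicePort: 8888

Alternatively, if the Gatekeeper build in use can serve its own endpoints under a prefix (some versions expose base-uri/oauth-uri settings, which I haven't verified against 7.0.0), the existing base-path rule might cover the redirect on its own.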

If I look into the Gatekeeper startup log I find the following:

1.6018108005048046e+09 info starting the service {"prog": "keycloak-gatekeeper", "author": "Keycloak", "version": "7.0.0 (git+sha: f66e137, built: 03-09-2019)"}
1.6018108005051787e+09 info attempting to retrieve configuration discovery url {"url": "https://.../auth/realms/...", "timeout": "30s"}
1.601810800537417e+09 info successfully retrieved openid configuration from the discovery
1.6018108005392597e+09 info enabled reverse proxy mode, upstream url {"url": "http://127.0.0.1:5601"}
1.6018108005393562e+09 info using session cookies only for access and refresh tokens
1.6018108005393682e+09 info protecting resource {"resource": "uri: /*, methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT,TRACE, required: authentication only"}
1.6018108005398147e+09 info keycloak proxy service starting {"interface": ":3000"}
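
One thing I notice in the startup log: the /* resource is registered as “authentication only”, so the kibana group requirement from my ConfigMap doesn't seem to have been picked up. That matches the indentation in the config, where groups sits at the top level instead of inside the resource entry. If the intent is to require membership of the kibana group, the resource would presumably need to be written like this (a sketch, not verified against the 7.0.0 config parser):

    resources:
    - uri: /*
      groups:
      - kibana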

This is what I get when I try to access Kibana through the Gatekeeper proxy:
http://host/service/logging/kibana (gets redirected to) http://host/oauth/authorize?state=4dbde9e7-674c-4593-83f2-a8e5ba7cf6b5

and the Gatekeeper log:
1.601810963344485e+09 error no session found in request, redirecting for authorization {"error": "authentication session not found"}

I've been struggling with this for some time now and seem to be stuck! If anybody here knows what's going on I'd be very grateful.

This looks like a Gatekeeper redirect, as it's not a type of URL that Kibana redirects to.
I'll leave this open in case somebody else here has hit this, but you'll get more help from a Keycloak Gatekeeper-focused forum or support channel.

Hi Marius!

Thanks for your reply! Yes, you're 100% correct in your statement that this is related to Keycloak/Gatekeeper. I do have a related Kibana question, though. The way I've included the Gatekeeper sidecar is by adding another container to the Kibana deployment manifest, like this:

  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: SERVER_BASEPATH
          value: {{ .Values.kibana.serverBasePath }}
        resources:
          requests:
            {{- if not .Values.kibana.cpu.enableBurstableQoS }}
            cpu: {{ .Values.kibana.cpu.requests }}
            {{- end }}
            memory: {{ .Values.kibana.memory.requests }}Gi
          limits:
            {{- if not .Values.kibana.cpu.enableBurstableQoS }}
            cpu: {{ .Values.kibana.cpu.limits }}
            {{- end }}
            memory: {{ .Values.kibana.memory.limits }}Gi

# Gatekeeper proxy goes here...
      - name: {{ .Values.kibana.name }}-gatekeeper
        image: "{{ .Values.kibana.keycloak.gatekeeper.repository }}/docker-r/keycloak/keycloak-gatekeeper:{{ .Values.kibana.keycloak.gatekeeper.version }}"
        args:
          - --config=/etc/keycloak-gatekeeper.conf
        ports:
          - containerPort: 3000
            name: proxyport
        volumeMounts:
        - name: gatekeeper-config
          mountPath: /etc/keycloak-gatekeeper.conf
          subPath: keycloak-gatekeeper.conf
      volumes:
        - name: gatekeeper-config
          configMap:
            name: {{ .Release.Name }}-gatekeeper-config

I know that this is a "far-fetched" question, but could there be any NetworkPolicy- or RBAC-related issue here affecting the way things work? Elastic runs in the namespace "service-elastic" and Traefik runs in "service-traefik". I personally don't think these have anything to do with the problem I'm experiencing, but maybe they do...
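
To make the NetworkPolicy angle concrete: if "service-elastic" contained policies that only allow ingress to the Kibana pods on 5601, traffic from the Traefik pods in "service-traefik" to the Gatekeeper port 3000 would be dropped. A dropped connection would normally surface as a gateway timeout rather than a 404, so it probably isn't what's happening above, but it's cheap to rule out. Purely as an illustration of the shape such an allow rule takes, with a made-up name and assuming the kubernetes.io/metadata.name label is present on the Traefik namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traefik-to-gatekeeper
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      common.k8s.elastic.co/type: kibana
      kibana.k8s.elastic.co/name: {{ .Values.kibana.name }}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: service-traefik
      ports:
        - protocol: TCP
          port: 3000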

On a side note, would it be "wiser" to add the Gatekeeper proxy within the Kibana CRD? I'm reluctant to do this, as I'd very much prefer to leave the Elastic CRDs as they are.

Your thoughts on this are very welcome.
