ECK with OpenShift OAuth

I have successfully installed the ECK operator in OpenShift. Now I am trying to add the OpenShift oauth-proxy to Kibana so that users can be authenticated with LDAP. I created a ServiceAccount, but when I deploy Kibana with oauth-proxy I run into the error below. Looking inside the Kibana pod, it does not have the /var/run/secrets/kubernetes.io folder.

main.go:138: Invalid configuration:
  cannot read client-secret-file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
  missing setting: client-id
  missing setting: client-secret
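
This is how the missing mount can be confirmed from outside the pod (the pod name here is just a placeholder):

oc exec kibana-kb-<pod-id> -n bcnc-logging -c kibana-proxy -- ls /var/run/secrets/kubernetes.io/serviceaccount
# normally lists ca.crt, namespace and token; here it fails with "No such file or directory"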

Deployment YAML:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: bcnc-logging
spec:
  version: 7.6.1
  count: 1
  elasticsearchRef:
    name: "elasticsearch"
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 4Gi
            cpu: 1
      - name: kibana-proxy
        image: 'registry.redhat.io/openshift3/oauth-proxy:latest'
        imagePullPolicy: IfNotPresent
        args:
          - -provider=openshift
          - -https-address=:3000
          - -http-address=
          - -email-domain=*
          - -upstream=http://localhost:5601
          - -openshift-service-account=bcnc-logging-sa
          - -cookie-secret-file=/etc/proxy/secret/session_secret
          - -tls-cert=/etc/tls/private/tls.crt
          - -tls-key=/etc/tls/private/tls.key
          - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
          - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          - -skip-provider-button=true
        env:
          - name: OAP_DEBUG
            value: 'False'
          - name: OCP_AUTH_PROXY_MEMORY_LIMIT
            valueFrom:
              resourceFieldRef:
                containerName: kibana-proxy
                divisor: '0'
                resource: limits.memory
        ports:
          - containerPort: 3000
            name: oaproxy
            protocol: TCP
        resources:
          limits:
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 256Mi
        volumeMounts:
          - mountPath: /etc/tls/private
            name: secret-bcnc-logging-tls
          - mountPath: /etc/proxy/secret
            name: kibana-proxy
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
#      serviceAccount: bcnc-logging-sa
      serviceAccountName: bcnc-logging-sa
      terminationGracePeriodSeconds: 30
      volumes:
        - name: secret-bcnc-logging-tls
          secret:
            defaultMode: 420
            secretName: bcnc-logging-tls
        - name: kibana-proxy
          secret:
            defaultMode: 420
            secretName: bcnc-logging-proxy
---
apiVersion: v1
kind: Route
metadata:
  name: kibana
  namespace: bcnc-logging
spec:
  host: kibana-bcnc-logging.ip.dev.aws.com
  tls:
    termination: passthrough # Kibana is the TLS endpoint
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: kibana-kb-http

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: bcnc-logging-tls
  labels:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: kibana
  name: kibana-kb-http
  namespace: bcnc-logging
spec:
  ports:
  - name: https
    port: 5601
    protocol: TCP
    targetPort: 5601
  # - name: proxy
  #   port: 3000
  #   protocol: TCP
  #   targetPort: oaproxy
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: kibana
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
data:
  session_secret: XXXXXXXXXXXXXXXX
kind: Secret
metadata:
  name: bcnc-logging-proxy
  namespace: bcnc-logging
type: Opaque

---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.kibana: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"kibana"}}'
  name: bcnc-logging-sa
  namespace: bcnc-logging
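
The session_secret value in the bcnc-logging-proxy secret is elided above; a random value like it can be generated with something along these lines (one possible approach, not the only one):

oc create secret generic bcnc-logging-proxy -n bcnc-logging \
  --from-literal=session_secret=$(head -c 16 /dev/urandom | base64)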

Answered in the duplicate GitHub issue: https://github.com/elastic/cloud-on-k8s/issues/2753

Thanks @Anya_Sabo, it works now.
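
For anyone hitting the same error: as far as I can tell from that issue, the root cause is that ECK disables service account token automounting on the pods it manages, so the oauth-proxy sidecar never sees the token it is told to read via -client-secret-file. Re-enabling the automount in the podTemplate should restore the mount (a minimal sketch, assuming the rest of the spec above stays unchanged):

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: bcnc-logging
spec:
  podTemplate:
    spec:
      # podTemplate is a regular PodTemplateSpec, so standard pod fields apply
      automountServiceAccountToken: true
      serviceAccountName: bcnc-logging-sa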

Hello team,
I am trying to deploy Kibana with oauth-proxy, but I am running into the same error in the kibana-proxy pod.

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic
spec:
  version: 7.12.1
  count: 1
  elasticsearchRef:
    name: "elasticsearch-sample"
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: 1
      - name: kibana-proxy
        image: 'registry.redhat.io/openshift3/oauth-proxy:latest'
        imagePullPolicy: IfNotPresent
        args:
          - -provider=openshift
          - -https-address=:3000
          - -http-address=
          - -email-domain=*
          - -upstream=http://localhost:5601
          - -openshift-service-account=elastic-sa
          - -cookie-secret-file=/etc/proxy/secret/session_secret
          - -tls-cert=/etc/tls/private/tls.crt
          - -tls-key=/etc/tls/private/tls.key
          - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
          - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        env:
          - name: OAP_DEBUG
            value: 'False'
          - name: OCP_AUTH_PROXY_MEMORY_LIMIT
            valueFrom:
              resourceFieldRef:
                containerName: kibana-proxy
                divisor: '0'
                resource: limits.memory
        ports:
          - containerPort: 3000
            name: oaproxy
            protocol: TCP
        resources:
          limits:
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 256Mi
        volumeMounts:
          - mountPath: /etc/tls/private
            name: secret-elastic-tls
          - mountPath: /etc/proxy/secret
            name: kibana-proxy
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: elastic-sa
      serviceAccountName: elastic-sa
      terminationGracePeriodSeconds: 30
      volumes:
        - name: secret-elastic-tls
          secret:
            defaultMode: 420
            secretName: elastic-tls
        - name: kibana-proxy
          secret:
            defaultMode: 420
            secretName: elastic-proxy
---
apiVersion: v1
kind: Route
metadata:
  name: kibana
  namespace: elastic
spec:
  tls:
    termination: passthrough # Kibana is the TLS endpoint
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: kibana-kb-http

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: elastic-tls
  labels:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: kibana
  name: kibana-kb-http
  namespace: elastic
spec:
  ports:
  - name: https
    port: 5601
    protocol: TCP
    targetPort: 5601
  - name: proxy
    port: 3000
    protocol: TCP
    targetPort: oaproxy
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: kibana
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
data:
  session_secret: XXXXXXXXXXXXXXXX
kind: Secret
metadata:
  name: elastic-proxy
  namespace: elastic
type: Opaque

---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.kibana: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"kibana"}}'
  name: elastic-sa
  namespace: elastic
automountServiceAccountToken: false

I am using OpenShift version 4.6 with the latest oauth-proxy image.
Registry: registry.redhat.io

Repository: openshift4/ose-oauth-proxy

Error message from the kibana-proxy pod:

oc logs kibana-kb-774b7d5548-ntmsj -n elastic kibana-proxy
2021/05/26 16:57:07 main.go:140: Invalid configuration:
  cannot read client-secret-file: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
  missing setting: client-id
  missing setting: client-secret

I have also tried ECK with Openshift oauth · Issue #2753 · elastic/cloud-on-k8s · GitHub but without success. Please help me.
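
One thing stands out in the manifests above, assuming the same root cause as in issue #2753: the elastic-sa ServiceAccount is created with automountServiceAccountToken: false, and the Kibana podTemplate does not override it, so the token the proxy is told to read via -client-secret-file is never mounted. A sketch of the change that should restore the mount (the pod-level field takes precedence over the ServiceAccount-level one):

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic
spec:
  podTemplate:
    spec:
      # explicitly re-enable the token mount; the pod-level setting overrides
      # both the ServiceAccount's automountServiceAccountToken: false and any
      # default applied by the operator
      automountServiceAccountToken: true
      serviceAccountName: elastic-sa

Removing automountServiceAccountToken: false from the ServiceAccount alone may not be enough if the operator also disables the automount at the pod level, so the podTemplate override is the safer change.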