add_kubernetes_metadata with Elasticsearch on Kubernetes (elasticsearch-operator) does not add k8s fields in Kibana

Hi, I've got a problem with the add_kubernetes_metadata setting in my filebeat.yaml.

My setup uses the Elasticsearch operator (ECK) with Elasticsearch, Metricbeat, Filebeat, and Kibana, deployed as elasticsearch.k8s.elastic.co/v1, beat.k8s.elastic.co/v1beta1, and kibana.k8s.elastic.co/v1 resources.

The problem is that the Kubernetes metadata never shows up in the Filebeat events.

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 8.2.0
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
    processors:
      - add_kubernetes_metadata:
          default_matchers.enabled: false
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: /var/log/containers/
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
          - name: filebeat
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers

But Kibana still shows only the original 17 fields, with no kubernetes.* fields added.
How can I fix this?

Solved! The Beat needs a ServiceAccount bound to a ClusterRole, so the add_kubernetes_metadata processor has permission to query the Kubernetes API for pod metadata.
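
For reference, this is roughly what I added (a minimal sketch based on the ECK Filebeat quickstart; the filebeat name and the default namespace are my own choices, adjust them to your cluster):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat          # assumed name; must match the binding below
  namespace: default      # assumed namespace; use the Beat's namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
  - apiGroups: [""]
    # resources add_kubernetes_metadata needs to read to enrich events
    resources: ["namespaces", "pods", "nodes"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: default
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

Then point the DaemonSet pods at that ServiceAccount in the Beat's podTemplate spec:

  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true

After redeploying, the kubernetes.* fields appeared in Kibana.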