Filebeat not collecting logs from Kubernetes pods in specified namespace

We have a pod running in a namespace called ceo-qa1, and we configured the manifest file to run Filebeat in that namespace. However, we are not seeing the expected backing index .ds-logs-ceo-api* (i.e., the logs-ceo-api data stream) in Kibana Dev Tools.
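For reference, this is how we check in Dev Tools. The data stream name logs-ceo-api is what we expect from the data_stream.* fields in the config below; these two requests are the standard Elasticsearch APIs for listing a data stream and its backing indices:

GET _data_stream/logs-ceo-api
GET _cat/indices/.ds-logs-ceo-api*?v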

We did notice a warning that does not appear when we run kubectl create -f on the original manifest file:

$ kubectl create -f filebeat-qa-namespace.yaml
serviceaccount/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
role.rbac.authorization.k8s.io/filebeat created
role.rbac.authorization.k8s.io/filebeat-kubeadm-config created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
rolebinding.rbac.authorization.k8s.io/filebeat created
rolebinding.rbac.authorization.k8s.io/filebeat-kubeadm-config created
configmap/filebeat-config created
Warning: would violate PodSecurity "baseline:latest": host namespaces (hostNetwork=true), hostPath volumes (volumes "varlibdockercontainers", "varlog", "data")
daemonset.apps/filebeat created
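Our reading is that this warning comes from Pod Security Admission: the namespace (or the cluster default) warns on the baseline policy, which a Filebeat DaemonSet necessarily violates (hostNetwork plus hostPath mounts). Whether the pods were actually admitted seems worth verifying before digging into the Filebeat config, roughly:

$ kubectl get daemonset filebeat -n ceo-qa1
$ kubectl get pods -n ceo-qa1 -l k8s-app=filebeat
$ kubectl logs -n ceo-qa1 daemonset/filebeat --tail=50

If the namespace were actually enforcing baseline (not just warning), the pods would be rejected and the namespace would need relabeling, e.g. kubectl label namespace ceo-qa1 pod-security.kubernetes.io/enforce=privileged.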

Here are the relevant lines from the manifest file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: ceo-qa1
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: ceo-qa1
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: ceo-qa1
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: ceo-qa1
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: ceo-qa1
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: ceo-qa1
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: ceo-qa1
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: ceo-qa1
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: ceo-qa1
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: filestream
      id: kubernetes-container-logs
      paths:
        - /var/log/containers/*.log
      fields:
        data_stream.type: logs
        data_stream.dataset: ceo
        data_stream.namespace: api
        app_id: ceo-api-qa1
      parsers:
        - container: ~
      prospector:
        scanner:
          fingerprint.enabled: true
          symlinks: true
      file_identity.fingerprint: ~
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            namespace: ceo-qa1
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
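The ConfigMap above is truncated before the output section, which is a standard output.elasticsearch block. A minimal sketch of its shape, with a placeholder host, is below; note that a custom index string only takes effect when ILM is disabled, and Filebeat refuses to start with a custom index unless setup.template.name and setup.template.pattern are also set:

output.elasticsearch:
  # Placeholder host; each manifest points at exactly one cluster.
  hosts: ["https://qa-elasticsearch.example.com:9200"]
  # Route events using the data_stream.* values set under fields:
  # in the filestream input above, producing logs-ceo-api.
  index: "%{[fields.data_stream.type]}-%{[fields.data_stream.dataset]}-%{[fields.data_stream.namespace]}"

setup.template.name: "logs-ceo"
setup.template.pattern: "logs-ceo-*"
setup.ilm.enabled: false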

The reason we are still trying to deploy Filebeat from at least two different manifest files is that we will eventually need to push logs to:

  1. Development Elasticsearch cluster
  2. QA/Test Elasticsearch cluster (a different host from the development one)

A manifest file only allows one Elasticsearch output host to be configured at a time (Filebeat supports only one enabled output per instance).
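Concretely, the two manifests differ, as far as the output goes, only in the ConfigMap's output block, along these lines (both hostnames are placeholders):

# filebeat-dev-namespace.yaml
output.elasticsearch:
  hosts: ["https://dev-elasticsearch.example.com:9200"]

# filebeat-qa-namespace.yaml
output.elasticsearch:
  hosts: ["https://qa-elasticsearch.example.com:9200"]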