Filebeat 7.6 Kubernetes [OpenShift] - Missing labels

Hi!

We're trying to run Filebeat on OpenShift. We want to create indices based on project labels. Say we have a project label called "system-name". Multiple projects can have the same "system-name" because they are part of the same system, and we don't want a separate index for every single project.

Reading through the docs (https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html), I understood that the Filebeat Kubernetes provider should expose these labels under "data.kubernetes.labels":

{
  "host": "172.17.0.21",
  "port": 9090,
  "kubernetes": {
    "container": {
      "id": "bb3a50625c01b16a88aa224779c39262a9ad14264c3034669a50cd9a90af1527",
      "image": "prom/prometheus",
      "name": "prometheus"
    },
    "labels": {
      "project": "prometheus",
      ...
    },
    "namespace": "default",
    "node": {
      "name": "minikube"
    },
    "pod": {
      "name": "prometheus-2657348378-k1pnh"
    }
  }
}

Sadly, after shipping the whole "data.kubernetes" object, it turns out we do get "data.kubernetes.pod.labels" but not "data.kubernetes.labels".
Is that a known issue on OpenShift?
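For reference, the linked docs also describe template-based autodiscover, where a condition can match directly on a pod label instead of relying on hints. A minimal sketch, assuming a pod label literally named "system-name" with a hypothetical value "billing":

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - condition:
            equals:
              # assumed label name and value, adjust to your projects
              kubernetes.labels.system-name: billing
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.container.id}.log
```

If the label never shows up under kubernetes.labels in the first place, this condition will of course never match either, so it is mainly useful as a cross-check.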

Also, we are using hints-based autodiscover; the config looks like this:

 filebeat.autodiscover:
   providers:
     - type: kubernetes
       node: ${NODE_NAME}
       hints.enabled: true
       hints.default_config:
         type: container
         paths:
           - /var/log/containers/*${data.kubernetes.container.id}.log
         processors:
           - drop_event:
               when:
                 equals:
                   kubernetes.namespace: kube-system
           - add_labels:
               labels:
                 osh_cluster: test
                 namespace_all: '${data.kubernetes}'
                 # sadly, no "data.kubernetes.labels" are received by Elasticsearch
 filebeat.modules:
   - module: haproxy
     log:
       enabled: true
       var.paths: ["/var/log/haproxy.log"]
       var.input: "file"

 processors:
   - add_cloud_metadata:
   - add_host_metadata:

 cloud.id: ${ELASTIC_CLOUD_ID}
 cloud.auth: ${ELASTIC_CLOUD_AUTH}

 output.logstash:
   hosts: ["#"]
   loadbalance: true
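As an aside, once the label does arrive in the event, it could drive the index name directly when shipping to Elasticsearch instead of Logstash. A sketch, assuming the label ends up as kubernetes.labels.system-name and using a placeholder host:

```yaml
output.elasticsearch:
  hosts: ["https://elasticsearch.example:9200"]  # placeholder host
  # one index per system-name label; "unknown" when the label is missing
  index: "logs-%{[kubernetes.labels.system-name]:unknown}-%{+yyyy.MM.dd}"

# a custom index name requires overriding the template settings as well
setup.template.name: "logs"
setup.template.pattern: "logs-*"
```

With the Logstash output from the config above, the equivalent routing would have to happen in the Logstash pipeline instead.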

Try enabling debug logging for all selectors and see whether the Kubernetes provider reports any problems.
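The debug output can also be narrowed to the relevant selectors instead of enabling all of them, for example:

```yaml
logging.level: debug
# limit noise to the components involved here
logging.selectors: ["kubernetes", "autodiscover"]
```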

Sadly, no errors were recorded, and Filebeat only shows things we already know, such as the lists of labels it is going to add to each event. Those lists, of course, do not contain the labels we are looking for (the project/namespace labels).
