kubernetes.labels represents a lot of clutter that isn't useful to me. Add Kubernetes metadata | Filebeat Reference [8.12] | Elastic explains how to drop labels from associated resources, but I can't work out how to avoid including labels themselves. Is drop_fields with regex the only way?
I can't get drop_fields with regex to work, either. I was expecting /kubernetes\.labels\..*/ and /agent\..*/ to delete all subkeys under kubernetes.labels and agent, but it doesn't seem to. I don't know how drop_fields interacts with hierarchical entries.
Hello @Ananym ,
Can you try the following block:
- add_resource_metadata:
    namespace:
      enabled: false
    node:
      enabled: false
The above disables metadata enrichment for the node and namespace resources, so the related kubernetes.node.labels.* and kubernetes.namespace_labels.* fields will no longer be added.
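For context, a sketch of where that block can sit in filebeat.yml when the kubernetes autodiscover provider is in use; the placement here is assumed from the docs referenced in this thread:

```yaml
# Hedged sketch: add_resource_metadata as a sub-option of the
# kubernetes autodiscover provider (placement assumed, not verified
# against every Filebeat version).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      add_resource_metadata:
        namespace:
          enabled: false   # stops kubernetes.namespace_labels.* enrichment
        node:
          enabled: false   # stops kubernetes.node.labels.* enrichment
```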
Additionally, if you use the kubernetes provider in Filebeat autodiscover (as described here), you can use:
providers:
  - type: kubernetes
    exclude_labels: ....
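For illustration, a fuller version of that provider block as a sketch; the label names below are hypothetical placeholders, not values from this thread:

```yaml
# Hedged sketch: exclude_labels takes a list of label names to omit
# from the enriched metadata (both label names below are made up).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      exclude_labels:
        - app.kubernetes.io/managed-by   # hypothetical label name
        - pod-template-hash              # hypothetical label name
```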
Also, the following processor should work; I have tested it:
- drop_fields:
    fields:
      - kubernetes.labels
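A slightly more defensive variant of that processor, assuming the ignore_missing option of drop_fields (available in recent Beats versions):

```yaml
# Hedged variant: ignore_missing avoids per-event errors when the
# field is absent (e.g. events from non-Kubernetes inputs).
- drop_fields:
    fields:
      - kubernetes.labels
    ignore_missing: true
```

Note that dropping the parent key kubernetes.labels removes all of its subkeys, so no regex is needed here.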
Let me know which of those helped you.
To be clear, include_labels works for the node metadata; I was just hoping to see an equivalent for pod metadata.
I definitely have kubernetes.labels listed as an entry in my drop_fields processor, but I'm still seeing the labels. Some other fields in that array are being dropped as expected, and drop_fields is definitely positioned after add_kubernetes_metadata.
Please use include_labels at the provider level as well. This will include only the pod labels specified.
See the latest docs here: Autodiscover | Filebeat Reference [master] | Elastic
"If the include_labels config is added to the provider config, then the list of labels present in the config will be added to the event."
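Based on that quoted doc line, a minimal sketch of include_labels at the provider level; the label names are placeholders, not values from this thread:

```yaml
# Hedged sketch: with include_labels set on the provider, only the
# listed pod labels should be added to events (placeholder names).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      include_labels:
        - app          # placeholder label name
        - environment  # placeholder label name
```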