Thanks for the continued follow-up. I did more testing and collected additional details. Here is my full config:
```yaml
logging.level: debug
logging.selectors: "*"

fields:
  "cluster_id": "test"
  "cluster": "test"

filebeat.inputs:
  - type: filestream
    id: k8s-app
    close.on_state_change.inactive: 1m
    close.on_state_change.removed: true
    ignore_inactive: since_last_start
    clean_inactive: 2h
    ignore_older: 70m
    paths:
      - '/var/log/containers/*.log'
    parsers:
      - container: ~
      - multiline:
          type: pattern
          pattern: '^\s'
          negate: false
          match: after
          max_lines: 500
          timeout: 1s
    prospector:
      scanner:
        fingerprint.enabled: true
        symlinks: true
        exclude_files: ['filebeat-.*\.log']
    file_identity.fingerprint: ~

processors:
  - add_kubernetes_metadata:
      #host: ${NODE_NAME}
      #matchers:
      #  - logs_path:
      #      logs_path: "/var/log/containers/"
  - drop_event:
      when:
        or:
          - and:
              - equals:
                  kubernetes.namespace: "kube-system"
              - not:
                  equals:
                    kubernetes.container.name: "controller"
          - equals: { kubernetes.namespace: "default" }
          - equals: { kubernetes.namespace: "kube-vm" }
          - equals: { kubernetes.namespace: "kube-node-lease" }
          - equals: { kubernetes.namespace: "gitlab-runner" }
          - equals: { kubernetes.namespace: "kube-flannel" }
          - equals: { kubernetes.namespace: "argocd" }
          - equals: { kubernetes.container.name: "logstash" }
          - equals: { kubernetes.container.name: "filebeat" }

output.kafka:
  enabled: true
  hosts: ["ip1:9092","ip2:9092","ip3:9092"]
  topic: "no-topic"
  topics:
    - topic: "%{[kubernetes.namespace]}"
      when.has_fields: ['kubernetes.namespace']
  key: '%{[kubernetes.pod.uid]}'
  required_acks: 1
  worker: 10
  compression: gzip
  max_message_bytes: 10000000
```
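To illustrate my understanding of the routing above: events that carry `kubernetes.namespace` should match the `topics` rule and go to a per-namespace topic, while everything else should fall back to the static `topic` (`no-topic`). A rough Python sketch of that selection logic (this is my mental model, not the actual Beats code; the function name is made up):

```python
def select_topic(event, rules, fallback=None):
    """Return the topic for an event: first matching rule, else the fallback."""
    for rule in rules:
        # `when.has_fields` only matches if every listed field is present.
        required = rule.get("when_has_fields", [])
        if all(f in event for f in required):
            topic = rule["topic"]
            # Interpolate %{[field]} references from the event.
            for field in required:
                topic = topic.replace("%{[" + field + "]}", str(event[field]))
            if "%{[" not in topic:  # fully resolved
                return topic
    return fallback

rules = [{"topic": "%{[kubernetes.namespace]}",
          "when_has_fields": ["kubernetes.namespace"]}]

# Event with metadata -> routed to its namespace topic.
print(select_topic({"kubernetes.namespace": "argocd"}, rules, "no-topic"))  # argocd
# Event without metadata -> falls back to the static topic.
print(select_topic({"message": "hello"}, rules, "no-topic"))                # no-topic
```

So with this config no event should ever be dropped for lack of a topic; the question is why the metadata is missing in the first place.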
If I remove the matcher from `add_kubernetes_metadata`, all logs are sent to the fallback topic (`no-topic`), and none of the events contain any Kubernetes metadata fields (no `kubernetes.*`).

At the same time, I continuously see this debug log:

```
{"log.level":"debug","log.logger":"kubernetes","message":"log.file.path value does not contain matcher's logs_path '/var/lib/docker/containers/', skipping..."}
```

even though my container logs come from:

```
/var/log/containers/*.log
```
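If I read that debug message correctly, the `logs_path` matcher only processes events whose `log.file.path` starts with the configured path, and without explicit config it defaults to `/var/lib/docker/containers/`, which never matches my paths. A minimal sketch of that prefix check (hypothetical function name, just to show my reading of the message):

```python
def matches_logs_path(event_path, configured_logs_path):
    """Sketch of the matcher's path check: only a matching prefix is processed."""
    return event_path.startswith(configured_logs_path)

# Hypothetical event path in the kubelet layout my cluster uses.
event_path = "/var/log/containers/mypod_default_app-abc123.log"

print(matches_logs_path(event_path, "/var/lib/docker/containers/"))  # False -> "skipping..."
print(matches_logs_path(event_path, "/var/log/containers/"))         # True  -> metadata can be attached
```

That would explain why no `kubernetes.*` fields appear when the matcher block is commented out.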
**Current config where metadata is missing** (working, but without the matcher):
```yaml
processors:
  - add_kubernetes_metadata:
      #host: ${NODE_NAME}
      #matchers:
      #  - logs_path:
      #      logs_path: "/var/log/containers/"
```
With this config, all logs go to `no-topic`, because `kubernetes.namespace` does not exist on the events.
**When I restore the matcher:**
```yaml
processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
```
```yaml
output.kafka:
  enabled: true
  hosts: ["ip1:9092","ip2:9092","ip3:9092"]
  topic: '%{[kubernetes.namespace]}'
```
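Note that in this variant there is only a dynamic `topic` and no static fallback, so if the referenced field is ever missing the format string cannot be resolved and the event ends up with no topic at all. A rough sketch of that failure mode as I understand it (again my model, not the Beats implementation):

```python
import re

def interpolate_topic(fmt, event):
    """Resolve %{[field]} references; return None if any referenced field is missing."""
    def repl(m):
        value = event.get(m.group(1))
        if value is None:
            raise KeyError(m.group(1))
        return str(value)
    try:
        return re.sub(r"%\{\[([^\]]+)\]\}", repl, fmt)
    except KeyError:
        return None  # -> "Dropping event: no topic could be selected"

print(interpolate_topic("%{[kubernetes.namespace]}", {"kubernetes.namespace": "gitlab-runner"}))
print(interpolate_topic("%{[kubernetes.namespace]}", {"message": "no metadata here"}))  # None
```

So any event that slips through without `kubernetes.namespace` would be dropped rather than routed, which matches the error below.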
I again start seeing:

```
Dropping event: no topic could be selected
```

and the frequency is very regular, roughly once every 20 seconds (timestamps below), so it is not happening only right after startup.
```
2025-12-02T04:31:23.375Z
2025-12-02T04:31:33.382Z
2025-12-02T04:31:53.390Z
2025-12-02T04:32:13.391Z
2025-12-02T04:32:33.393Z
2025-12-02T04:32:53.397Z
2025-12-02T04:33:03.399Z
2025-12-02T04:33:43.411Z
2025-12-02T04:33:53.405Z
...
```
It looks like something periodically fails to assign metadata → topic interpolation fails → event is dropped.
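For what it's worth, computing the gaps between the timestamps above shows they are actually a mix of roughly 10, 20 and 40 seconds rather than a strict 20-second period (same timestamps hard-coded):

```python
from datetime import datetime

stamps = [
    "2025-12-02T04:31:23.375Z", "2025-12-02T04:31:33.382Z",
    "2025-12-02T04:31:53.390Z", "2025-12-02T04:32:13.391Z",
    "2025-12-02T04:32:33.393Z", "2025-12-02T04:32:53.397Z",
    "2025-12-02T04:33:03.399Z", "2025-12-02T04:33:43.411Z",
    "2025-12-02T04:33:53.405Z",
]
times = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ") for s in stamps]
# Seconds between consecutive occurrences, rounded to the nearest second.
gaps = [round((b - a).total_seconds()) for a, b in zip(times, times[1:])]
print(gaps)  # [10, 20, 20, 20, 20, 10, 40, 10]
```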
One more update that might be helpful: I checked another Kubernetes cluster that still runs Filebeat 7.10.2. There, the `Dropping event: no topic could be selected` error also occurs, but not periodically. Example logs from that cluster:
```
2025-11-29T20:37:26.961Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
2025-11-30T00:42:27.675Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
2025-11-30T01:23:08.846Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
2025-11-30T05:13:10.348Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
2025-11-30T05:44:00.542Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
2025-11-30T08:48:21.226Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
2025-11-30T09:29:01.907Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
2025-11-30T09:29:01.908Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
2025-11-30T14:46:04.003Z ERROR [kafka] kafka/client.go:147 Dropping event: no topic could be selected
```
So the "once every ~20 seconds" pattern I'm currently seeing on 9.2.1 may not be fundamental: the frequency might depend on workload or metadata availability rather than on a fixed scheduler. I wanted to clarify that to avoid misleading conclusions.
For me, `Dropping event: no topic could be selected` is 100% reproducible, regardless of restarts or load. Given how common the K8s + Kafka + metadata-based topic routing setup is, I am very surprised I don't see other users reporting it.
I’m happy to test any config suggestion, collect more debug logs, or run a custom build if needed.
Thanks again for the time and help!