It looks like under some conditions Filebeat on Kubernetes fails to add Kubernetes metadata to container log events.
I think there's a race between the log harvester picking up a new container's logs and the Kubernetes pod watcher populating the metadata cache with that container.
It's very hard to pin down because it is intermittent and silent unless debug logging is turned on.
I did manage to catch the following debug logs from Filebeat, though:
2023-05-05T02:35:16.382Z Incoming log.file.path value: /var/log/containers/webapp-microapps-7957dc478b-bq5nt_straya_istio-proxy-6d944fc64ee9444d28569c1bf52adebf6597bc0263e094702a79199dd13d4480.log
2023-05-05T02:35:16.382Z Using container id: 6d944fc64ee9444d28569c1bf52adebf6597bc0263e094702a79199dd13d4480
2023-05-05T02:35:16.382Z Using the following index key 6d944fc64ee9444d28569c1bf52adebf6597bc0263e094702a79199dd13d4480
2023-05-05T02:35:16.382Z Index key 6d944fc64ee9444d28569c1bf52adebf6597bc0263e094702a79199dd13d4480 did not match any of the cached resources
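For illustration, the flow visible in those debug lines boils down to extracting the container ID from the log path and using it as the index key into the cache that the watcher populates; if the watcher hasn't indexed the container yet, the lookup misses. A minimal sketch of that flow, with hypothetical names (this is not the actual Filebeat code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
	"sync"
)

// extractContainerID pulls the container ID out of a kubelet log path of the
// form /var/log/containers/<pod>_<namespace>_<container>-<containerID>.log.
func extractContainerID(logPath string) string {
	base := strings.TrimSuffix(filepath.Base(logPath), ".log")
	// The container ID is the final hyphen-separated segment (64 hex chars).
	idx := strings.LastIndex(base, "-")
	if idx < 0 || idx+1 >= len(base) {
		return ""
	}
	return base[idx+1:]
}

// metadataCache is a stand-in for the cache the pod watcher populates.
type metadataCache struct {
	mu    sync.RWMutex
	byCID map[string]map[string]string // container ID -> pod metadata
}

func (c *metadataCache) lookup(cid string) (map[string]string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	meta, ok := c.byCID[cid]
	return meta, ok
}

func main() {
	cache := &metadataCache{byCID: map[string]map[string]string{}}
	path := "/var/log/containers/webapp-microapps-7957dc478b-bq5nt_straya_istio-proxy-6d944fc64ee9444d28569c1bf52adebf6597bc0263e094702a79199dd13d4480.log"

	cid := extractContainerID(path)
	if _, ok := cache.lookup(cid); !ok {
		// The watcher has not indexed this container yet, so the event is
		// shipped without any kubernetes.* fields -- this is the failure mode.
		fmt.Printf("index key %s did not match any cached resources\n", cid)
	}
}
```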
In a large environment these log events are essentially lost, because without the Kubernetes metadata there is nothing to tie them back to a specific pod.
It would be good to have a mechanism to queue or retry the metadata lookup for a bounded period, giving the watcher time to populate the data, or alternatively to block and actively refresh the cache before giving up.
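A rough sketch of the retry idea: poll the cache for a short, bounded window instead of giving up on the first miss. The function names, cache shape, interval and timeout below are all illustrative assumptions, not Filebeat internals.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"time"
)

// lookupWithRetry polls the metadata cache until it has an entry for the
// container or the deadline passes, giving the pod watcher time to index a
// newly started container.
func lookupWithRetry(ctx context.Context,
	lookup func(cid string) (map[string]string, bool),
	cid string, interval, timeout time.Duration) (map[string]string, error) {

	deadline := time.Now().Add(timeout)
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		if meta, ok := lookup(cid); ok {
			return meta, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("metadata still missing for container " + cid)
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-ticker.C:
			// try again on the next tick
		}
	}
}

func main() {
	var mu sync.Mutex
	cache := map[string]map[string]string{}
	cid := "6d944fc64ee9444d28569c1bf52adebf6597bc0263e094702a79199dd13d4480"

	// Simulate the watcher indexing the container ~500ms after the harvester
	// first sees the log file.
	go func() {
		time.Sleep(500 * time.Millisecond)
		mu.Lock()
		cache[cid] = map[string]string{"kubernetes.pod.name": "webapp-microapps-7957dc478b-bq5nt"}
		mu.Unlock()
	}()

	lookup := func(id string) (map[string]string, bool) {
		mu.Lock()
		defer mu.Unlock()
		m, ok := cache[id]
		return m, ok
	}

	meta, err := lookupWithRetry(context.Background(), lookup, cid, 100*time.Millisecond, 5*time.Second)
	fmt.Println(meta, err)
}
```

Bounding the wait keeps a genuinely unknown container (e.g. a stale log file) from stalling the pipeline indefinitely; after the timeout the event could still be shipped without metadata, as it is today.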
This is on Filebeat 8.7.1, FWIW.
Maybe related?