I'm running Filebeat as a DaemonSet on Kubernetes using the Helm 3 stable/filebeat chart. If you're fine with that setup, you can include Kubernetes metadata in your index names pretty easily. First, you'll need to enable the add_kubernetes_metadata processor (https://www.elastic.co/guide/en/beats/filebeat/6.8/add-kubernetes-metadata.html):
processors:
  - add_kubernetes_metadata:
      in_cluster: true
From there, slap a label on your pods; I'll use the arbitrary example of a label called "index-suffix". In the index specification in your Filebeat config, you can then have something like:
...
index: "filebeat-%{[kubernetes.pod.labels.index-suffix]}-MoreIndexSuffixes"
...
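For completeness, the pod side might look something like this (a minimal sketch; the Deployment name, labels other than index-suffix, and image are all placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        # Surfaced by add_kubernetes_metadata as kubernetes.pod.labels.index-suffix
        index-suffix: my-app-logs
    spec:
      containers:
        - name: my-app
          image: my-app:latest      # placeholder image
```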
I'd recommend you set this value with a fallback, though, using the format string's default-value syntax. Consider instead:
...
index: "filebeat-%{[kubernetes.pod.labels.index-suffix]:Default}-MoreIndexSuffixes"
...
This will use the "index-suffix" label for a given pod if it can find it, and the literal string "Default" if it can't.
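One caveat from memory, so double-check it against your Filebeat version: when you set a custom index name on the Elasticsearch output, Filebeat also expects you to override the template name and pattern to match. Something along these lines (the host is a placeholder):

```yaml
output.elasticsearch:
  hosts: ["elasticsearch:9200"]     # placeholder host
  index: "filebeat-%{[kubernetes.pod.labels.index-suffix]:Default}-MoreIndexSuffixes"

# Required when overriding the default index name
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
```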
Be warned: I've seen some weird interactions with freshly started pods when using Filebeat's container input, and you'll presumably hit the same issue if you're watching a specific directory directly. The problem I ran into was that Filebeat didn't always have a freshly started pod's metadata yet, but it did have immediate access to the log file, so the first few lines of each log would land in the default index. My fix was to use Filebeat's Kubernetes autodiscover provider (https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html#_kubernetes), so that Kubernetes tells Filebeat when and where to look for logs.
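As a rough sketch of what that autodiscover setup can look like (adapted from the docs linked above; the log path assumes the standard /var/log/containers layout, and NODE_NAME assumes you've exposed the node name to the pod via the downward API):

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}            # assumes NODE_NAME is set via the downward API
      templates:
        - config:
            - type: container
              paths:
                # Standard kubelet log location for each discovered container
                - /var/log/containers/*${data.kubernetes.container.id}.log
```

With this, Filebeat only starts tailing a container's log once Kubernetes has told it about the container, so the pod metadata (including the labels used in the index name) is available from the first line.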