Hey all,
I'm deploying Filebeat as a DaemonSet in a Kubernetes cluster, and I see the following error message in the Filebeat pod logs:
{
  "log.level": "error",
  "@timestamp": "2023-07-10T09:18:38.720Z",
  "log.logger": "input",
  "log.origin": {
    "file.name": "input-logfile/manager.go",
    "file.line": 182
  },
  "message": "filestream input with ID 'container-abc' already exists, this will lead to data duplication, please use a different ID",
  "service.name": "filebeat",
  "ecs.version": "1.6.0"
}
As I understand it, filestream IDs are scoped to the individual Filebeat instance (i.e., the node it runs on), not to the whole cluster. So the pods of the DaemonSet, which naturally all use the same filestream ID (container-abc), shouldn't interfere with each other, right?
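For context, the relevant input in my filebeat.yml looks roughly like this (a simplified sketch, with paths and parser settings from memory; container-abc stands in for the actual ID):

    # Simplified sketch of the filestream input each DaemonSet pod runs;
    # all pods mount the same ConfigMap, so they all get the same static ID.
    filebeat.inputs:
      - type: filestream
        id: container-abc
        paths:
          - /var/log/containers/*.log
        parsers:
          - container: ~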
What else could be causing this? So far I also haven't seen any actual data duplication in the log output.
Kind regards,
Moritz