ECK Filebeat: duplicate filestream input ID for containers in the same namespace as Filebeat

Hello team.

I encountered a problem with Beats/Filebeat (ECK installation, Kubernetes autodiscover mode):

{"log.level":"error","@timestamp":"2025-10-27T11:34:59.854Z","log.logger":"input","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile.(*InputManager).Create","file.name":"input-logfile/manager.go","file.line":201},"message":"filestream input ID 'container-fb193882c567f2d9c6d41a3c5db2d183123a24202f3add98ae51dec7b1e21cbd' is duplicated: input will NOT start","service.name":"filebeat","input.cfg":"{\n  \"_fileset_name\": \"server\",\n  \"_module_name\": \"elasticsearch\",\n  \"exclude_files\": [\n    \".gz$\",\n    \"_slowlog.log$\",\n    \"_access.log$\",\n    \"_deprecation.log$\",\n    \"gc.log$\",\n    \"_audit.log$\"\n  ],\n  \"id\": \"container-fb193882c567f2d9c6d41a3c5db2d183123a24202f3add98ae51dec7b1e21cbd\",\n  \"multiline\": {\n    \"match\": \"after\",\n    \"negate\": true,\n    \"pattern\": \"^(\\\\[[0-9[]{4}-[0-9[]{2}-[0-9]{2}|{)\"\n  },\n  \"parsers\": [\n    {\n      \"container\": {\n        \"format\": \"auto\",\n        \"stream\": \"all\"\n      }\n    }\n  ],\n  \"path\": {\n    \"config\": \"/usr/share/filebeat\",\n    \"data\": \"/usr/share/filebeat/data\",\n    \"home\": \"/usr/share/filebeat\",\n    \"logs\": \"/usr/share/filebeat/logs\"\n  },\n  \"paths\": [\n    \"/var/log/containers/*-fb193882c567f2d9c6d41a3c5db2d183123a24202f3add98ae51dec7b1e21cbd.log\"\n  ],\n  \"pipeline\": \"filebeat-9.3.0-elasticsearch-server-pipeline\",\n  \"processors\": [\n    {\n      \"add_locale\": {\n        \"when\": {\n          \"not\": {\n            \"regexp\": {\n              \"message\": \"^{\"\n            }\n          }\n        }\n      }\n    },\n    {\n      \"add_fields\": {\n        \"fields\": {\n          \"ecs\": {\n            \"version\": \"1.12.0\"\n          }\n        },\n        \"target\": \"\"\n      }\n    }\n  ],\n  \"prospector\": {\n    \"scanner\": {\n      \"symlinks\": true\n    }\n  },\n  \"type\": \"filestream\"\n}","ecs.version":"1.6.0"}
...

{"log.level":"error","@timestamp":"2025-10-27T11:34:59.854Z","log.logger":"autodiscover.autodiscover.cfgfile","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cfgfile.(*RunnerList).Reload","file.name":"cfgfile/list.go","file.line":136},"message":"Error creating runner from config: failed to create input: ErrNonReloadable: filestream input with ID 'container-fb193882c567f2d9c6d41a3c5db2d183123a24202f3add98ae51dec7b1e21cbd' already exists, this will lead to data duplication, please use a different ID","service.name":"filebeat","ecs.version":"1.6.0"}

:red_circle: The errors only appear for containers that are in the same Kubernetes namespace (elastic-stack) as Filebeat (Elasticsearch, Kibana, but not Logstash :upside_down_face:).

elasticsearch-es-default-0_elastic-stack_elastic-internal-init-filesystem-f20f45c960769c6024184c2cc6f34c7f65a9efe645ebe4b801798b5f26d03e30.log

elasticsearch-es-default-0_elastic-stack_elastic-internal-suspend-fb193882c567f2d9c6d41a3c5db2d183123a24202f3add98ae51dec7b1e21cbd.log

elasticsearch-es-default-0_elastic-stack_elasticsearch-0ac86d335741117f4517a0cdb90c317d4ca664ca69f6cf777bdd15e164af190d.log

es-quickstart-eck-kibana-kb-bdb69cf-g8nwg_elastic-stack_kibana-45b377721834d4b127047a432fbcec7d4c5ba2c08c0e80d81b4538b82bac66b1.log

Cluster,

ECK
elastic-operator 3.1.0
eck-stack 0.18.0
elasticsearch/filebeat version 9.3.0

Kubernetes
v1.21.6

Filebeat configuration,

config:
  filebeat.registry.flush: 10s
  filebeat.autodiscover:
    providers:
      - type: kubernetes
        include_annotations: ["abc"]
        node: ${NODE_NAME}
        hints.enabled: true
        hints.default_config:
          type: filestream
          id: container-${data.kubernetes.container.id}
          prospector.scanner.symlinks: true
          parsers:
            - container:
                format: auto
          paths:
            - /var/log/containers/*-${data.kubernetes.container.id}.log
  output.logstash:
    hosts: ["es-quickstart-eck-logstash-ls-beats.elastic-stack.svc:5044"]

I would appreciate any help!

Hi @kee8veiN ,

Are you using any modules? Is that really the full config, or could there be something you left out?
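I ask because the input.cfg in your first error shows "_module_name": "elasticsearch" and "_fileset_name": "server", so a module fileset is being generated for that container. With hints.enabled: true that usually comes from co.elastic.logs annotations on the pod. A hypothetical sketch (not taken from your cluster) of annotations that would enable that module via hints:

# Hypothetical pod annotations (placeholders, not read from your cluster).
# With hints.enabled: true, these would make Filebeat run the elasticsearch
# module's server fileset for the annotated container.
metadata:
  annotations:
    co.elastic.logs/module: elasticsearch
    co.elastic.logs/fileset: server

You could check whether your Elasticsearch and Kibana pods carry annotations like these.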

We know there is an issue with k8s autodiscover and modules: Autodiscover causes `filestream input with ID 'xxx' already exists` when used with Nignx or other modules that enables multiple filesets · Issue #44443 · elastic/beats · GitHub. There are a few workaround options described there. Perhaps one of them can help you?
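For example, one possible shape (a sketch only; the container-name condition and the ID prefixes below are placeholders to adapt to your pods) is to configure the module through explicit templates rather than hints.default_config, giving each fileset input its own ID:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - condition:
            equals:
              kubernetes.container.name: nginx   # placeholder condition
          config:
            - module: nginx
              access:
                input:
                  type: filestream
                  # a distinct ID per fileset avoids the duplicate-ID error
                  id: container-nginx-access-${data.kubernetes.container.id}
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
                  parsers:
                    - container:
                        format: auto
                  prospector.scanner.symlinks: true
              error:
                input:
                  type: filestream
                  id: container-nginx-error-${data.kubernetes.container.id}
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
                  parsers:
                    - container:
                        format: auto
                  prospector.scanner.symlinks: true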

There is also Filebeat hints based autodiscover does not work well with modules · Issue #47190 · elastic/beats · GitHub, which is a more generic and detailed version of the same issue.

Hello @AndersonQ ,

We know there is an issue with k8s autodiscover and modules: Autodiscover causes `filestream input with ID 'xxx' already exists` when used with Nignx or other modules that enables multiple filesets · Issue #44443 · elastic/beats · GitHub. There are a few workaround options described there. Perhaps one of them can help you?

Thank you for the reply, it works!

Are you using any modules?

No

The current Filebeat configuration,

config:
  filebeat.autodiscover:
    providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
      - config:
          - module: nginx
            access:
              input:
                type: filestream
                id: container-${data.kubernetes.container.id}
                paths:
                  - /var/log/containers/*-${data.kubernetes.container.id}.log
                parsers:
                  - container:
                      format: "auto"
                      stream: "stdout"
                prospector.scanner.symlinks: true
  output.logstash:
    hosts: ["es-quickstart-eck-logstash-ls-beats.elastic-stack.svc:5044"]

Good to know it helped :slight_smile: