How to migrate properly to filestream (8.15.1)

Hello,

I can't manage to migrate from the log input to the filestream input.

Can you give me a hand, please?

As soon as I switch away from my current configuration, I stop receiving logs in Kibana.

Here is my current configuration, which works but seems deprecated (I'm using a Helm installation of Filebeat as a DaemonSet in Kubernetes).

Current:

  filebeatConfig:
    filebeat.yml: |
      logging.level: ${debug_level}
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: $${NODE_NAME}
            hints.enabled: true
            hints.default_config:
              type: container
              paths:
                - /var/log/containers/*$${data.kubernetes.container.id}.log
              exclude_lines: '^[[:space:]]*$'
              multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
              multiline.negate: false
              multiline.match: after
      processors:
        - add_cloud_metadata:
        - add_host_metadata:
        - decode_json_fields:
            fields: ["message"]
            target: "fb"
            overwrite_keys: true
        - add_fields:
            target: ''
            fields:
              gkeclustername: ${gke_cluster}
              environment: ${environment}

      cloud.id: '$${ELASTIC_CLOUD_ID}'
      cloud.auth: '$${ELASTIC_CLOUD_AUTH}'

      setup.template.enabled: true
      setup.ilm.rollover_alias: "${index_prefix}-%%{[agent.version]}"
      setup.template.name: "${index_prefix}-%%{[agent.version]}"
      setup.template.pattern: "${index_prefix}-%%{[agent.version]}*"

      setup.template.settings:
        index.number_of_shards: ${number_of_shards}
        index.number_of_replicas: ${number_of_replicas}

      output.elasticsearch:
        hosts: ['$${ELASTICSEARCH_HOST:elasticsearch}:$${ELASTICSEARCH_PORT:9200}']
        index: "${index_prefix}-%%{[agent.version]}"

New one:

  filebeatConfig:
    filebeat.yml: |
      logging.level: ${debug_level}
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: $${NODE_NAME}
            hints.enabled: true
            hints.default_config:
              type: filestream
              id: container-log-$${data.kubernetes.pod.name}-$${data.kubernetes.container.id}
              paths:
                - /var/log/containers/*$${data.kubernetes.container.id}.log
              exclude_lines: '^[[:space:]]*$'
              multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
              multiline.negate: false
              multiline.match: after
      processors:
        - add_host_metadata:
        - decode_json_fields:
            fields: ["message"]
            target: "fb"
            overwrite_keys: true
            max_depth: 4
        - add_fields:
            target: ''
            fields:
              gkeclustername: ${gke_cluster}
              environment: ${environment}

      cloud.id: '$${ELASTIC_CLOUD_ID}'
      cloud.auth: '$${ELASTIC_CLOUD_AUTH}'

      setup.template.enabled: true
      setup.ilm.rollover_alias: "${index_prefix}-%%{[agent.version]}"
      setup.template.name: "${index_prefix}-%%{[agent.version]}"
      setup.template.pattern: "${index_prefix}-%%{[agent.version]}*"

      setup.template.settings:
        index.number_of_shards: ${number_of_shards}
        index.number_of_replicas: ${number_of_replicas}

      output.elasticsearch:
        hosts: ['$${ELASTICSEARCH_HOST:elasticsearch}:$${ELASTICSEARCH_PORT:9200}']
        index: "${index_prefix}-%%{[agent.version]}"

I don't understand why the logs are not being sent. I haven't seen any communication problems in the Filebeat logs, and on the Kibana side they are not being written to any other index either; they simply don't arrive.

Should I change anything?

Thank you very much

Hi @Dani_Perez,

1 - Was the previous configuration (with the log input) working correctly, and were all logs received in Kibana? If so, was the switch to filestream the only change made?

2 - Did you enable the debug log level in Filebeat to check for any error or warning messages? Do the Filebeat logs show any activity with filestream or indicate that the files are being monitored?

Hi Alex,

  1. Yes, the previous configuration works correctly; the only change I'm making is the switch to filestream.

  2. Yes, I tried with debug enabled, but there were no errors or warnings. I'm not sure the logs show that the files are being monitored...

Some logs:

filebeat-staging-filebeat-2lggz filebeat 2024-09-25T15:02:11.616758166+02:00 {"log.level":"debug","@timestamp":"2024-09-25T13:02:11.616Z","log.logger":"file_watcher","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/filestream.(*fileWatcher).watch","file.name":"filestream/fswatch.go","file.line":229},"message":"File scan complete","service.name":"filebeat","total":0,"written":0,"truncated":0,"renamed":0,"removed":0,"created":0,"ecs.version":"1.6.0"}
filebeat-staging-filebeat-cmrr4 filebeat 2024-09-25T15:02:11.654623803+02:00 {"log.level":"debug","@timestamp":"2024-09-25T13:02:11.654Z","log.logger":"file_watcher","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/filestream.(*fileWatcher).watch","file.name":"filestream/fswatch.go","file.line":125},"message":"Start next scan","service.name":"filebeat","ecs.version":"1.6.0"}
filebeat-staging-filebeat-cmrr4 filebeat 2024-09-25T15:02:11.654681987+02:00 {"log.level":"debug","@timestamp":"2024-09-25T13:02:11.654Z","log.logger":"file_watcher","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/filestream.(*fileWatcher).watch","file.name":"filestream/fswatch.go","file.line":125},"message":"Start next scan","service.name":"filebeat","ecs.version":"1.6.0"}

filebeat-staging-filebeat-cmrr4 filebeat 2024-09-25T15:02:11.656999852+02:00 {"log.level":"debug","@timestamp":"2024-09-25T13:02:11.656Z","log.logger":"scanner","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/filestream.(*fileScanner).GetFiles","file.name":"filestream/fswatch.go","file.line":388},"message":"cannot create an ingest target for file \"/var/log/containers/tt-frontend-76f6ddf54-6wlzj_default_tt-frontend-b00f7a076136789c3d824598f65885ea3756dab4bf668b1e2564a7c4b72ad59e.log\": file \"/var/log/containers/tt-frontend-76f6ddf54-6wlzj_default_tt-frontend-b00f7a076136789c3d824598f65885ea3756dab4bf668b1e2564a7c4b72ad59e.log\" is a symlink and they're disabled","service.name":"filebeat","ecs.version":"1.6.0"}
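The "is a symlink and they're disabled" message above is the likely cause of the missing logs: the files under /var/log/containers/ are symlinks into /var/log/pods/, and unlike the container input, filestream neither follows symlinks by default nor parses the container log format on its own, so the file watcher finds zero files ("File scan complete ... total":0). A sketch of the adjusted hints.default_config, assuming 8.x option names and keeping the $$ escaping used in the configs above:

      hints.default_config:
        type: filestream
        id: container-log-$${data.kubernetes.pod.name}-$${data.kubernetes.container.id}
        paths:
          - /var/log/containers/*$${data.kubernetes.container.id}.log
        # filestream needs an explicit parser for the container log format;
        # the container input applied this automatically
        parsers:
          - container: ~
        prospector:
          scanner:
            # /var/log/containers/*.log are symlinks into /var/log/pods/...,
            # and filestream skips symlinks unless this is enabled
            symlinks: true

The exclude_lines and multiline settings from the original config would sit alongside these options unchanged.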

Some more, this time with the identifier. I thought they would be different with the new configuration, but it's still complaining:

filebeat-staging-filebeat-6r5w5 filebeat 2024-09-25T15:08:47.455957867+02:00 {"log.level":"error","@timestamp":"2024-09-25T13:08:47.455Z","log.logger":"input","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile.(*InputManager).Create","file.name":"input-logfile/manager.go","file.line":174},"message":"filestream input with ID 'container-log-mar-users-5977f56875-p29js-d167a0492001ac917d0b6f89c5ee888454e8b38c028de1292ea5f52ed48d17d4' already exists, this will lead to data duplication, please use a different ID. Metrics collection has been disabled on this input.","service.name":"filebeat","ecs.version":"1.6.0"}
filebeat-staging-filebeat-zk4bn filebeat 2024-09-25T15:08:48.670613631+02:00 {"log.level":"error","@timestamp":"2024-09-25T13:08:48.670Z","log.logger":"input","log.origin":{"function":"github.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile.(*InputManager).Create","file.name":"input-logfile/manager.go","file.line":174},"message":"filestream input with ID 'container-log-tt-documents-6c7bcc4b45-dc94j-21a2704e720632ed3ef7dfc026d43b3444e0590b76eaf16883e7bb2b01ffc38d' already exists, this will lead to data duplication, please use a different ID. Metrics collection has been disabled on this input.","service.name":"filebeat","ecs.version":"1.6.0"}
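On the duplicated-ID errors: filestream requires a unique ID per input, and the error text itself ("already exists, this will lead to data duplication, please use a different ID") shows the same ID being registered twice, usually because autodiscover emits more than one start event for the same container. The template already includes the container ID, which is unique per container, so this points at repeated events rather than a bad template. One hedged variant that keeps the ID minimal and stable:

      hints.default_config:
        type: filestream
        # assumption: the container ID alone is already unique, so dropping
        # the pod name loses no uniqueness; note this does not by itself
        # stop autodiscover from re-emitting the same configuration
        id: container-log-$${data.kubernetes.container.id}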