Loss of logs while migrating from log input to filestream input

Hi all,
1. We are trying to migrate from the existing log input to the filestream input in Filebeat, for the case where autodiscover is disabled.
2. We are using Filebeat, Logstash, and OpenSearch as 3PPs in our microservices.
3. This is the template file generated when using the log input:

data:
  filebeat.yml: |
    filebeat.inputs:
    - type: log
      paths:
      - /var/lib/docker.log
      - /var/log/pods.log
      fields:
        logplane: "adp-app-logs"
      fields_under_root: true
      close_timeout: "5m"
      processors:
          target_prefix: "kubernetes"
          ignore_failure: true
      - drop_fields:
            fields:
              - "kubernetes.log.file.name"
            ignore_missing: true
    output.logstash:
      hosts: "lt:1234"
      ssl.certificate_authorities: "ca.crt"
      ssl.certificate: "${CERT}"
      ssl.key: "${KEY}"
      ssl.verification_mode: "full"
      ssl.renegotiation: "freely"
      ssl.supported_protocols: ["TLSv1.2", "TLSv1.3"]
      ssl.cipher_suites: []
      bulk_max_size: 2048
      worker: 1
      pipelining: 0
      ttl: 30
      queue.mem:
        flush.timeout: 1s
    filebeat.registry.flush: 5s
    logging.level: "info"
    logging.metrics.enabled: false
    http.enabled: true
    http.host: localhost
    http.port: 1234

4. This is the template file generated when using the filestream input:

data:
  filebeat.yml: |
    filebeat.inputs:
    - type: filestream
      paths:
      - /var/lib/docker.log
      - /var/log/pods.log
      fields:
        logplane: "adp-app-logs"
      id: my_ids_1
      take_over: true
      enabled: true
      fields_under_root: true
      close_timeout: "5m"
      processors:
          target_prefix: "kubernetes"
          ignore_failure: true
      - drop_fields:
            fields:
              - "kubernetes.log.file.name"
            ignore_missing: true
    output.logstash:
      hosts: "lt:1234"
      ssl.certificate_authorities: "ca.crt"
      ssl.certificate: "${CERT}"
      ssl.key: "${KEY}"
      ssl.verification_mode: "full"
      ssl.renegotiation: "freely"
      ssl.supported_protocols: ["TLSv1.2", "TLSv1.3"]
      ssl.cipher_suites: []
      bulk_max_size: 2048
      worker: 1
      pipelining: 0
      ttl: 30
      queue.mem:
        flush.timeout: 1s
    filebeat.registry.flush: 5s
    logging.level: "info"
    logging.metrics.enabled: false
    http.enabled: true
    http.host: localhost
    http.port: 1234

I have tested several scenarios of sending logs to OpenSearch at different rates; please find the test results below for better understanding.

| Input type | Total logs sent from log producer | Logs per sec | Total duration of sending logs | Logs missing (Yes/No) | Loss of logs count |
|---|---|---|---|---|---|
| filestream | 10 | 1 | 10 s | No | 0 |
| filestream | 100 | 1 | 100 s | No | 0 |
| filestream | 1,000 | 10 | 100 s | No | 0 |
| filestream | 36,000 | 100 | 6 min | No | 0 |
| filestream | 90,000 | 100 | 15 min | No | 0 |
| filestream | 180,000 | 100 | 30 min | No | 0 |
| filestream | 360,000 | 200 | 30 min | Yes | 208,640 |
| filestream | 360,000 | 200 | 30 min | Yes | 50,390 |
| filestream | 270,000 | 150 | 30 min | Yes | 75,984 |
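
For clarity, the loss counts above are simply the difference between what the log producer sent and what ends up in OpenSearch. A minimal sketch of that arithmetic (the "received" figure here is back-calculated from the table row, not a separate measurement):

# Sanity check of the table: expected = rate * duration, loss = expected - received.
def expected_total(rate_per_sec: int, duration_sec: int) -> int:
    return rate_per_sec * duration_sec

def loss(expected: int, received_in_opensearch: int) -> int:
    return expected - received_in_opensearch

expected = expected_total(200, 30 * 60)   # 360,000 logs produced in 30 minutes
received = 151_360                        # back-calculated: 360,000 - 208,640
print(loss(expected, received))           # 208,640 logs missing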

We would like to know why this loss of logs is happening, i.e., why so many logs are not reaching OpenSearch via Logstash at the higher rates.
If you need any further stats or other info, I am happy to provide them :slight_smile:
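
For reference, this is how I plan to pull Filebeat's own counters from the monitoring HTTP endpoint that is already enabled in the configs above (http.host/http.port), to see whether events are dropped inside the Filebeat pipeline or never acknowledged by the Logstash output. This is only a minimal sketch; the JSON field names are my assumption about the /stats payload and may differ between Filebeat versions:

# Minimal sketch: read Filebeat's internal counters from the monitoring HTTP
# endpoint enabled above (http.host: localhost, http.port: 1234). Field names
# are assumed from the /stats payload and may vary by Filebeat version.
import json
import urllib.request

def filebeat_stats(host: str = "localhost", port: int = 1234) -> dict:
    with urllib.request.urlopen(f"http://{host}:{port}/stats") as resp:
        return json.load(resp)

stats = filebeat_stats()
pipeline = stats["libbeat"]["pipeline"]["events"]
output = stats["libbeat"]["output"]["events"]

# Events published by the filestream input vs. events acknowledged/failed by
# the Logstash output; a growing gap here points at where the loss happens.
print("pipeline published:", pipeline.get("published"))
print("pipeline dropped:  ", pipeline.get("dropped"))
print("output acked:      ", output.get("acked"))
print("output failed:     ", output.get("failed"))
print("output dropped:    ", output.get("dropped"))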