Filebeat AWS module not parsing all log files

Hello,

I am trying to parse AWS VPC flow logs.

After configuring the AWS module with the vpcflow fileset, logs started flowing.

With my current traffic, AWS generates around 6 million log lines every 6 hours.
Each log file (S3 object) contains around 80-90 thousand lines.
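
For scale, here is a rough back-of-the-envelope calculation from those figures (the per-object line count is an assumed midpoint, not a measurement):

# Rough averages derived from the volumes above (assumptions, not measurements)
lines_per_6h = 6_000_000        # ~6 million flow log lines every 6 hours
lines_per_object = 85_000       # midpoint of 80-90 thousand lines per S3 object

events_per_second = lines_per_6h / (6 * 3600)     # ~278 events/s on average
objects_per_6h = lines_per_6h / lines_per_object  # ~71 S3 objects per 6 hours

print(round(events_per_second), round(objects_per_6h))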

But I noticed something strange.

Around 2.7-3.0 million log lines were parsed correctly. The rest were not parsed at all, although I can see the file content in the message field.

More specifically, Filebeat is not parsing all log files (S3 objects). If Filebeat fails to parse a file, it does not parse a single line from that file.

Filebeat version: 7.7
There are no error or warning messages in the Filebeat log files.
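
To quantify the split, a sketch like the one below could count documents where the fileset populated its fields versus documents that only carry the raw lines in the message field. The host is taken from my output config; the index pattern and the aws.vpcflow.action field name are assumptions and may need adjusting to the actual mapping.

import requests

# Assumed host (from output.elasticsearch), default index pattern, and an
# assumed fileset field name -- adjust these to match your cluster.
ES = "http://es-node-01.xyz.local:9200"
INDEX = "filebeat-7.7.0-*"

def count(query):
    resp = requests.get(f"{ES}/{INDEX}/_count", json={"query": query})
    resp.raise_for_status()
    return resp.json()["count"]

parsed = count({"exists": {"field": "aws.vpcflow.action"}})
unparsed = count({"bool": {"must": [{"exists": {"field": "message"}}],
                           "must_not": [{"exists": {"field": "aws.vpcflow.action"}}]}})

print("parsed:", parsed)
print("unparsed (raw message only):", unparsed)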

filebeat.yml

filebeat.inputs:
- type: s3
  enabled: true
  queue_url: https://sqs.us-east-1.amazonaws.com/xxxxxxxxxxxx/vpcflowlogs-s3-notifocation
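  # Note: this plain s3 input appears to read from the same SQS queue
  # (vpcflowlogs-s3-notifocation) that the aws module's vpcflow fileset
  # uses below in aws.yml.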

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

logging:
  metrics.enabled: false

output.elasticsearch:
  hosts: ["es-node-01.xyz.local:9200", "es-node-02.xyz.local:9200", "es-node-03.xyz.local:9200"]

aws.yml

- module: aws
  cloudtrail:
    enabled: false
  cloudwatch:
    enabled: false
  ec2:
    enabled: false
  elb:
    enabled: false
  s3access:
    enabled: false
  vpcflow:
    enabled: true
    var.queue_url: https://sqs.us-east-1.amazonaws.com/xxxxxxxxxx/vpcflowlogs-s3-notifocation
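    # SQS visibility timeout: how long a received message stays hidden from
    # other consumers; if an S3 object isn't fully processed within this
    # window, the message becomes visible again and can be picked up anew.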
    var.visibility_timeout: 900s

I expect all log files to be parsed. Can you please help me with this?

Hi All,

I am having the same problem.

When my Filebeat processes a large amount of data in a short period of time, it cannot split out the fields and puts all the data into the message field!

I am currently testing Filebeat on a virtual machine.
When there are more than 1,000 hits per minute, the fields are not parsed.
When there are fewer than 1,000 hits per minute, the fields are processed correctly.

Sincerely
Zero
