Hello,
I'm trying to get the S3 input configured to ingest CloudFront logs. The problem is that, regardless of what settings I use, it runs into a log it can't process and gives me:
handleSQSMessage failed: json unmarshal sqs message body failed: invalid character 'd' in literal false (expecting 'a') 
The error message is always the same, so I'm guessing it keeps failing on the same message, or on one with a similar format. (If I'm reading the Go error right, the parser saw an f, expected the literal false, and hit a d instead, so whatever that body is, it doesn't look like a normal notification.)
When this happens, ingestion stops. Filebeat does not crash or report an error; it just stops, and the logs fill with:
2020-07-02T22:12:01.687Z    DEBUG    [input]    input/input.go:152    Run input
2020-07-02T22:12:08.656Z    WARN    [s3]    s3/input.go:298    Half of the set visibilityTimeout passed, visibility timeout needs to be updated
2020-07-02T22:12:08.741Z    INFO    [s3]    s3/input.go:305    Message visibility timeout updated to 300 seconds
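(For context, the DEBUG/WARN/INFO lines above come from running with roughly this logging config; I'm assuming I have the selector names right:)

    logging.level: debug
    logging.selectors: ["s3", "input"]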
Here is my config; I have made it as basic as possible for testing. The same behavior occurs whether I use expand_event_list_from_field or not.
    filebeat.inputs:
      - type: s3
        queue_url: https://sqs.us-east-1.amazonaws.com/328823170987/hotrock-filebeat-cloudfront-logs
        access_key_id: ${AWS_ACCESS_KEY_ID}
        secret_access_key: ${AWS_SECRET_ACCESS_KEY}
        expand_event_list_from_field: Records
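For reference, the messages in the queue that do parse look roughly like the standard S3 event notification, which is where the Records field above comes from. This is trimmed heavily, and the bucket/object values here are placeholders:

    {
      "Records": [
        {
          "eventSource": "aws:s3",
          "eventName": "ObjectCreated:Put",
          "s3": {
            "bucket": { "name": "example-cloudfront-log-bucket" },
            "object": { "key": "EXAMPLE.2020-07-02-22.abcdef.gz" }
          }
        }
      ]
    }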
A few questions:

- Is there any way I can track which message it failed on? I went back to the SQS queue, but the oldest message is always normal, properly formatted JSON with Records as the top-level key. (There's a rough script below that I was planning to use to check more systematically.)
- If we can identify the message, is there any way I can filter it out before the s3 input tries to parse it as JSON?
- Any other ideas on what could be causing this?
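Here's the boto3 sketch mentioned above for hunting for a body that isn't valid JSON. It should be non-destructive, since VisibilityTimeout=0 makes the sampled messages visible to other consumers again immediately. The region, batch size, and truncation length are just my guesses; I haven't confirmed this will catch the same message Filebeat trips on:

    import json

    import boto3  # assumes boto3 is installed and AWS credentials are configured

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/328823170987/hotrock-filebeat-cloudfront-logs"

    sqs = boto3.client("sqs", region_name="us-east-1")

    # Sample a batch of messages without consuming them: VisibilityTimeout=0
    # returns them to the queue for other consumers right away.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=5,
        VisibilityTimeout=0,
    )

    for msg in resp.get("Messages", []):
        body = msg["Body"]
        try:
            json.loads(body)
        except ValueError as err:
            # A body that fails here should be the kind the s3 input chokes on.
            print(f"unparseable message {msg['MessageId']}: {err}")
            print(body[:500])

I assume I'd have to stop Filebeat first, since its own receives would hide any in-flight message behind the input's visibility timeout.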