S3 Filebeat Input error

There is an error in the Filebeat S3 input after integrating AWS SQS with Filebeat. The error occurred after receiving this message: handleSQSMessage failed: json unmarshal sqs message body failed: invalid character 'e' in literal true (expecting 'r').

Using Filebeat version 7.9.0 (latest).
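For context, this error string appears to come from Go's encoding/json (Filebeat is written in Go): a message body that is not JSON and happens to start with 't' (for example, a plain-text test message) reproduces it exactly. A minimal sketch, not Filebeat's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// unmarshalBody mirrors the failing step in the S3 input: decoding an
// SQS message body as JSON.
func unmarshalBody(body string) error {
	var event map[string]interface{}
	return json.Unmarshal([]byte(body), &event)
}

func main() {
	// A non-JSON body starting with 't' makes the decoder expect the
	// literal "true" and fail on the second character:
	fmt.Println(unmarshalBody("test message"))
	// invalid character 'e' in literal true (expecting 'r')
}
```

This suggests the queue occasionally receives a body that is not the S3 event notification JSON, e.g. a manually sent test message or a notification in a different format.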

The logs:


26T10:05:09.712Z        DEBUG   [processors]    processing/processors.go:187    Publish event: { ... }
2020-08-26T10:05:09.713Z        DEBUG   [s3]    s3/input.go:269 handleS3Objects succeed 
2020-08-26T10:05:10.714Z        DEBUG   [logstash]      logstash/async.go:172   2 events out of 2 events sent to logstash host  Continue sending
2020-08-26T10:05:10.723Z        DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [1: 0, 2]
2020-08-26T10:05:10.724Z        DEBUG   [publisher]     memqueue/eventloop.go:535       broker ACK events: count=2, start-seq=3, end-seq=4

2020-08-26T10:05:10.724Z        DEBUG   [acker] beater/acker.go:64      stateless ack   {"count": 2}
2020-08-26T10:05:10.724Z        DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:2
2020-08-26T10:05:10.724Z        DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack
2020-08-26T10:05:10.724Z        DEBUG   [s3]    s3/input.go:290 Deleting message from SQS: 0xc00053a190
2020-08-26T10:05:10.780Z        DEBUG   [s3]    s3/input.go:240 Processing 1 messages
2020-08-26T10:05:10.780Z        ERROR   [s3]    s3/input.go:257 handleSQSMessage failed: json unmarshal sqs message body failed: invalid character 'e' in literal true (expecting 'r')
2020-08-26T10:05:13.837Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:05:23.837Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:05:33.837Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:05:43.837Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:05:53.837Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:06:03.838Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:06:13.838Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:06:23.838Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:06:33.838Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:06:43.838Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:06:53.839Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:07:03.839Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:07:13.839Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:07:23.839Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:07:33.840Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:07:40.780Z        WARN    [s3]    s3/input.go:299 Half of the set visibilityTimeout passed, visibility timeout needs to be updated
2020-08-26T10:07:40.943Z        INFO    [s3]    s3/input.go:306 Message visibility timeout updated to 300 seconds
2020-08-26T10:07:43.840Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:07:53.840Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:08:03.840Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:08:13.840Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:08:23.841Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:08:33.841Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:08:43.841Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:08:53.841Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:09:03.842Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:09:13.832Z        DEBUG   [input] input-cursor/clean.go:59        Start store cleanup     {"input_type": "o365audit"}
2020-08-26T10:09:13.832Z        DEBUG   [input] input-cursor/clean.go:68        No entries to remove were found {"input_type": "o365audit"}
2020-08-26T10:09:13.832Z        DEBUG   [input] input-cursor/clean.go:69        Done store cleanup      {"input_type": "o365audit"}
2020-08-26T10:09:13.842Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:09:23.842Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:09:33.842Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:09:43.842Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:09:53.843Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:10:03.843Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:10:10.943Z        WARN    [s3]    s3/input.go:299 Half of the set visibilityTimeout passed, visibility timeout needs to be updated
2020-08-26T10:10:11.025Z        INFO    [s3]    s3/input.go:306 Message visibility timeout updated to 300 seconds
2020-08-26T10:10:13.843Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:10:23.843Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:10:33.844Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:10:43.844Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:10:53.844Z        DEBUG   [input] input/input.go:139      Run input
2020-08-26T10:11:03.845Z        DEBUG   [input] input/input.go:139      Run input

How can I fix this issue?

Thanks in advance.

Hi! Thanks for posting here! This looks like a bug to me. Probably the SQS message is in a different format than what we support in the Filebeat S3 input. Do you have an SQS message you can copy-paste here so I can try to reproduce the issue locally? Which service is the log coming from? Thank you!!

Hi,
Sure! The SQS message body:

    {
      "Records": [
        {
          "eventVersion": "2.1",
          "eventSource": "aws:s3",
          "awsRegion": "region-x",
          "eventTime": "2020-08-27T11:35:01.432Z",
          "eventName": "ObjectCreated:Put",
          "userIdentity": {
            "principalId": "AWS:"
          },
          "requestParameters": {
            "sourceIPAddress": "xxx:xxx"
          },
          "responseElements": {
            "x-amz-request-id": "xxxxxH2xx9xxxx",
            "x-amz-id-2": "xxxxxxxxxqxxPBD4v7R0JxxxxxxxxxxxB"
          },
          "s3": {
            "s3SchemaVersion": "1.0",
            "configurationId": "logforwarderfilebeatxxx",
            "bucket": {
              "name": "bucketname",
              "ownerIdentity": {
                "principalId": "A2Qxxxxxxxx"
              },
              "arn": "arn:aws:s3:::bucketname"
            },
            "object": {
              "key": "AWSLogs/xxxxxx/elasticloadbalancing/region -x/2020/08/27/account_id_elasticloadbalancing_region_x_app.elasticloadbalancerLogs.log.gz",
              "size": 2519,
              "eTag": "c5632axxxx",
              "sequencer": "005F479xxxxxx"
            }
          }
        }
      ]
    }

The logs are from Elastic Load Balancer access logs; the flow is Elastic Load Balancer (access logs) --> S3 --> SQS --> Filebeat --> Kibana. In the first test the messages were successfully processed and forwarded to Kibana, but in the second test we hit this issue.

I hope this info helps in fixing the issue.

Thank you!

Hi, do we have an update here?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.

Sorry, I've been trying to reproduce this issue and still no luck... I tested with the SQS message you gave me and it goes through the S3 input fine on my side. I suspect the AWS SQS message occasionally has an invalid JSON body. I created a GitHub issue to track the investigation of this problem: https://github.com/elastic/beats/issues/21726
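In the meantime, one way to narrow this down is to pull raw bodies from the queue and check whether each one is valid JSON with the non-empty top-level "Records" array the S3 input expects. A rough debugging sketch (this is not Filebeat's actual validation logic):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// looksLikeS3Notification reports whether an SQS message body is valid
// JSON containing a non-empty top-level "Records" array, the shape the
// S3 input expects from an S3 event notification.
func looksLikeS3Notification(body string) bool {
	var event struct {
		Records []json.RawMessage `json:"Records"`
	}
	if err := json.Unmarshal([]byte(body), &event); err != nil {
		return false
	}
	return len(event.Records) > 0
}

func main() {
	fmt.Println(looksLikeS3Notification(`{"Records":[{"eventSource":"aws:s3"}]}`)) // true
	fmt.Println(looksLikeS3Notification("test message"))                           // false
	fmt.Println(looksLikeS3Notification(`{"Event":"s3:TestEvent"}`))               // false
}
```

Running bodies received from the queue through a check like this should show whether something other than S3 event notifications (e.g. plain-text messages) is landing in the same queue.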

Sorry again for the delay!!