Hi
I am having trouble getting the Filebeat AWS module to work with CloudTrail logs. I believe I have followed the documentation correctly, but I can't get it working.
All components (Filebeat, Elasticsearch and Kibana) are running v7.6.2, on-prem rather than in the cloud.
My filebeat.yml file has this configuration for the S3 input:
filebeat.inputs:
- type: s3
  shared_credential_file: /root/.aws/credentials
  credential_profile_name: default
  queue_url: https://sqs.eu-west-2.amazonaws.com/12345678/QueueName
  expand_event_list_from_field: Records
  visibility_timeout: 300
My aws.yml file has the cloudtrail fileset enabled; all other filesets (s3access etc.) are disabled.
cloudtrail:
  enabled: true
  var.queue_url: https://sqs.eu-west-2.amazonaws.com/12345678/QueueName
  var.shared_credential_file: /root/.aws/credentials
  var.credential_profile_name: default
I believe this is all OK. As per the documentation, CloudTrail logs are delivered as JSON objects with a top-level Records array, hence the expand_event_list_from_field: Records line (see the sketch below). I can get Logstash to pull data from this S3 bucket, but I want Filebeat to process it so it's all ECS-friendly. Using Logstash I can also verify that the data is there and delivered as JSON, so I don't think the problem is on the AWS side.
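For reference, this is roughly what the objects CloudTrail delivers to the bucket look like, which is my understanding of why the Records setting is needed (trimmed to a couple of fields; the values here are illustrative, not from my actual logs):

{
  "Records": [
    {
      "eventVersion": "1.05",
      "eventSource": "s3.amazonaws.com",
      "eventName": "GetObject"
    }
  ]
}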
I have also proven Filebeat itself, as the Palo Alto panw module is working on this same instance, so the overall Filebeat and Elastic configuration is OK.
When I run Filebeat, the main error I see in journalctl -xeu filebeat is:
ERROR [s3] s3/input.go:254 handleSQSMessage failed: json unmarshal sqs message body failed: invalid character 'e' in literal true (expecting 'r')
After this I usually get a visibility-timeout-related message, but I believe that is just because the message is failing to be processed. I have turned on debug logging and see nothing further relating to S3 or AWS, so I'm now out of ideas.
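For comparison, my understanding (an assumption on my part) is that the s3 input expects each SQS message body to be a raw S3 event notification, i.e. JSON along these lines (bucket name and object key are made up for illustration):

{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "my-cloudtrail-bucket" },
        "object": { "key": "AWSLogs/12345678/CloudTrail/eu-west-2/file.json.gz" }
      }
    }
  ]
}

If I read the Go unmarshal error correctly, the parser hit 't' then 'e' where it expected the 'r' of a literal "true", which suggests the message body my queue is delivering starts with something like "te…" rather than with JSON like the above.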
Any help appreciated.