I want to use Filebeat to fetch CrowdStrike Falcon Data Replicator (FDR) logs with the aws-s3 input, and eventually to take advantage of its parallel-processing support, given the sheer volume of data FDR produces. But I'm stuck just getting a single Filebeat instance to fetch the data successfully. Here's my config:
filebeat.inputs:
- type: aws-s3
  enabled: true
  queue_url: https://sqs.us-west-1.amazonaws.com/$CUSTOMER_ID/$QUEUE
  shared_credential_file: /etc/filebeat/fdr_credentials
  max_number_of_messages: 10

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1

#================================ General =====================================
tags: ["FDR"]

#================================ Outputs =====================================
output.logstash:
  hosts: ["localhost:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ssl/ca.crt.pem"]
  ssl.certificate: "/etc/filebeat/ssl/beats.crt.pem"
  ssl.key: "/etc/filebeat/ssl/beats.key.pem"

#================================ Logging =====================================
logging.level: info
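In case it matters: /etc/filebeat/fdr_credentials is a standard AWS shared-credentials (INI) file with a single profile, along these lines (values redacted):

[default]
aws_access_key_id = $ACCESS_KEY_ID
aws_secret_access_key = $SECRET_ACCESS_KEY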
When I run it, though, Filebeat shuts the input down and logs this:
2021-06-18T13:56:11.834Z INFO beater/filebeat.go:515 Stopping filebeat
2021-06-18T13:56:11.834Z INFO beater/crawler.go:148 Stopping Crawler
2021-06-18T13:56:11.834Z INFO beater/crawler.go:158 Stopping 1 inputs
2021-06-18T13:56:11.834Z INFO cfgfile/reload.go:227 Dynamic config reloader stopped
2021-06-18T13:56:11.834Z INFO [crawler] beater/crawler.go:163 Stopping input: 12213181681190882779
2021-06-18T13:56:11.834Z INFO [input.aws-s3] compat/compat.go:132 Input 'aws-s3' stopped
2021-06-18T13:56:11.834Z INFO beater/crawler.go:178 Crawler stopped
2021-06-18T13:56:11.834Z INFO [registrar] registrar/registrar.go:132 Stopping Registrar
2021-06-18T13:56:11.834Z INFO [registrar] registrar/registrar.go:166 Ending Registrar
2021-06-18T13:56:11.835Z INFO [registrar] registrar/registrar.go:137 Registrar stopped
2021-06-18T13:56:11.836Z ERROR [input.aws-s3] awss3/collector.go:99 SQS ReceiveMessageRequest failed: EC2RoleRequestError: no EC2 instance role found
caused by: EC2MetadataError: failed to make Client request
caused by: <?xml version="1.0" encoding="iso-8859-1"?>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>404 - Not Found</title>
</head>
<body>
<h1>404 - Not Found</h1>
</body>
</html>
{"queue_url": "https://sqs.us-west-1.amazonaws.com/$CUSTOMER_ID/$QUEUE", "region": "us-west-1"}
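As far as I can tell, that error means Filebeat never loaded my shared credentials file and instead fell back to asking the EC2 instance metadata service for an instance role; this host isn't an EC2 instance, so that request dies with the 404 page above. To take Filebeat out of the picture, the queue can be polled directly with the same credentials file (AWS_SHARED_CREDENTIALS_FILE is the standard environment variable the AWS SDKs and CLI honor; queue URL redacted as above):

AWS_SHARED_CREDENTIALS_FILE=/etc/filebeat/fdr_credentials \
  aws sqs receive-message \
    --region us-west-1 \
    --queue-url "https://sqs.us-west-1.amazonaws.com/$CUSTOMER_ID/$QUEUE" \
    --max-number-of-messages 1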
I know the credentials and the SQS queue URL are correct, because pointing the aws CLI at the same credentials file lets me see all of the directories and gzipped files in the bucket.
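For example, a listing along these lines succeeds (bucket name redacted like everything else):

AWS_SHARED_CREDENTIALS_FILE=/etc/filebeat/fdr_credentials \
  aws s3 ls s3://$FDR_BUCKET/ --recursive

So the keys themselves work; Filebeat just never seems to load them. What am I doing wrong?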