Using Filebeat to fetch CrowdStrike Falcon Data Replicator (FDR) logs with S3 SQS

I'm interested in using Filebeat to fetch CrowdStrike Falcon Data Replicator (FDR) logs with the aws-s3 input, and in using its parallel-processing functionality, given the sheer volume of data FDR produces. So far, though, I can't get even a single Filebeat instance to fetch the data successfully. Here's my config:

filebeat.inputs:
- type: aws-s3
  enabled: true
  queue_url: $CUSTOMER_ID/$QUEUE
  shared_credential_file: /etc/filebeat/fdr_credentials
  max_number_of_messages: 10

#============================= Filebeat modules ===============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1

#================================ General =====================================

tags: ["FDR"]

#================================ Outputs =====================================

output.logstash:
  hosts: ["localhost:5044"]

  ssl.certificate_authorities: ["/etc/filebeat/ssl/ca.crt.pem"]
  ssl.certificate: "/etc/filebeat/ssl/beats.crt.pem"
  ssl.key: "/etc/filebeat/ssl/beats.key.pem"

#================================ Logging =====================================

logging.level: info
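For reference, the minimal shape I understand this input expects is something like the following (the queue URL, region, and profile name here are placeholders, not my real values):

```yaml
filebeat.inputs:
- type: aws-s3
  # Must be the full SQS queue URL (as returned by `aws sqs get-queue-url`),
  # not an "<account>/<queue>" shorthand. Placeholder value below.
  queue_url: https://sqs.us-west-1.amazonaws.com/123456789012/my-fdr-queue
  shared_credential_file: /etc/filebeat/fdr_credentials
  # Which profile to read from that file; "default" is assumed if omitted.
  credential_profile_name: default
  max_number_of_messages: 10
```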

But when I run it, I get the following:

2021-06-18T13:56:11.834Z        INFO    beater/filebeat.go:515  Stopping filebeat
2021-06-18T13:56:11.834Z        INFO    beater/crawler.go:148   Stopping Crawler
2021-06-18T13:56:11.834Z        INFO    beater/crawler.go:158   Stopping 1 inputs
2021-06-18T13:56:11.834Z        INFO    cfgfile/reload.go:227   Dynamic config reloader stopped
2021-06-18T13:56:11.834Z        INFO    [crawler]       beater/crawler.go:163   Stopping input: 12213181681190882779
2021-06-18T13:56:11.834Z        INFO    []  compat/compat.go:132    Input 'aws-s3' stopped
2021-06-18T13:56:11.834Z        INFO    beater/crawler.go:178   Crawler stopped
2021-06-18T13:56:11.834Z        INFO    [registrar]     registrar/registrar.go:132      Stopping Registrar
2021-06-18T13:56:11.834Z        INFO    [registrar]     registrar/registrar.go:166      Ending Registrar
2021-06-18T13:56:11.835Z        INFO    [registrar]     registrar/registrar.go:137      Registrar stopped
2021-06-18T13:56:11.836Z        ERROR   []  awss3/collector.go:99   SQS ReceiveMessageRequest failed: EC2RoleRequestError: no EC2 instance role found      {"queue_url": "$CUSTOMER_ID/$QUEUE", "region": "us-west-1"}
caused by: EC2MetadataError: failed to make Client request
caused by: <?xml version="1.0" encoding="iso-8859-1"?>
<html xmlns="" xml:lang="en" lang="en">
  <title>404 - Not Found</title>
  <h1>404 - Not Found</h1>

I know the creds and the SQS queue are right, because with the aws CLI I can list all of the dirs and gzipped files in the bucket. What am I doing wrong?

What credentials are you using? You haven't defined any in the input config, so it looks like it's falling back to the default credential chain, which you don't have set up.

Thanks for your reply. I'm using the creds that are defined in "shared_credential_file: /etc/filebeat/fdr_credentials".
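For what it's worth, that file should be in the standard AWS shared-credentials (INI) format; my understanding is that if the SDK can't find a usable profile in it, it falls through the rest of the credential chain and finally tries the EC2 instance-metadata endpoint, which would match the EC2RoleRequestError above. Roughly like this (the key values are placeholders):

```ini
# /etc/filebeat/fdr_credentials -- standard AWS shared-credentials format.
# The section name is the profile; Filebeat looks for "default" unless
# credential_profile_name in the input says otherwise. Keys are placeholders.
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```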

I think my issue is that I just want the raw content of the gzipped files being written to the S3 bucket, not cloudtrail, cloudwatch, ec2, elb, s3access, or vpcflow logs.
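As far as I can tell, the bare aws-s3 input already handles that case: it decompresses gzipped objects and emits one event per line in the `message` field, so FDR's newline-delimited JSON shouldn't need any of those filesets. To turn each line into structured fields, one option would be a decode_json_fields processor (a sketch, not something from the FDR docs):

```yaml
processors:
  # FDR objects are gzipped NDJSON; the aws-s3 input emits one event per
  # line in "message", so decode that JSON into top-level fields here.
  - decode_json_fields:
      fields: ["message"]
      target: ""
```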

Thanks again

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.