Cisco Umbrella self-hosted S3: the queue is not processing

Hello, I cannot connect Cisco Umbrella logs from a self-hosted S3 bucket to Filebeat.
In cisco.yml I have:

- module: cisco
  umbrella:
    enabled: true
    var.input: s3
    var.queue_url: https://sqs.eu-west-1.amazonaws.com/111111111111/umbrella-tmp-sqs
    var.access_key_id: "${AWS_USR}"
    var.secret_access_key: "${AWS_PWD}"

In filebeat.yml I have:
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
output.logstash:
  hosts: ["localhost:5044"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

What am I doing wrong? In debug mode I get:

2021-02-03T19:00:41.028+0200    DEBUG   [input.s3]      s3/collector.go:124     Processing 1 messages   {"queue_url": "https://sqs.eu-west-1.amazonaws.com/111111111111/umbrella-tmp-sqs", "region": "eu-west-1"}
2021-02-03T19:00:41.028+0200    DEBUG   [input.s3]      s3/collector.go:146     handleSQSMessage succeed and returned 0 sets of S3 log info     {"queue_url": "https://sqs.eu-west-1.amazonaws.com/111111111111/umbrella-tmp-sqs", "region": "eu-west-1"}
2021-02-03T19:00:41.029+0200    DEBUG   [input.s3]      s3/collector.go:154     handleS3Objects succeed {"queue_url": "https://sqs.eu-west-1.amazonaws.com/111111111111/umbrella-tmp-sqs", "region": "eu-west-1"}
2021-02-03T19:00:41.029+0200    DEBUG   [input.s3]      s3/collector.go:180     Deleting message from SQS: {"queue_url": "https://sqs.eu-west-1.amazonaws.com/111111111111/umbrella-tmp-sqs", "region": "eu-west-1"}

The message is in the queue, but it seems to be ignored. Could that be because it is a .csv.gz file?

Hello @YegorKovylyayev. I don't see any reason why the messages would be ignored, unless the account tied to the access_key_id and secret does not have permission to read the logs.

There may be other debug logs that are more useful than the four lines in the post. Would you mind deleting/moving the existing logfile, starting Filebeat up again, letting it run for a few minutes, and sharing the output?
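If it helps, running Filebeat in the foreground with all debug selectors enabled should capture everything (a sketch; adjust paths for your install):

filebeat -e -d "*"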

Hello @Marius_Iversen
I've created the S3 bucket, SNS topic, and SQS queue from the same account, and I can fetch the logs with the AWS CLI using that same account.
Here is the log: Log
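For reference, I can read both the queue and the bucket with commands roughly like these (the bucket name below is just a placeholder):

aws sqs receive-message \
  --queue-url https://sqs.eu-west-1.amazonaws.com/111111111111/umbrella-tmp-sqs \
  --region eu-west-1

aws s3 ls s3://<umbrella-bucket>/ --region eu-west-1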

Hi @YegorKovylyayev, thanks for the log! It looks like Filebeat can see the SQS message but cannot determine which S3 bucket/object it points to. Do you have the S3-SQS notification set up so that an SQS message is created whenever a new log object lands in S3?
If the S3-SQS setup is good, could you send us the actual SQS message content?
Also, what version of Filebeat are you running? If you can upgrade to the latest version, it will show more debug-level messages, I believe. Thanks!
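For comparison, a message created directly by an S3 event notification normally has a body along these lines (trimmed; the bucket and key here are made-up placeholders):

{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "eu-west-1",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": {
          "name": "umbrella-logs-example",
          "arn": "arn:aws:s3:::umbrella-logs-example"
        },
        "object": {
          "key": "dnslogs/2021-02-03/example.csv.gz"
        }
      }
    }
  ]
}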

I have version 7.10.2.
Sure, here is the content, and if needed I can even reproduce the whole setup as a set of command-line steps:
sqs message


Hmm, this is odd! Thank you for the SQS message sample! Filebeat should be able to collect the S3 info from this. Could you upgrade to Filebeat 7.11.0 and get some debug-level logs, please? I tested locally with your SQS message and it determined the S3 info from the message just fine. Thanks!!

Upgraded, and here is the full debug log afterwards: Logs after upgrade
Maybe I'm doing something wrong during the configuration steps? Here is how I set up the AWS part: s3 sqs sns setup
Thank you for your help.

I even removed Filebeat and installed version 7.11 from scratch, and it doesn't help.

Ahh, thanks for sharing the steps you used to set up S3-SQS. I think the problem is there: we are not using SNS at all. The connection should be between S3 and SQS directly. When SNS sits between S3 and SQS, the queue message is wrapped in an SNS envelope rather than being the raw S3 event, so Filebeat cannot find the bucket/object info in it. Getting AWS logs from S3 using Filebeat and the Elastic Stack | Elastic Blog shows step by step how to set this up. Hopefully this will help!! :crossed_fingers: :crossed_fingers:
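For example, with the AWS CLI the direct S3-to-SQS notification can be configured roughly like this (a sketch; the bucket name is a placeholder, and the SQS queue policy must also allow s3.amazonaws.com to send messages):

cat > notification.json <<'EOF'
{
  "QueueConfigurations": [
    {
      "QueueArn": "arn:aws:sqs:eu-west-1:111111111111:umbrella-tmp-sqs",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
EOF

aws s3api put-bucket-notification-configuration \
  --bucket <umbrella-bucket> \
  --notification-configuration file://notification.json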
