SQS-SNS-S3 plugin error: queue not valid for endpoint

@andrewkroh do you see anything in the logs above?

It's saying the queue URL is not in the correct format, but I am not familiar with the particulars.

It is in AWS GovCloud, I believe. Is that an issue?

You can see the config above...

- type: aws-s3
  queue_url: https://sqs.us-gov-east-redacted.
  role_arn: arn:aws-us-gov:iam::redacted
  expand_event_list_from_field: Records
  bucket_list_interval: 300s
  file_selectors:
    - regex: '/CloudTrail/'
    - regex: '/CloudTrail-Digest/'
    - regex: '/CloudTrail-Insight/'

Is that a valid GovCloud URL? I thought the domains were amazonaws.com, not amazon.com.

If it is in fact the correct domain, then set endpoint: amazon.com.
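The endpoint option sits on the aws-s3 input itself, next to the queue_url. Roughly like this (just a sketch, I have not tested it against GovCloud, and the placeholders are mine):

- type: aws-s3
  # endpoint should match the domain used in the queue_url
  queue_url: https://sqs.{REGION}.amazon.com/{ACCOUNT_NUMBER}/{QUEUE_NAME}
  endpoint: amazon.com
  role_arn: arn:aws-us-gov:iam::{ACCOUNT_NUMBER}:role/{ROLE_NAME}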

The SQS queue URL is valid; I copied it from the AWS console.
Can you please tell me where to set endpoint: amazon.com? Is that in the Filebeat input?
Can you also confirm whether this Fix hardcoded region is resolved? I am on Filebeat 7.14.

Hi @afoster

Yes, that fix is in the codebase for 7.14; it was fixed in 7.12.1, see here.

Also, I found this discuss thread, and this post by the person who found and opened that issue states it should not affect GovCloud, see here.

According to this

So perhaps still just a config issue?
So this is the expected format:

https://sqs.{REGION_ENDPOINT}.{ENDPOINT}/{ACCOUNT_NUMBER}/{QUEUE_NAME}

Is the config in this format?

- type: aws-s3
  queue_url: https://sqs.us-gov-east-1.amazonaws.com/{ACCOUNT_NUMBER}/{QUEUE_NAME}

I have actually confirmed it should have been amazonaws. I modified the config to use the correct SQS queue URL. However, I am still getting invalid token errors.

 The security token included in the request is invalid
        status code: 403, request id: ea537a51-790c-4079-b9c2-7c6f180d2514      {"id": "F887DBA4DA00DEE3", "queue_url": "https://sqs.us-gov-east-1.amazonaws.com/redacted", "region": "us-gov-east-1"}
2021-09-10T20:26:19.465Z        ERROR   [input.aws-s3]  awss3/collector.go:106  SQS ReceiveMessageRequest failed: InvalidClientTokenId: The security token included in the request is invalid
        status code: 403, request id: 1c06c489-6500-45d7-9fa3-0a0e4ccfc4d5      {"id": "F887DBA4DA00DEE3", "queue_url": "https://sqs.us-gov-east-1.amazonaws.com/redacted", "region": "us-gov-east-1"}
2021-09-10T20:26:19.478Z        ERROR   [input.aws-s3]  awss3/collector.go:106  SQS ReceiveMessageRequest failed: InvalidClientTokenId: The security token included in the request is invalid
        status code: 403, request id: e54a0a86-57ef-4421-b471-f105eb45b37d      {"id": "F887DBA4DA00DEE3", "queue_url": "https://sqs.us-gov-east-1.amazonaws.com/redacted", "region": "us-gov-east-1"}


It is now. Thank you, Stephen. I incorrectly dropped the "aws" while editing. I have corrected that to match your URL and am still seeing errors.

Ok good..

I'm not as familiar with the authentication portion, but I would definitely take a close look at the docs here.

I think you are close...just need to get the creds right.

The aws module requires AWS credentials configuration in order to make AWS API calls. Users can either use access_key_id, secret_access_key and/or session_token, or use role_arn AWS IAM role, or use shared AWS credentials file.

Please see AWS credentials options for more details.
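Roughly, the three styles look like this on the input (sketch only, placeholder values, double-check the option names against the docs for your version):

- type: aws-s3
  queue_url: https://sqs.us-gov-east-1.amazonaws.com/{ACCOUNT_NUMBER}/{QUEUE_NAME}

  # Option 1: static keys (plus session_token for temporary credentials)
  access_key_id: '{ACCESS_KEY_ID}'
  secret_access_key: '{SECRET_ACCESS_KEY}'
  #session_token: '{SESSION_TOKEN}'

  # Option 2: assume an IAM role instead
  #role_arn: arn:aws-us-gov:iam::{ACCOUNT_NUMBER}:role/{ROLE_NAME}

  # Option 3: shared AWS credentials file
  #credential_profile_name: default
  #shared_credential_file: /etc/filebeat/aws_credentials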

Thank you Stephen. I will look into this. Appreciate your assistance.

Hello, I resolved the token issue, but the AWS input errors remain. Any ideas? Is this the correct input?

etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path>
ERROR        [input.aws-s3]        awss3/collector.go:106      >
Sep 15 19:42:27 ip-redacted filebeat[449]:         status code: 403, request id: 7bef1c17-c967-4875-b1cd-df9e89cd14a1        {"id": "8448D>
Sep 15 19:42:27 ip-redacted filebeat[449]: 2021-09-15T19:42:27.061Z        ERROR        [input.aws-s3]        awss3/collector.go:106      >

I have also noted that /var/log/syslog and /usr have filled up. I am not sure why they are full.

Those error lines are incomplete / truncated, so it is hard to tell. Can you provide a fuller error dump like you did above?

But since it is a 403, it is still an auth issue.

What does your aws-s3 input look like now?
Please show it as complete as possible and just anonymize the creds / queue name.

Also, is this FIPS enabled? See here.
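If it is, there is a fips_enabled flag in the AWS options that switches to the FIPS service endpoints, something like this (sketch, please verify against the docs for 7.14):

- type: aws-s3
  queue_url: https://sqs.us-gov-east-1.amazonaws.com/{ACCOUNT_NUMBER}/{QUEUE_NAME}
  # changes the service name so the FIPS endpoint is used
  fips_enabled: true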

Also, can you look at the SQS console and see if there are any auth errors on that side?

Thank you, Stephen. I will look into FIPS. I am not using the aws module; I am only using the Filebeat keystore for credentials.

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
- type: aws-s3
  queue_url: https://sqs.us-gov-east-1.amazonaws.com/redacted
  role_arn: arn:aws-us-gov:iam::redacted
  expand_event_list_from_field: Records
  bucket_list_interval: 300s
  file_selectors:
    - regex: '/CloudTrail/'
    - regex: '/CloudTrail-Digest/'
    - regex: '/CloudTrail-Insight/'
  # Change to true to enable this input configuration.
  enabled: true

# filestream is an input for collecting log messages from files. It is going to replace log input in the future.
- type: filestream
  enabled: false

# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
  output.logstash:
    hosts: ["localhost:5044"]
  output.elasticsearch.password: "$[ES_PWD}"

If you put the credentials directly into the filebeat.yml does it work?
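i.e., with the keys pasted in as plain strings just to rule the keystore out, something like this (sketch, placeholder values):

- type: aws-s3
  queue_url: https://sqs.us-gov-east-1.amazonaws.com/redacted
  role_arn: arn:aws-us-gov:iam::redacted
  access_key_id: '{ACCESS_KEY_ID}'          # paste the real key here for the test
  secret_access_key: '{SECRET_ACCESS_KEY}'  # paste the real secret here for the test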

Not sure what the reference to the module is. I understand you're using the aws-s3 input.

Okay, I just wanted to add for clarity what module I am using.
Great suggestion, Stephen. I'll do that straightaway and let you know the outcome.

And when I'm speaking of credentials, I'm speaking of the AWS S3 credentials such as
access_key_id and secret_access_key.

Yes, understood. Thank you.

Also, this is incorrect: there is no reason to include the Elasticsearch password when you are sending the output to Logstash.
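With the Logstash output, the outputs section only needs something like:

output.logstash:
  hosts: ["localhost:5044"]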

Ah, the documentation led me to believe this was required for the Filebeat keystore.
Apologies.

Plus, if you use the keystore, you still need to put the fields in the yml (example below); it does not automatically put them in the filebeat.yml. But I would try first with them directly in the filebeat.yml.

Adding the keys to the keystore (it will prompt for the value)

filebeat keystore add AWS_ACCESS_KEY_ID

filebeat keystore add AWS_SECRET_ACCESS_KEY

Then in the filebeat.yml:

- type: aws-s3
  queue_url: https://sqs.us-gov-east-1.amazonaws.com/redacted
  role_arn: arn:aws-us-gov:iam::redacted
  expand_event_list_from_field: Records
  bucket_list_interval: 300s
  access_key_id: ${AWS_ACCESS_KEY_ID}
  secret_access_key: ${AWS_SECRET_ACCESS_KEY}

  file_selectors:
    - regex: '/CloudTrail/'
    - regex: '/CloudTrail-Digest/'
    - regex: '/CloudTrail-Insight/'

  # Change to true to enable this input configuration.
  enabled: true

Thank you very much, Stephen. Much appreciated.