Functionbeat deploy fails, citing issue with S3 bucket

I'm trying to use Functionbeat for the first time, with a single function of type kinesis, on Functionbeat v6.8.13.

When I run `functionbeat deploy kinesis` (where `kinesis` is the name I chose for the function), I get this error:

```
'enterprise-elk-dev-s3bucket' already exist and you don't have permission to access it
Fail to deploy 1 function(s)
```

However, I can list, put, and rm objects in that same bucket using the `aws s3` CLI. Any ideas what is wrong?

Here's the contents of my functionbeat.yml:

```yaml
functionbeat.provider.aws.deploy_bucket: "enterprise-elk-dev-s3bucket"
functionbeat.provider.aws.functions:
  - name: kinesis
    enabled: true
    type: kinesis
    description: "lambda function for Kinesis events"
    triggers:
      - event_source_arn: arn:aws:kinesis:us-east-1:xxxxxxxxxxxx:stream/kinesis-stream-for-elk

name: lambda2kinesis2es

setup.kibana:

output.elasticsearch:
  hosts: ["https://elkdev-ingest.nml.com:443"]
  username: "elastic"
  password: "${ES_PWD}"
  ssl:
    verification_mode: "none"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
```

And here is the command I tried, its output, and some more commands showing that the machine I ran it from has permission to put files in the bucket:

```
$ ./functionbeat -e deploy kinesis
2021-01-11T20:40:01.053Z INFO instance/beat.go:611 Home path: [/home/azscomp/functionbeat-6.8.13-linux-x86_64] Config path: [/home/azscomp/functionbeat-6.8.13-linux-x86_64] Data path: [/home/azscomp/functionbeat-6.8.13-linux-x86_64/data] Logs path: [/home/azscomp/functionbeat-6.8.13-linux-x86_64/logs]
2021-01-11T20:40:01.053Z INFO instance/beat.go:618 Beat UUID: f331a229-6c10-45b2-82c4-068775f0e363
Function: kinesis, could not deploy, error: bucket 'enterprise-elk-dev-s3bucket' already exist and you don't have permission to access it
Fail to deploy 1 function(s)

$ aws s3 ls s3://enterprise-elk-dev-s3bucket
/usr/lib/python2.7/site-packages/awscli/customizations/cloudfront.py:17: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in the next release.
  from cryptography.hazmat.primitives import serialization, hashes

$ aws s3 cp package.zip s3://enterprise-elk-dev-s3bucket/package.zip
upload: ./package.zip to s3://enterprise-elk-dev-s3bucket/package.zip

$ aws s3 ls s3://enterprise-elk-dev-s3bucket
2021-01-11 20:40:30   15263071 package.zip

$ aws s3 rm s3://enterprise-elk-dev-s3bucket/package.zip
delete: s3://enterprise-elk-dev-s3bucket/package.zip
```

So, any idea why Functionbeat says I can't put objects in the S3 bucket? Also, I tried changing the bucket name in functionbeat.yml to a nonexistent bucket and rerunning the functionbeat setup command, and it fails with the same error claiming the bucket exists, even though in that test the bucket did not exist. So I don't know that the error output can be trusted.

Hey @Jonathan_Detert,

I was looking at the code that returns this error (here), and it seems to be a catch-all when checking for the existence of the bucket: every failure except "bucket doesn't exist" produces this same message.
For example, if authentication is failing, Functionbeat is going to return the same error for any bucket name.
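To illustrate the ambiguity, here is a sketch (not Functionbeat's actual Go code) of how an S3 `HeadBucket`-style check could disambiguate the cases instead of collapsing them; the 404/403 split follows S3's HeadBucket semantics:

```python
# Illustrative sketch only -- not Functionbeat's implementation.
# Maps the HTTP status of an S3 HeadBucket-style check to a precise message.
def describe_bucket_check(status_code):
    if status_code == 200:
        return "bucket exists and is accessible"
    if status_code == 404:
        return "bucket does not exist"
    if status_code == 403:
        return "bucket exists but you don't have permission to access it"
    # Anything else (bad credentials, network errors, ...) is ambiguous;
    # reporting all of these as a permission problem is what misleads users.
    return "bucket state unknown; check credentials and connectivity"
```

On the affected machine, `aws s3api head-bucket --bucket enterprise-elk-dev-s3bucket` would surface the raw status that such a check sees.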

Could you review the credentials you are using?

If you have the chance to compile Functionbeat yourself, you can try modifying this code to report the exact error that is happening there.

I have opened an issue to improve this: https://github.com/elastic/beats/issues/23466

Hi @jsoriano, thanks for looking at this.

I think the error message should not assert something false. It could say "either the bucket doesn't exist, or you don't have access to it".

I ran the command from an EC2 instance that has assumed a role granting read/write/list on the S3 bucket. Regarding authN/authZ, the documentation says that 'you can set environment variables that contain your credentials'. It doesn't say that you must. I did not set them, because I was relying on the role the EC2 instance had assumed, which has the necessary authorization.

Are the env vars the only way? If so, the documentation should be updated to make that clear. That would also be a problem for me, as my employer prohibits me from obtaining an AWS access key.

It seems that the docs have been updated recently; there are more details now: Configure AWS functions | Functionbeat Reference [7.16] | Elastic

It says now:

The aws functions require AWS credentials configuration in order to make AWS API calls. Users can either use AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and/or AWS_SESSION_TOKEN, or use shared AWS credentials file. Please see AWS credentials options for more details.

So it seems that you need to use environment variables, or a credentials file.
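For reference, the shared credentials file the quoted docs mention lives at `~/.aws/credentials`. A minimal example (all key values below are placeholders):

```ini
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id     = AKIA_PLACEHOLDER
aws_secret_access_key = placeholder-secret
# aws_session_token   = placeholder-token   # only needed for temporary credentials
```

Equivalently, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and (for temporary credentials) `AWS_SESSION_TOKEN` can be exported as environment variables in the shell before running `./functionbeat deploy`.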

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.