We'd prefer to mount an AWS credentials file into our Docker containers at /root/.aws/credentials rather than specifying AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables.
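Something like this is what I have in mind (just a sketch in docker-compose form; the service name, image tag, and host-side path are illustrative):

```yaml
services:
  metricbeat:
    image: docker.elastic.co/beats/metricbeat:7.1.1   # tag just for illustration
    volumes:
      # mount the shared credentials file read-only instead of passing keys as env vars
      - ./aws-credentials:/root/.aws/credentials:ro
```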
I'd assume that the SDK would still pick this up, but Metricbeat config validation fails with:
```
2019-06-25T22:24:06.064Z ERROR instance/beat.go:877 Exiting: empty field accessing '0.access_key_id' (source:'/usr/share/metricbeat/modules.d/aws.yml')
Exiting: empty field accessing '0.access_key_id' (source:'/usr/share/metricbeat/modules.d/aws.yml')
```
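In other words, we'd like an aws.yml roughly along these lines to be accepted (sketch only; the metricset and period are just examples), with the module falling back to the credentials file:

```yaml
- module: aws
  period: 300s
  metricsets:
    - ec2
  # no access_key_id / secret_access_key here; let the SDK default chain
  # (env vars, ~/.aws/credentials, instance/task role) resolve credentials
```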
Is this a reasonable ask? I wanted to run it by the group here before opening a ticket on GitHub.
Yes, definitely. If the module doesn't support the normal credential provider chain that all AWS SDKs use and that all AWS users expect... I would be tempted to say that's a bug, because it is so ingrained in the AWS ecosystem.
By supporting the normal SDK behavior you get support for on-prem, on-prem with the AWS SSM agent installed, EC2 instances with an instance role, and ECS task roles.
Forcing one to supply static credentials when Metricbeat is running inside an EC2 instance with a role, for example... is weird.
I have to guess they already have that knowledge in Go, at least because of Functionbeat, which runs in a Lambda?
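For reference, this is roughly all it takes to lean on the default chain with aws-sdk-go (plain SDK usage, not Beats code; just to illustrate that the chain resolution is built in):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// With no explicit credentials, the SDK walks the default provider chain:
	// env vars -> shared credentials file (~/.aws/credentials) -> EC2/ECS role.
	sess, err := session.NewSession()
	if err != nil {
		log.Fatal(err)
	}

	creds, err := sess.Config.Credentials.Get()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("resolved credentials via:", creds.ProviderName)
}
```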
They also do it in the EC2 discovery plugin and the S3 snapshot repository plugin, although those are Java and from different internal teams at Elastic, I would assume.
In short: if there is not already an issue, do open one and link it here.