I'm running ECK 1.4.0 on AWS EKS. Elasticsearch and Kibana version is 7.11.2.
First off, awesome how it's all shaping up! Also including innovations around Fleet and the Elastic Agent. Really exciting stuff!!
A next step for my setup is the AWS integration setup. But I am somewhat lost in how to proceed.
Reading through the bits listed in Kibana and what I've found so far (also searching here), the details of how to get this set up are not clear to me. For example, does an agent need to be deployed to an EC2 instance that has the needed IAM role assigned to it?
Having examples of working integration settings and the relevant YAML configurations would also be really helpful.
Would be great if someone could share their install notes. I'd love to help this integration (and others) evolve, and I'm open to helping document the process further if that adds value.
Hello @wmeensvwpfs, you don't need to install an agent onto the actual EC2 instance. You can set up AWS credentials for the agent to use, install the AWS package into Elastic Agent, and then apply the credentials. Here is a link to how to set up the different kinds of AWS credentials for Metricbeat. Unfortunately, if you want to set up the AWS integration with elastic-agent running on Docker, an IAM role can't be used, because right now it requires a volume mount for the shared credentials file.
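To make the credential options concrete, here is a hedged sketch in the style of the Metricbeat AWS module configuration (all IDs, profile names, and paths below are placeholders, and the exact metricset settings will depend on what you collect):

```yaml
# Sketch of the credential styles the Metricbeat AWS module supports.
# Pick ONE style per module block; values here are placeholders.
- module: aws
  period: 300s
  metricsets: ["cloudwatch"]

  # Option 1: static access keys (often disallowed by compliance policies)
  #access_key_id: "EXAMPLE_KEY_ID"
  #secret_access_key: "EXAMPLE_SECRET"

  # Option 2: shared credentials file + named profile. This is the style
  # that needs a volume mount when running in Docker,
  # e.g. `docker run -v $HOME/.aws:/root/.aws ...`
  #shared_credential_file: /root/.aws/credentials
  #credential_profile_name: my-profile

  # Option 3: assume an IAM role. The base credentials come from the
  # environment (e.g. an EC2 instance profile), no keys in the config.
  role_arn: "arn:aws:iam::123456789012:role/example-role"
```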
I was first looking in the ECK documentation and saw the bits about Elastic Agent still being in its early stages, so I decided not to pursue that. Hence my thought of needing to use an EC2 instance as a harvest system.
The use case I have is somewhat more complex (I think).
The plan is to pull in CloudWatch logs and the like from multiple AWS accounts, and place it all in one Elasticsearch cluster - the one I have provisioned with ECK.
For compliance reasons I cannot make use of an AWS user access key and token - the valid way would be using an IAM role.
Hence I am thinking an EC2 instance would need to be deployed to each AWS account to pull the relevant log and metric info. The EC2 instance in this case would have the correct IAM profile attached to be able to read from those sources (i.e. no user access key ID and secret needed).
As I currently only have experience with the 'classic' ELK/Elastic Stack, it's unclear to me what Elastic Agent configurations should look like.
Would be super good to see some configuration snippets that work with IAM roles only as means to authenticate.
Thanks for all the information! For Elastic Agent, configuration is defined in a policy. For example, when you add the AWS integration, you can choose a role ARN instead of a credential profile name in the settings.
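For anyone doing this outside the Fleet UI, a hedged sketch of what the resulting standalone agent policy fragment might look like (field names are approximate - the Fleet UI generates the exact structure - and the role ARN is a placeholder):

```yaml
inputs:
  - type: aws/metrics
    data_stream:
      namespace: default
    streams:
      - metricset: cloudwatch
        data_stream:
          dataset: aws.cloudwatch
        period: 300s
        # Role ARN instead of credential_profile_name; the agent's base
        # credentials (e.g. an EC2 instance profile) are used to assume it.
        role_arn: "arn:aws:iam::123456789012:role/example-monitoring-role"
```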
Awesome! I had some time this weekend to try to get this running. I had some hoops to jump through, as ECK has been set up with default self-signed certificates (plus some configuration having to do with the NLB I have fronting EKS), but I have the first AWS data streams coming into Kibana!
No credentials needed other than the Fleet enrollment token and an IAM role, containing the needed permissions, attached to the EC2 instance running the Fleet agent.
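For completeness, a hedged sketch of the kind of permissions such an instance role might carry for CloudWatch logs and metrics (the action list is illustrative, not exhaustive - trim or extend it to match the integrations you enable, and scope `Resource` tighter where you can):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics",
        "logs:DescribeLogGroups",
        "logs:FilterLogEvents",
        "sts:GetCallerIdentity",
        "sts:AssumeRole"
      ],
      "Resource": "*"
    }
  ]
}
```

The `sts:AssumeRole` entry is only needed for the cross-account case, where this instance role assumes a role in each target account.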
Also, I updated to 7.12 and am happy to see the AWS Fleet integration has jumped from Experimental to Beta!
This was just a rough test setup, will be working this out further.