@Luca_Belluccini
The IP address that gets assigned as the Beat name is confusing; I can't tell what it actually is. I can't find any documentation on where this IP comes from, and it has nothing to do with our VPC as far as our IP ranges go.
2020-04-22T23:03:15.349Z INFO [publisher] pipeline/module.go:110 Beat name: 169.x.x.x
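For what it's worth, I haven't set the general name option in functionbeat.yml, so I assume the Beat name is falling back to whatever hostname/address the Lambda environment reports. A minimal sketch of what I could set explicitly (the value is just a placeholder):

# functionbeat.yml - general settings
name: "my-functionbeat"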
I have created flow logs that capture inbound/outbound traffic on our VPC and filtered them down to two IPs here. One IP I believe to be my Lambda and the other is the IP of my EC2 instance. As you can see, nothing is getting blocked...
2 097135049942 eni-551xxxxxvpc 172.xx.xx.lambda 172.xx.xx.ec2 44405 9200 6 1 60 1587596578 1587596600 ACCEPT OK
2 097135049942 eni-551xxxxxvpc 172.xx.xx.lambda 172.xx.xx.ec2 7797 9200 6 1 60 1587596578 1587596600 ACCEPT OK
2 097135049942 eni-551xxxxxvpc 172.xx.xx.lambda 172.xx.xx.ec2 46350 9200 6 1 60 1587596578 1587596600 ACCEPT OK
2 097135049942 eni-551xxxxxvpc 172.xx.xx.lambda 172.xx.xx.ec2 3443 9200 6 1 60 1587596578 1587596600 ACCEPT OK
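In case it matters, this is roughly how I've been double-checking the security group on the Elasticsearch instance (the group ID is a placeholder); it lists the ingress rules so I can confirm port 9200 is open to the Lambda's source:

aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query "SecurityGroups[].IpPermissions" --output table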
I have also set up a basic license...
[ec2-user@ip-172-xx-xx-xx ~]$ curl -XGET 'http://172.xx.xx.xx:9200/_license?pretty'
{
  "license" : {
    "status" : "active",
    "uid" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "type" : "basic",
    "issue_date" : "2020-04-22T18:20:18.697Z",
    "issue_date_in_millis" : 1587579618697,
    "max_nodes" : 1000,
    "issued_to" : "elasticsearch",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}
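To double-check that Elasticsearch is actually bound to the private IP rather than only localhost, I can run something like this on the EC2 instance:

sudo ss -tlnp | grep 9200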
Error "no route to host"
I'm now receiving new errors that indicate "no route to host". That IP is literally the IP of the EC2 instance. I don't understand this, given that the flow logs above showed zero blocked inbound/outbound traffic on our VPC. All three configs (elasticsearch.yml, kibana.yml, and functionbeat.yml) point to my EC2 IP.
2020-04-23T00:36:12.165Z INFO pipeline/output.go:95 Connecting to backoff(elasticsearch(http://172.xx.xx.ec2:9200))
2020-04-23T00:36:12.165Z DEBUG [elasticsearch] elasticsearch/client.go:733 ES Ping(url=http://172.xx.xx.ec2:9200)
2020-04-23T00:36:12.186Z DEBUG [elasticsearch] elasticsearch/client.go:737 Ping request failed with: Get http://172.xx.xx.ec2:9200: dial tcp 172.xx.xx.ec2:9200: connect: no route to host
2020-04-23T00:36:13.692Z ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://172.xx.xx.ec2:9200)): Get http://172.xx.xx.ec2:9200: dial tcp 172.xx.xx.ec2:9200: connect: no route to host
2020-04-23T00:36:13.692Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://172.xx.xx.ec2:9200)) with 1 reconnect attempt(s)
The Functionbeat Lambda keeps generating these logs over and over for no apparent reason; it just retries the connection repeatedly, and I don't understand why.
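For context, the relevant parts of my functionbeat.yml look roughly like this (names, IDs, and the log group are placeholders); I'm assuming the virtual_private_cloud block is what attaches the function to our VPC subnets/security groups so it can reach the EC2 private IP:

functionbeat.provider.aws.functions:
  - name: functionbeat-lambda
    enabled: true
    type: cloudwatch_logs
    triggers:
      - log_group_name: /aws/lambda/my-source-log-group
    virtual_private_cloud:
      security_group_ids: ["sg-0123456789abcdef0"]
      subnet_ids: ["subnet-0123456789abcdef0"]

output.elasticsearch:
  hosts: ["http://172.xx.xx.ec2:9200"]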
Error "failing to receive disposition for aws"
Secondly, the Functionbeat Lambda logs in CloudWatch also show the following error, which says add_cloud_metadata fails to collect metadata for provider aws.
2020-04-23T00:36:17.356Z DEBUG [filters] add_cloud_metadata/providers.go:162 add_cloud_metadata: received disposition for aws after 1.011768438s. result=[provider:aws, error=failed requesting aws metadata: Get http://169.254.169.254/2014-02-25/dynamic/instance-identity/document: dial tcp 169.254.169.254:80: connect: connection refused, metadata=
{}
]
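The processors section of my functionbeat.yml is roughly the stock one, so I assume the add_cloud_metadata processor is what's trying to call the EC2 metadata endpoint (169.254.169.254) from inside the Lambda:

processors:
  - add_cloud_metadata: ~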
However, when I deployed my Functionbeat Lambda via ./functionbeat -v -e -d "*" deploy functionbeat-lambda
it said the complete opposite, indicating that it succeeded in detecting a hosting provider...
2020-04-23T00:32:05.813Z INFO add_cloud_metadata/add_cloud_metadata.go:93 add_cloud_metadata: hosting provider type detected as aws, metadata={"account":{"id":"xxxxxxxxxxxxx"},"availability_zone":"us-gov-xxxxx","image":{"id":"ami-6efdxxxx"},"instance":{"id":"i-0c7616cxxxxxx"},"machine":{"type":"t2.medium"},"provider":"aws","region":"us-gov-xxxx"}