EC2 discovery can't see other node

This topic seems to get posted again and again, and I've gone through all the other answers with no luck.

I'm using version 7.2.0. I'm trying to do a plain-vanilla install on EC2. I have two nodes in the same VPC, same subnet, same security group, built from the same AMI.

On the command line I can do "aws ec2 describe-instances" and see the other instance just fine, which suggests that the IAM user is properly configured.

My elasticsearch.yml:
discovery.seed_providers: ec2
discovery.ec2.groups: my-security-group
network.host: [_site_, _local_, _eth0_]

I've added the aws access key and secret key to the keystore per the instructions here:
https://www.elastic.co/guide/en/elasticsearch/plugins/current/discovery-ec2-usage.html
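For reference, the commands I ran were along these lines (run on each node; the keystore prompts for the values):

```shell
# Add the AWS credentials used by the discovery-ec2 plugin to the
# Elasticsearch keystore. Setting names are from the discovery-ec2 docs;
# adjust the path if your install isn't the default package layout.
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add discovery.ec2.access_key
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add discovery.ec2.secret_key
```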

I'm starting the node using systemd. It starts up just fine, and the log contains this message:
using discovery type [zen] and seed hosts providers [settings, ec2]

Later, it says "elected-as-master", and "nodes joined" includes only itself.

What am I missing? How do I debug this?

Can you share the exact log messages from both nodes, from startup until the elected-as-master message?

Thanks for the response. I'm giving up. Way too many moving parts and configuration requirements for AWS. I'm now trying to do discovery with hardcoded IP addresses, and that isn't working either. I'll start another thread for that one.

Noted. It will probably help to include a similar set of logs from both nodes in your new thread too.
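For what it's worth, static discovery on 7.x generally needs both a seed-host list and (for the first cluster bootstrap) the initial master nodes in elasticsearch.yml. A rough sketch, where the IPs and node names are placeholders for your own:

```yaml
# elasticsearch.yml on each node - unicast discovery with hardcoded IPs
# 10.0.0.x addresses and node names below are placeholders
discovery.seed_hosts: ["10.0.0.1", "10.0.0.2"]
cluster.initial_master_nodes: ["node-1", "node-2"]
network.host: _site_
```

cluster.initial_master_nodes should be removed once the cluster has formed for the first time.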

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.