Has anyone ever successfully used the Google Cloud discovery plugin? After a week of testing and debugging I can't get it working.
I am using the latest 7.x release, installed on Debian 10 following all the setup/configuration instructions.
My elasticsearch.yml file:
path:
  data: /var/lib/elasticsearch
  logs: /var/log/elasticsearch
cloud:
  gce:
    project_id: my-project-id
    zone: ["us-east1-b", "us-east1-c", "us-east1-d"]
discovery:
  seed_providers: gce
  gce:
    tags: elasticsearch # this tag should be set on the ES instances
node:
  name: ${HOSTNAME}
cluster:
  name: search
  initial_master_nodes:
    - search-node-1
network:
  host: ["_gce_", "_local_"]
http:
  port: 9200
transport:
  port: 9300
When I try to create a 3-node cluster (each node has the elasticsearch tag as specified), with one node in each zone (hostnames search-node-1, search-node-2, and search-node-3, all using this same config file, each in a separate us-east1 zone matching the configuration), it always comes up as 3 separate clusters, each one being its own master. I have verified the networking config (routes and firewall rules), and the nodes are all reachable from each other over port 9300.
I have also enabled and correctly configured logging, but there are no discovery logs indicating success or error; nothing appears aside from this line:
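For example, this is roughly how I checked transport-layer reachability from each node to the others (a sketch using my hostnames; run from search-node-1, and similarly from the other two):

```shell
# Sketch: confirm the transport port (9300) is open on the other
# nodes. Hostnames are the ones from my setup above.
for host in search-node-2 search-node-3; do
  nc -vz "$host" 9300
done
```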
[2021-01-12T16:20:49,102][INFO ][o.e.c.g.GceInstancesServiceImpl] [search-node-1] starting GCE discovery service
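In case it helps, I also tried turning up discovery logging with something along these lines in elasticsearch.yml (the second logger name is my assumption, inferred from the package in the log line above):

```
# Verbose discovery logging; the cloud.gce logger name is inferred
# from o.e.c.g.GceInstancesServiceImpl in the log line above.
logger.org.elasticsearch.discovery: TRACE
logger.org.elasticsearch.cloud.gce: TRACE
```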
Sorry, could you be more specific? I did read through and follow those instructions. Are you saying I need to set the discovery.seed_hosts even when using the Google Cloud discovery? If so, what is the point of the discovery?
No, discovery.seed_hosts is almost completely unrelated to bootstrapping: it's only mentioned once on that page, in the bit that says you have to set something to suppress auto-bootstrapping, but you're setting discovery.seed_providers so you're good.
Your bootstrap config has been wrong at some point in the past. Even if it's right now, the original config persists across restarts, so you need to wipe all the nodes and start again.
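Concretely, something like this on each node, assuming the systemd service from the Debian package and the data path from your config:

```shell
# Sketch: wipe persisted cluster state so bootstrapping starts fresh.
# WARNING: this deletes ALL data on the node, not just cluster state.
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/*
sudo systemctl start elasticsearch
```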
I've done that several times. I'm trying to build a baked image with Packer. In the baking process I start Elasticsearch on the Packer VM to do some preliminary configuration (mainly just setting the built-in user passwords with elasticsearch-setup-passwords). Are you saying that is impossible with Elasticsearch?
Oh I see. It's a pretty unusual way to do these things with a stateful service like Elasticsearch; it probably makes more sense for stateless things. If you really want to do this you'll need to do something like bootstrapping a three-node cluster before the baking process, then bake three separate images for the three nodes.
I don't agree that baked images are only for stateless workloads (and from what I've seen the entire industry agrees with me on that; in fact, baked ES images are offered in most cloud marketplaces). In any case, it seems I can't do any initialization that requires starting the service during image baking, as it will cause problems with the bootstrapping, so I will move the password setting out. Is there a way to set the elastic user password without using elasticsearch-setup-passwords (can I set it via the file realm or some other way)?
AFAIK they're all baked as empty nodes, before bootstrapping, but I should have been more precise that this is what I meant. If you know of any services offering images that aren't started afresh, please link them here so we can investigate further; I think that would be quite a dangerous way to run things.
I don't think so, no, I think setting passwords needs a running cluster.
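Once the cluster has formed, though, you could set it over the security API instead of the interactive tool, something like this (host and the new password are placeholders; adjust for your TLS setup):

```shell
# Sketch: change the elastic user's password via the security API
# on a running cluster. curl will prompt for the current password.
curl -u elastic -X POST "http://localhost:9200/_security/user/elastic/_password" \
  -H 'Content-Type: application/json' \
  -d '{"password": "NEW_PASSWORD_HERE"}'
```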