OK, no problem.
My architecture is two single-node clusters deployed on AWS EC2 instances in the same VPC. I have confirmed that the security groups and networking allow connectivity between the two instances on the relevant ports (9200 and 9300).
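For reference, the connectivity check between the two hosts was roughly along these lines (placeholder IP):

```
# run from the cluster_1 host against the cluster_2 host, and vice versa
nc -zv <cluster_2_ip> 9200   # REST port
nc -zv <cluster_2_ip> 9300   # transport port, which the remote cluster seeds point at
```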
With this simple architecture, I wanted to first prove out the cross-cluster search capability, i.e. referencing remote indexes from one of my clusters (so cluster_1 can reach into cluster_2), before proceeding with larger, multi-node clusters.
With this in mind, I am just running single v7.10.0 container deployments on the two-node setup outlined above. So, on each node I have Elasticsearch and Kibana containers on a Docker network, using EBS-backed Docker volumes for data storage. Each Elasticsearch cluster is being fed by a simple Metricbeat deployment, just to have some data to work with.
All in all, this involved the following general commands:
* docker pull docker.elastic.co/kibana/kibana:7.10.0
* docker pull docker.elastic.co/elasticsearch/elasticsearch:7.10.0
* docker network create elastic
* docker volume create elasticsearch
* docker volume create kibana
* docker run -d --name elasticsearch --network elastic -p 9200:9200 -p 9300:9300 --env "discovery.type=single-node" --env "ES_JAVA_OPTS=-Xms512m -Xmx512m" --env "bootstrap.memory_lock=true" --env "ELASTIC_PASSWORD=<cluster_pw>" --env "xpack.security.enabled=true" --mount source=elasticsearch,target=/usr/share/elasticsearch/data docker.elastic.co/elasticsearch/elasticsearch:7.10.0
* docker run -d --name kibana --net elastic -p 5601:5601 --env "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" --env "ELASTICSEARCH_USERNAME=elastic" --env "ELASTICSEARCH_PASSWORD=<cluster_pw>" --mount source=kibana,target=/usr/share/kibana/config docker.elastic.co/kibana/kibana:7.10.0
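For completeness, the kind of sanity check I can run on each node to confirm everything came up (same password placeholder as above):

```
# confirm both containers are running on this node
docker ps --format "{{.Names}}: {{.Status}}"

# confirm the single-node cluster answers and reports its health
curl -s -u elastic:<cluster_pw> "http://localhost:9200/_cluster/health?pretty"
```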
With these general configurations running, I have successfully logged into Kibana on both clusters with the built-in elastic user (each cluster using a slightly different temporary password) and can see the separate Metricbeat indexes on each cluster. This is the point where I am stuck.
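The Metricbeat indexes that Kibana is showing me can also be listed directly, e.g.:

```
# run on either node; lists the local Metricbeat indexes
curl -s -u elastic:<cluster_pw> "http://localhost:9200/_cat/indices/metricbeat-*?v"
```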
First, I used the DevTools on cluster_1 to register cluster_2 as a remote cluster:
* PUT /_cluster/settings {"persistent":{"cluster":{"remote":{"cluster_2":{ "seeds":["<cluster_2_ip>:9300"]}}}}}
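As far as I understand, that setting should be echoed back by a plain settings lookup on cluster_1, even if the connection itself isn't working, e.g.:

```
# the stored remote cluster definition should appear under "persistent"
curl -s -u elastic:<cluster_pw> "http://localhost:9200/_cluster/settings?pretty"
```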
Then, on cluster_2, I used the DevTools to create the appropriate remote-search role for the cross-cluster searches coming from cluster_1:
* POST /_security/role/remote-search {"indices": [{"names": ["target-indices"],"privileges": ["read","read_cross_cluster"]}]}
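The role can be read back on cluster_2 with something like the following, just to confirm it exists:

```
# run on the cluster_2 node
curl -s -u elastic:<cluster_pw> "http://localhost:9200/_security/role/remote-search?pretty"
```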
Back on cluster_1, I used the DevTools to create the dummy remote-search role:
* POST /_security/role/remote-search {}
Then, still on cluster_1, I created the appropriate user with the aforementioned role:
* POST /_security/user/cross-search-user {"password":"<temp_pw>","roles":["remote-search"]}
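To double-check the cluster_1 side, I can read back the dummy role and the user, and confirm that the new user authenticates, roughly:

```
# run on the cluster_1 node
curl -s -u elastic:<cluster_pw> "http://localhost:9200/_security/role/remote-search?pretty"
curl -s -u elastic:<cluster_pw> "http://localhost:9200/_security/user/cross-search-user?pretty"

# the new user should authenticate and carry the remote-search role
curl -s -u cross-search-user:<temp_pw> "http://localhost:9200/_security/_authenticate?pretty"
```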
Finally, after all the configuration, I thought the following command would show the remote connection, and that I could then use the cluster_2 prefix to reference the remote indexes:
* GET /_remote/info
But this returned nothing... can anyone spot the issue? Any help is greatly appreciated; I am really struggling with this.
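For reference, based on the cross-cluster search docs, this is roughly the shape of response I was expecting from that call, plus the kind of query I'm ultimately hoping to run against the remote Metricbeat indexes (placeholders as above):

```
# on cluster_1: I expected cluster_2 to be listed here as a connected remote
curl -s -u elastic:<cluster_pw> "http://localhost:9200/_remote/info?pretty"
# expected something along the lines of:
# {
#   "cluster_2" : {
#     "connected" : true,
#     "seeds" : [ "<cluster_2_ip>:9300" ],
#     "num_nodes_connected" : 1,
#     ...
#   }
# }

# ...and the end goal: searching the remote indexes via the cluster_2: prefix
curl -s -u cross-search-user:<temp_pw> "http://localhost:9200/cluster_2:metricbeat-*/_search?pretty"
```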