Cross Cluster Search using Docker-based Elasticsearch

Hey folks,

I am setting up a test cluster to explore how cross-cluster search works. I have one physical Elasticsearch 5.4.0 cluster consisting of four machines named elastic1a through elastic1d (call it physical).

I just finished setting up a small Docker cluster consisting of three Elasticsearch 5.4.0 containers, a Kibana container, and a Logstash container. Call this cluster "docker". Only one of these Elasticsearch containers binds ports 9200 and 9300 on the Ethernet interface of the host machine, "docker_host"; the other two Elasticsearch containers use only the private Docker network. (Nice work on these Docker images, by the way.)
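
For reference, the relevant part of my docker-compose.yml currently looks roughly like this (service names are just illustrative of the layout I described above):

version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    ports:
      - "9200:9200"   # HTTP published on docker_host
      - "9300:9300"   # transport published on docker_host
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    networks:
      - esnet         # private Docker network only
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    networks:
      - esnet         # private Docker network only
networks:
  esnet: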

Everything is working fine except cross-cluster search. I used these settings on the physical cluster:

PUT _cluster/settings
{
  "persistent": {
    "search": {
      "remote": {
        "physical": {
          "seeds": [
            "elastic1a:9300"
          ]
        },
        "docker": {
          "seeds": [
            "docker_host:9300"
          ]
        }
      }
    }
  }
}

I can successfully do a remote cluster search on the "physical" cluster from the Kibana console on that cluster.
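
For example, something like this works from that console ("logstash-*" is just an example index pattern):

GET physical:logstash-*/_search
{
  "query": {
    "match_all": {}
  }
}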

When I try to issue a query against the "docker" cluster, I infer from the error message that I have to expose the transport ports for these containers on the Docker host. Here's the message:

{
  "error": {
    "root_cause": [
      {
        "type": "connect_transport_exception",
        "reason": "[F1jb6SM][172.18.0.2:9300] connect_timeout[30s]"
      }
    ],
    "type": "transport_exception",
    "reason": "unable to communicate with remote cluster [docker]",
    "caused_by": {
      "type": "connect_transport_exception",
      "reason": "[F1jb6SM][172.18.0.2:9300] connect_timeout[30s]",
      "caused_by": {
        "type": "annotated_no_route_to_host_exception",
        "reason": "No route to host: 172.18.0.2/172.18.0.2:9300",
        "caused_by": {
          "type": "no_route_to_host_exception",
          "reason": "No route to host"
        }
      }
    }
  },
  "status": 500
}

The Docker logs indicate that 172.18.0.0/24 is the internal network used between the Docker containers, so clearly one of these addresses is being handed back to the physical cluster during the query, and the physical cluster can't reach it because it's on the private subnet.
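
(I assume I could confirm what each node advertises with something like this against the docker cluster, to check whether the transport publish addresses are indeed the 172.18.0.x ones:)

GET _nodes/transport?filter_path=nodes.*.transport.publish_address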

I presume I can solve this by editing docker-compose.yml to expose the transport port of each node on the host (say, as 9300, 9301, and 9302), but I wondered whether there are any settings that would let me force all cross-cluster search traffic to go only through the first container on port 9300.
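
If exposing the ports is indeed the answer, here is a rough, untested sketch of what I have in mind: publish each container's transport port on a distinct host port and (assuming I can pass these settings as environment variables, as the image docs seem to suggest) point transport.publish_host/transport.publish_port at the routable host address:

services:
  elasticsearch1:
    environment:
      - transport.publish_host=docker_host
      - transport.publish_port=9300
    ports:
      - "9300:9300"
  elasticsearch2:
    environment:
      - transport.publish_host=docker_host
      - transport.publish_port=9301
    ports:
      - "9301:9300"   # host port 9301 -> container port 9300
  elasticsearch3:
    environment:
      - transport.publish_host=docker_host
      - transport.publish_port=9302
    ports:
      - "9302:9300"   # host port 9302 -> container port 9300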

If not, what is the minimal set of parameters I need to change on each container to enable all of them to participate in cross-cluster search?

Thanks in advance for your advice!
