Multiple Elasticsearch Docker containers per host?

Hi,

I want to set up at least 2 Elasticsearch Docker containers per host so that I have a 4-node cluster on 2 servers on EC2.

My idea, on a per-host basis, would be:

  • Container ES1 to run with ports eth0:9200/eth0:9300
  • Container ES2 to run with ports eth0:9201/eth0:9301

I've been playing with many host/port options to make it work, but so far I don't have a cluster with 2 nodes.

Any help is appreciated. Here is my docker-compose file:

kibana:
    build: ./build-kibana/
    links:
        - es
    ports:
        - "5601:5601"
    environment:
        - ELASTICSEARCH_URL=http://es:9200
es:
    build: ./build-es/
    command: elasticsearch -Des.network.publish_host=0.0.0.0 -Des.http.port=9200 -Des.http.publish_port=9200 -Des.transport.host=0.0.0.0 -Des.transport.tcp.port=9300 -Des.transport.publish_port=9300 -Des.plugin.mandatory=cloud-aws -Des.cloud.aws.access_key=XXX  -Des.cloud.aws.secret_key=XXX -Des.cloud.aws.ec2.protocol=http  -Des.cloud.aws.region=eu-west  -Des.discovery.type=ec2 -Des.discovery.zen.ping.multicast.enabled=false -Des.discovery.ec2.tag.Description=elasticsearch-integration -Des.discovery.ec2.tag.Environnement=int -Des.discovery.zen.minimum_master_nodes=2
    ports:
        - "9200:9200"
        - "9300:9300"
es2:
    build: ./build-es/
    command: elasticsearch -Des.network.publish_host=0.0.0.0 -Des.http.port=9201 -Des.http.publish_port=9201 -Des.transport.tcp.port=9301 -Des.transport.host=0.0.0.0 -Des.transport.publish_port=9301 -Des.plugin.mandatory=cloud-aws -Des.cloud.aws.access_key=XXXX  -Des.cloud.aws.secret_key=XXXX -Des.cloud.aws.ec2.protocol=http  -Des.cloud.aws.region=eu-west  -Des.discovery.type=ec2 -Des.discovery.zen.ping.multicast.enabled=false -Des.discovery.ec2.tag.Description=elasticsearch-integration -Des.discovery.ec2.tag.Environnement=int -Des.discovery.zen.minimum_master_nodes=2
    ports:
        - "9201:9201"
        - "9301:9301"

I can connect to each node with curl http://localhost:9200/ or curl http://localhost:9201/, but the nodes don't see each other.

My Kibana image is just Kibana 4.5 with the Marvel and Sense plugins.
My ES image is just elasticsearch:2 with the marvel-agent, license, cloud-aws and head plugins, plus EXPOSE for ports 9201 & 9301.
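
To be concrete, the ES image is roughly the following Dockerfile (a sketch only; the install commands assume the standard ES 2.x plugin tool that ships with the elasticsearch:2 image):

FROM elasticsearch:2
# install the plugins mentioned above (names as used by the 2.x plugin tool)
RUN plugin install license \
 && plugin install marvel-agent \
 && plugin install cloud-aws \
 && plugin install mobz/elasticsearch-head
# ports for the second instance (9200/9300 are already exposed by the base image)
EXPOSE 9201 9301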

In the logs, I can see things such as the following on both nodes:

es_1     | [2016-05-13 10:19:30,895][WARN ][discovery.zen.ping.unicast] [Mahkizmo] failed to send ping to [{#zen_unicast_2#}{::1}{[::1]:9300}]
es_1     | SendRequestTransportException[[][[::1]:9300][internal:discovery/zen/unicast]]; nested: NodeNotConnectedException[[][[::1]:9300] Node not connected];

or:
 
es2_1    | Caused by: NodeNotConnectedException[[][[::1]:9301] Node not connected]
es2_1    |      at org.elasticsearch.transport.netty.NettyTransport.nodeChannel(NettyTransport.java:1132)
es2_1    |      at org.elasticsearch.transport.netty.NettyTransport.sendRequest(NettyTransport.java:819)
es2_1    |      at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:329)
es2_1    |      ... 12 more
es2_1    | [2016-05-13 10:21:21,619][WARN ][discovery.zen.ping.unicast] [Gideon] failed to send ping to [{#zen_unicast_1#}{127.0.0.1}{127.0.0.1:9301}]
es2_1    | SendRequestTransportException[[][127.0.0.1:9301][internal:discovery/zen/unicast]]; nested: NodeNotConnectedException[[][127.0.0.1:9301] Node not connected];

I don't see why it tries to use localhost.
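
What I imagine is needed is to publish the host's private IP and the host-mapped ports instead of 0.0.0.0, so that the other nodes get a reachable address. A sketch only, not tested: HOST_PRIVATE_IP is a placeholder I would still have to inject per host, it assumes a Compose version that supports ${...} substitution, and the cloud-aws/discovery flags stay as in the file above.

es:
    build: ./build-es/
    # HOST_PRIVATE_IP: placeholder for this host's private IP, injected per host
    # publish a reachable address/port pair instead of 0.0.0.0
    command: >
        elasticsearch
        -Des.network.publish_host=${HOST_PRIVATE_IP}
        -Des.http.port=9200
        -Des.http.publish_port=9200
        -Des.transport.tcp.port=9300
        -Des.transport.publish_port=9300
    ports:
        - "9200:9200"
        - "9300:9300"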

Thanks for your help,
Nicolas

I ended up not using Docker for this so far. I think that running multiple instances across multiple servers is only feasible with Swarm.

Due to lack of time, I'll postpone the Docker integration.

The simplest way to achieve discoverability between ES containers on the same host is to run them with --net=host. The example below is executed on an ECS instance. The image is vanilla elasticsearch.

Starting the first node:

$ docker run --net=host 088c21e58368 elasticsearch -Des.cluster.name=fifth-sense -Des.network.host=0.0.0.0 -Des.node.name=first-node

[first-node] version[2.4.1], pid[1], build[c67dc32/2016-09-27T18:57:55Z]
[first-node] initializing ...
[first-node] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[first-node] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvda1)]], net usable_space [6.8gb], net total_space [7.7gb], spins? [possibly], types [ext4]
[first-node] heap size [1007.3mb], compressed ordinary object pointers [true]
[first-node] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[first-node] initialized
[first-node] starting ...
[first-node] publish_address {10.0.0.238:9301}, bound_addresses {[::]:9301}
[first-node] fifth-sense/tVGMuDlaTkGf6FjU3ecdug
[first-node] new_master {first-node}{tVGMuDlaTkGf6FjU3ecdug}{10.0.0.238}{10.0.0.238:9301}, reason: zen-disco-join(elected_as_master, [0] joins received)
[first-node] publish_address {10.0.0.238:9201}, bound_addresses {[::]:9201}
[first-node] started
[first-node] recovered [0] indices into cluster_state
[first-node] added {{first-node}{jOTyiB46T1KRuWteeJRi9w}{10.0.0.238}{10.0.0.238:9302},}, reason: zen-disco-join(join from node[{first-node}{jOTyiB46T1KRuWteeJRi9w}{10.0.0.238}{10.0.0.238:9302}])
[first-node] removed {{first-node}{jOTyiB46T1KRuWteeJRi9w}{10.0.0.238}{10.0.0.238:9302},}, reason: zen-disco-node-left({first-node}{jOTyiB46T1KRuWteeJRi9w}{10.0.0.238}{10.0.0.238:9302}), reason(left)
[first-node] added {{second-node}{3sT3Zbi4RYy54ijuXQ6McQ}{10.0.0.238}{10.0.0.238:9302},}, reason: zen-disco-join(join from node[{second-node}{3sT3Zbi4RYy54ijuXQ6McQ}{10.0.0.238}{10.0.0.238:9302}])

Starting the second node:

$ docker run --net=host 088c21e58368 elasticsearch -Des.cluster.name=fifth-sense -Des.network.host=0.0.0.0 -Des.node.name=second-node

[second-node] version[2.4.1], pid[1], build[c67dc32/2016-09-27T18:57:55Z]
[second-node] initializing ...
[second-node] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[second-node] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvda1)]], net usable_space [6.8gb], net total_space [7.7gb], spins? [possibly], types [ext4]
[second-node] heap size [1007.3mb], compressed ordinary object pointers [true]
[second-node] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[second-node] initialized
[second-node] starting ...
[second-node] publish_address {10.0.0.238:9302}, bound_addresses {[::]:9302}
[second-node] fifth-sense/3sT3Zbi4RYy54ijuXQ6McQ
[second-node] detected_master {first-node}{tVGMuDlaTkGf6FjU3ecdug}{10.0.0.238}{10.0.0.238:9301}, added {{first-node}{tVGMuDlaTkGf6FjU3ecdug}{10.0.0.238}{10.0.0.238:9301},}, reason: zen-disco-receive(from master [{first-node}{tVGMuDlaTkGf6FjU3ecdug}{10.0.0.238}{10.0.0.238:9301}])
[second-node] publish_address {10.0.0.238:9202}, bound_addresses {[::]:9202}
[second-node] started

Check the cluster status by querying any node which is part of it:

$ curl -XGET 'http://localhost:9201/_cluster/health?pretty=true'
{
  "cluster_name" : "fifth-sense",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
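
For a quicker overview, _cat/nodes lists the nodes that have joined; both first-node and second-node should appear:

$ curl -XGET 'http://localhost:9201/_cat/nodes?v'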

Ports are automatically incremented when new instances are run: each node simply binds to the next free port in the default HTTP (9200+) and transport (9300+) ranges.
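
If predictable ports per container are preferred over the auto-increment behaviour, they can be pinned explicitly. A minimal sketch under the same --net=host setup (first node on 9200/9300, second on 9201/9301, still relying on the default localhost unicast ping for the nodes to find each other):

# first node: fixed HTTP/transport ports
$ docker run --net=host 088c21e58368 elasticsearch -Des.cluster.name=fifth-sense -Des.network.host=0.0.0.0 -Des.node.name=first-node -Des.http.port=9200 -Des.transport.tcp.port=9300

# second node: next pair of fixed ports
$ docker run --net=host 088c21e58368 elasticsearch -Des.cluster.name=fifth-sense -Des.network.host=0.0.0.0 -Des.node.name=second-node -Des.http.port=9201 -Des.transport.tcp.port=9301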