Multiple Elasticsearch Docker containers per host?

The simplest way to achieve discoverability between Elasticsearch containers on the same host is to run them with --net=host. With host networking every node is reachable on 127.0.0.1, so the default zen unicast discovery finds the other nodes on localhost without extra configuration. The example below is executed on an ECS instance; the image is vanilla elasticsearch.

Starting the first node:

$ docker run --net=host 088c21e58368 elasticsearch -Des.cluster.name=fifth-sense -Des.network.host=0.0.0.0 -Des.node.name=first-node

[first-node] version[2.4.1], pid[1], build[c67dc32/2016-09-27T18:57:55Z]
[first-node] initializing ...
[first-node] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[first-node] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvda1)]], net usable_space [6.8gb], net total_space [7.7gb], spins? [possibly], types [ext4]
[first-node] heap size [1007.3mb], compressed ordinary object pointers [true]
[first-node] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[first-node] initialized
[first-node] starting ...
[first-node] publish_address {10.0.0.238:9301}, bound_addresses {[::]:9301}
[first-node] fifth-sense/tVGMuDlaTkGf6FjU3ecdug
[first-node] new_master {first-node}{tVGMuDlaTkGf6FjU3ecdug}{10.0.0.238}{10.0.0.238:9301}, reason: zen-disco-join(elected_as_master, [0] joins received)
[first-node] publish_address {10.0.0.238:9201}, bound_addresses {[::]:9201}
[first-node] started
[first-node] recovered [0] indices into cluster_state
[first-node] added {{first-node}{jOTyiB46T1KRuWteeJRi9w}{10.0.0.238}{10.0.0.238:9302},}, reason: zen-disco-join(join from node[{first-node}{jOTyiB46T1KRuWteeJRi9w}{10.0.0.238}{10.0.0.238:9302}])
[first-node] removed {{first-node}{jOTyiB46T1KRuWteeJRi9w}{10.0.0.238}{10.0.0.238:9302},}, reason: zen-disco-node-left({first-node}{jOTyiB46T1KRuWteeJRi9w}{10.0.0.238}{10.0.0.238:9302}), reason(left)
[first-node] added {{second-node}{3sT3Zbi4RYy54ijuXQ6McQ}{10.0.0.238}{10.0.0.238:9302},}, reason: zen-disco-join(join from node[{second-node}{3sT3Zbi4RYy54ijuXQ6McQ}{10.0.0.238}{10.0.0.238:9302}])

Starting the second node:

$ docker run --net=host 088c21e58368 elasticsearch -Des.cluster.name=fifth-sense -Des.network.host=0.0.0.0 -Des.node.name=second-node

[second-node] version[2.4.1], pid[1], build[c67dc32/2016-09-27T18:57:55Z]
[second-node] initializing ...
[second-node] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[second-node] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvda1)]], net usable_space [6.8gb], net total_space [7.7gb], spins? [possibly], types [ext4]
[second-node] heap size [1007.3mb], compressed ordinary object pointers [true]
[second-node] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[second-node] initialized
[second-node] starting ...
[second-node] publish_address {10.0.0.238:9302}, bound_addresses {[::]:9302}
[second-node] fifth-sense/3sT3Zbi4RYy54ijuXQ6McQ
[second-node] detected_master {first-node}{tVGMuDlaTkGf6FjU3ecdug}{10.0.0.238}{10.0.0.238:9301}, added {{first-node}{tVGMuDlaTkGf6FjU3ecdug}{10.0.0.238}{10.0.0.238:9301},}, reason: zen-disco-receive(from master [{first-node}{tVGMuDlaTkGf6FjU3ecdug}{10.0.0.238}{10.0.0.238:9301}])
[second-node] publish_address {10.0.0.238:9202}, bound_addresses {[::]:9202}
[second-node] started

Check the cluster status by querying any node which is part of it:

$ curl -XGET 'http://localhost:9201/_cluster/health?pretty=true'
{
  "cluster_name" : "fifth-sense",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
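To see the individual nodes rather than the aggregate health, the _cat/nodes endpoint (available in Elasticsearch 2.x) can be queried against either node's HTTP port; the specific output will depend on the running cluster:

```shell
# List the cluster's nodes; either node's HTTP port should work.
curl -XGET 'http://localhost:9201/_cat/nodes?v'
curl -XGET 'http://localhost:9202/_cat/nodes?v'
```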

Ports are incremented automatically as new instances start: Elasticsearch binds each node to the next free port in its default ranges, 9200-9300 for HTTP and 9300-9400 for transport.
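Following the same pattern, further nodes can be added with the identical command, changing only the node name. A sketch for a hypothetical third node, assuming the same image and settings as above:

```shell
# Hypothetical third node joining the same cluster; Elasticsearch
# will bind it to the next free ports (most likely 9203 and 9303).
docker run --net=host 088c21e58368 elasticsearch \
  -Des.cluster.name=fifth-sense \
  -Des.network.host=0.0.0.0 \
  -Des.node.name=third-node
```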