Elasticsearch "failed to send ping" error

I am trying to create an Elasticsearch cluster in AWS (without success).
Currently on two nodes I have the yml files below.

When I run "curl status" on the second node (where node.master is false and node.data is true), the message "master_not_discovered_exception"
is returned.

In the log there are repeated messages:
"failed to send ping to [{#zen_unicast_1#}{127.0.0.1}{127.0.0.1:9300}]
SendRequestTransportException[[][127.0.0.1:9300][internal:discovery/zen/unicast]]; nested:
NodeNotConnectedException[[][127.0.0.1:9300] Node not connected];"

In addition, when I run "curl cluster health" on the first node (the one where node.master is true), the response shows a node total of 1.
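Not from the original post — a sketch of how I am reading that health output. The response shape follows the `_cluster/health` API, but the sample values below are assumptions, not my actual output:

```python
import json

# Assumed sample of what GET /_cluster/health returns on node 1; in the
# real cluster this would come from:
#   curl -s 'localhost:9200/_cluster/health?pretty'
sample_health = json.loads("""
{
  "cluster_name": "dev-lambda-elastic",
  "status": "green",
  "number_of_nodes": 1,
  "number_of_data_nodes": 0
}
""")

# With two nodes configured, number_of_nodes should be 2; a value of 1
# means the second node never joined the cluster.
def nodes_missing(health, expected=2):
    return expected - health["number_of_nodes"]

print(nodes_missing(sample_health))  # → 1
```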

How do I get these nodes to "see" each other?

Notes: Elasticsearch 2.3.3 is installed on both nodes.
The option "cloud.aws.region" has no effect.

(Apologies if these are somewhat basic questions. I am new to both AWS and Elasticsearch, but I have not been
able to find a resolution online.)

Thank you.

First node (master-eligible):

cluster.name: dev-lambda-elastic
node.name: dev-lambda-elastic-1
node.master: true
node.data: false
bootstrap.mlockall: true
network host: _ec2_
network.publish_host: _ec2_
http.port: 9200
discovery.type: ec2
discovery.zen.minimum_master_nodes: 1
cloud.aws.access_key: XXXXXXXXXXXXXXXXXXXXXXX
cloud.aws.secret_key: XXXXXXXXXXXXXXXXXXXXXXX
#cloud.aws.region: "us-east"
Second node (data-only):

cluster.name: dev-lambda-elastic
node.name: dev-lambda-elastic-2
node.master: false
node.data: true
bootstrap.mlockall: true
network host: _ec2_
network.publish_host: _ec2_
http.port: 9200
discovery.type: ec2
discovery.ec2.ping_timeout: "30s"
discovery.zen.minimum_master_nodes: 1
cloud.aws.access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX
cloud.aws.secret_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX
#cloud.aws.region: "us-east"

Please format your post so settings are more readable.

Could you provide the full logs of both instances (formatted, please)?

I also separately emailed this log with some additional information.
This is the log for node 2 where master is false and data is true

Please copy logs as text. Don't use screenshots for logs.

I edited your question so it's more readable.

Can you try to remove the network.publish_host setting on both nodes?
Restart both nodes and copy the 2 logs here. Format with the </> icon, please.

I apologize - I have been trying to wrap the logs in HTML, but nothing seems to render correctly on the preview screen.
Log files with network.publish_host removed from the yml:

(Sending in two pieces - size limitation of message)

Following is the log for the node where master is FALSE and data is True

[2016-07-11 09:58:28,365][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
[2016-07-11 09:58:28,613][INFO ][node ] [dev-lambda-elastic-2] version[2.3.3], pid[7914], build[218bdf1/2016-05-17T15:40:04Z]
[2016-07-11 09:58:28,613][INFO ][node ] [dev-lambda-elastic-2] initializing ...
[2016-07-11 09:58:29,326][INFO ][plugins ] [dev-lambda-elastic-2] modules [reindex, lang-expression, lang-groovy], plugins [head, cloud-aws], sites [head]
[2016-07-11 09:58:29,350][INFO ][env ] [dev-lambda-elastic-2] using [1] data paths, mounts [[/var (/dev/mapper/vg00-var)]], net usable_space [8.9gb], net total_space [9.7gb], spins? [no], types [ext4]
[2016-07-11 09:58:29,350][INFO ][env ] [dev-lambda-elastic-2] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-07-11 09:58:29,351][WARN ][env ] [dev-lambda-elastic-2] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-07-11 09:58:30,930][INFO ][node ] [dev-lambda-elastic-2] initialized
[2016-07-11 09:58:30,930][INFO ][node ] [dev-lambda-elastic-2] starting ...
[2016-07-11 09:58:30,988][INFO ][transport ] [dev-lambda-elastic-2] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-07-11 09:58:30,992][INFO ][discovery ] [dev-lambda-elastic-2] dev-lambda-elastic/mJm17dThQoekzUjwUfZicw
[2016-07-11 09:59:00,994][WARN ][discovery ] [dev-lambda-elastic-2] waited for 30s and no initial state was set by the discovery
[2016-07-11 09:59:01,009][INFO ][http ] [dev-lambda-elastic-2] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-07-11 09:59:01,009][INFO ][node ] [dev-lambda-elastic-2] started
[2016-07-11 09:59:51,100][DEBUG][action.admin.indices.get ] [dev-lambda-elastic-2] no known master node, scheduling a retry
[2016-07-11 10:00:01,757][WARN ][discovery.zen.ping.unicast] [dev-lambda-elastic-2] failed to send ping to [{#zen_unicast_6#}{::1}{[::1]:9300}]
SendRequestTransportException[[][[::1]:9300][internal:discovery/zen/unicast]]; nested: NodeNotConnectedException[[][[::1]:9300] Node not connected];
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:340)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPingRequestToNode(UnicastZenPing.java:440)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.access$1000(UnicastZenPing.java:83)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$3.run(UnicastZenPing.java:403)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: NodeNotConnectedException[[][[::1]:9300] Node not connected]
    at org.elasticsearch.transport.netty.NettyTransport.nodeChannel(NettyTransport.java:1132)
    at org.elasticsearch.transport.netty.NettyTransport.sendRequest(NettyTransport.java:819)
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:329)
    ... 6 more
[2016-07-11 10:00:21,102][DEBUG][action.admin.indices.get ] [dev-lambda-elastic-2] timed out while retrying [indices:admin/get] after failure (timeout [30s])
[2016-07-11 10:00:21,109][WARN ][rest.suppressed ] /status Params: {index=status} MasterNotDiscoveredException[null]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:226)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:236)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:804)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
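Not part of the original post — the key symptom is the node-2 transport entry reporting `publish_address {127.0.0.1:9300}`: a loopback address that the other EC2 instance can never reach, which is why unicast pings fail with NodeNotConnectedException. A small sketch, using that log line verbatim, to make the check concrete:

```python
import ipaddress
import re

# Transport line copied from the node-2 log above.
log_line = ("[2016-07-11 09:58:30,988][INFO ][transport ] [dev-lambda-elastic-2] "
            "publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}")

# Extract the published transport address.
match = re.search(r"publish_address \{([^}:]+):(\d+)\}", log_line)
host, port = match.group(1), int(match.group(2))

# A loopback publish address cannot be reached from another host, so
# cross-node discovery pings to it are doomed to fail.
print(ipaddress.ip_address(host).is_loopback, port)  # → True 9300
```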

Following is log for node where master is TRUE and data is FALSE

[2016-07-11 09:57:31,000][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
[2016-07-11 09:57:31,254][INFO ][node ] [dev-lambda-elastic-1] version[2.3.3], pid[2233], build[218bdf1/2016-05-17T15:40:04Z]
[2016-07-11 09:57:31,254][INFO ][node ] [dev-lambda-elastic-1] initializing ...
[2016-07-11 09:57:31,899][INFO ][plugins ] [dev-lambda-elastic-1] modules [reindex, lang-expression, lang-groovy], plugins [cloud-aws], sites []
[2016-07-11 09:57:31,918][INFO ][env ] [dev-lambda-elastic-1] using [1] data paths, mounts [[/var (/dev/mapper/vg00-var)]], net usable_space [8.9gb], net total_space [9.7gb], spins? [no], types [ext4]
[2016-07-11 09:57:31,918][INFO ][env ] [dev-lambda-elastic-1] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-07-11 09:57:31,918][WARN ][env ] [dev-lambda-elastic-1] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-07-11 09:57:33,701][INFO ][node ] [dev-lambda-elastic-1] initialized
[2016-07-11 09:57:33,701][INFO ][node ] [dev-lambda-elastic-1] starting ...
[2016-07-11 09:57:33,797][INFO ][transport ] [dev-lambda-elastic-1] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-07-11 09:57:33,803][INFO ][discovery ] [dev-lambda-elastic-1] dev-lambda-elastic/uKOibrh4S128wXfaOZhO-w
[2016-07-11 09:57:38,901][INFO ][cluster.service ] [dev-lambda-elastic-1] new_master {dev-lambda-elastic-1}{uKOibrh4S128wXfaOZhO-w}{127.0.0.1}{127.0.0.1:9300}{data=false, master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-07-11 09:57:38,921][INFO ][http ] [dev-lambda-elastic-1] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-07-11 09:57:38,921][INFO ][node ] [dev-lambda-elastic-1] started
[2016-07-11 09:57:38,935][INFO ][gateway ] [dev-lambda-elastic-1] recovered [0] indices into cluster_state

Current yml files, with network.publish_host removed:

cluster.name: dev-lambda-elastic
node.name: dev-lambda-elastic-1
node.master: true
node.data: false
bootstrap.mlockall: true
network host: _ec2_
http.port: 9200
discovery.type: ec2
discovery.zen.minimum_master_nodes: 1
cloud.aws.access_key: XXXXXXXXXXXXXXXXXX
cloud.aws.secret_key: XXXXXXXXXXXXXXXXXXX

cluster.name: dev-lambda-elastic
node.name: dev-lambda-elastic-2
node.master: false
node.data: true
bootstrap.mlockall: true
network host: _ec2_
http.port: 9200
discovery.type: ec2
discovery.ec2.ping_timeout: "30s"
discovery.zen.minimum_master_nodes: 1
cloud.aws.access_key: IIIIIIIIIIIIIIIIII
cloud.aws.secret_key: IIIIIIIIIIIIIIIIII

Your posts are not formatted using </>.
That may explain the size limit you hit.

I don't understand what is happening. You set network host: _ec2_ but got:

[2016-07-11 09:57:33,797][INFO ][transport ] [dev-lambda-elastic-1] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}

It does not make sense. Are you sure that your configuration is applied?

What happens if you set:

network host: 192.168.0.1

It should fail.
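Not part of the original thread — one way to sanity-check whether the settings can be applied at all, before touching a live node, is to lint the yml for setting names Elasticsearch will not recognize. This is a sketch under the assumption that the file uses flat `key: value` lines like the ones posted; note that a key such as `network host` (space instead of a dot) is a different key from `network.host`, and Elasticsearch 2.x does not reject unknown settings:

```python
# Hypothetical lint for flat "key: value" Elasticsearch yml lines.
# The prefix allowlist below is an assumption covering the settings
# used in this thread, not an exhaustive list.
KNOWN_PREFIXES = ("cluster.", "node.", "bootstrap.", "network.",
                  "http.", "discovery.", "cloud.", "path.")

def suspicious_keys(yml_text):
    bad = []
    for line in yml_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key = line.split(":", 1)[0].strip()
        # A space in the key, or an unrecognized prefix, usually means a typo.
        if " " in key or not key.startswith(KNOWN_PREFIXES):
            bad.append(key)
    return bad

# Excerpt from the config posted above.
config = """\
cluster.name: dev-lambda-elastic
network host: _ec2_
http.port: 9200
"""
print(suspicious_keys(config))  # → ['network host']
```

Run against the full files posted above, this would flag `network host` on both nodes, which would also explain why the nodes fall back to publishing 127.0.0.1.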