NoNodeAvailableException problem

Hi,

I'm having trouble getting the Java TransportClient to connect to any nodes. I have gone through numerous topics on similar issues in this forum and on Stack Overflow, but I have so far not been able to figure out what is wrong, and I'm now thinking that I must be missing something obvious.

TransportClient is built as follows:

Settings settings = Settings.builder()
                .put("cluster.name", "mycluster")
                .put("client.transport.sniff", true)
                .build();

TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));
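(A variant worth ruling out: with client.transport.sniff enabled, the client replaces the configured address with the publish addresses it sniffs from the cluster state and connects to those instead, which is a common source of NoNodeAvailableException. A sketch of the same setup with sniffing turned off — only the sniff setting and the literal address differ; this assumes the 5.4.x transport client jars are on the classpath and a node is listening on 127.0.0.1:9300:)

```java
// Same client setup, but with sniffing disabled as a sanity check.
// With sniffing on, the client talks to the publish addresses it
// discovers from the cluster state, not the address configured here.
Settings settings = Settings.builder()
        .put("cluster.name", "mycluster")
        .put("client.transport.sniff", false)
        .build();

TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(
                InetAddress.getByName("127.0.0.1"), 9300));
```

If the client connects with sniffing off but not with it on, the sniffed publish addresses are the ones to inspect.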

If I subsequently try to insert data (prepareIndex), I get:

NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{hq-uoZfbSrS4gtqdRo_Kgw}{localhost}{127.0.0.1:9300}]]
	at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:348)
	at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:246)
	at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
	at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408)
	at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
	at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)

If I check client.connectedNodes().size() before calling prepareIndex, it is 0. When I log details for the client (client.settings().get("cluster.name") and client.transportAddresses()), everything looks fine.

The code is executed on the same (Windows) server where Elasticsearch is installed. In elasticsearch.yml, I have changed only:

cluster.name: mycluster
node.name: mynode
path.data: H:\elasticsearch\data
path.logs: H:\elasticsearch\logs

Elasticsearch startup log says:

[2017-06-30T08:47:44,134][INFO ][o.e.n.Node               ] [mynode] initializing ...
[2017-06-30T08:47:44,220][INFO ][o.e.e.NodeEnvironment    ] [mynode] using [1] data paths, mounts [[Database (H:)]], net usable_space [14.6gb], net total_space [34.9gb], spins? [unknown], types [NTFS]
[2017-06-30T08:47:44,220][INFO ][o.e.e.NodeEnvironment    ] [mynode] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-06-30T08:47:44,239][INFO ][o.e.n.Node               ] [mynode] node name [mynode], node ID [D8URygCRTpeG144biyG9Nw]
[2017-06-30T08:47:44,239][INFO ][o.e.n.Node               ] [mynode] version[5.4.2], pid[6032], build[929b078/2017-06-15T02:29:28.122Z], OS[Windows Server 2008 R2/6.1/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-06-30T08:47:44,240][INFO ][o.e.n.Node               ] [mynode] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=E:\elasticsearch-5.4.2, -Des.default.path.logs=E:\elasticsearch-5.4.2\logs, -Des.default.path.data=E:\elasticsearch-5.4.2\data, -Des.default.path.conf=E:\elasticsearch-5.4.2\config, exit, -Xms2048m, -Xmx2048m, -Xss1024k]
[2017-06-30T08:47:45,177][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [aggs-matrix-stats]
[2017-06-30T08:47:45,177][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [ingest-common]
[2017-06-30T08:47:45,177][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [lang-expression]
[2017-06-30T08:47:45,177][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [lang-groovy]
[2017-06-30T08:47:45,177][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [lang-mustache]
[2017-06-30T08:47:45,177][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [lang-painless]
[2017-06-30T08:47:45,178][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [percolator]
[2017-06-30T08:47:45,178][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [reindex]
[2017-06-30T08:47:45,178][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [transport-netty3]
[2017-06-30T08:47:45,178][INFO ][o.e.p.PluginsService     ] [mynode] loaded module [transport-netty4]
[2017-06-30T08:47:45,178][INFO ][o.e.p.PluginsService     ] [mynode] no plugins loaded
[2017-06-30T08:47:46,665][INFO ][o.e.d.DiscoveryModule    ] [mynode] using discovery type [zen]
[2017-06-30T08:47:47,235][INFO ][o.e.n.Node               ] [mynode] initialized
[2017-06-30T08:47:47,235][INFO ][o.e.n.Node               ] [mynode] starting ...
[2017-06-30T08:47:47,696][INFO ][o.e.t.TransportService   ] [mynode] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2017-06-30T08:47:50,758][INFO ][o.e.c.s.ClusterService   ] [mynode] new_master {mynode}{D8URygCRTpeG144biyG9Nw}{Lex1Zd6YTLaBm_FLuQlMBA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-06-30T08:47:51,093][INFO ][o.e.g.GatewayService     ] [mynode] recovered [1] indices into cluster_state
[2017-06-30T08:47:51,424][INFO ][o.e.c.r.a.AllocationService] [mynode] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[stepdata][4]] ...]).
[2017-06-30T08:47:51,504][INFO ][o.e.h.n.Netty4HttpServerTransport] [mynode] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2017-06-30T08:47:51,506][INFO ][o.e.n.Node               ] [mynode] started

I'm using transport-5.4.2.jar and have tried both versions 5.4.2 and 5.4.3 of Elasticsearch.
The REST API works fine from the local server (I can, for example, index data without issues).

I have also tried royrusso-elasticsearch-HQ, and there everything seems fine as well (except for the yellow health status also reported in the log). The page, however, hangs forever when I try to get detailed information for "mynode"...

I have tried changing the transport port, but to no avail. I also tried increasing the Elasticsearch logging level to DEBUG (via REST), but nothing gets logged when the TransportClient fails to connect to any nodes.

Any help would be much appreciated.
Thanks in advance.

Can you try with 127.0.0.1 instead and make sure you have no firewall?

Thanks, David.

When I log the client's transport addresses as shown below, I get '127.0.0.1:9300', so I don't think the problem is with "localhost" (the exception also shows "{localhost}{127.0.0.1:9300}").

// Build a comma-separated list of the client's configured transport addresses.
StringJoiner transportAddresses = new StringJoiner(", ");
for (TransportAddress transportAddress : client.transportAddresses()) {
    transportAddresses.add("'" + transportAddress.getAddress() + ":" + transportAddress.getPort() + "'");
}

The firewall might be a problem, and honestly I don't know much about this. I thought it couldn't be an issue since everything happens on the same server, but I may well be wrong, so I have just tried turning the Windows firewall off completely. Unfortunately, this didn't change anything.

Is there any way I can get extended logging of what goes on on the client side?
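(One option is to raise the log level for the transport client's own packages via the logging configuration on the client application's classpath. A minimal sketch of a log4j2.properties, assuming the client application logs through log4j2; the logger name and levels are illustrative:)

```properties
# Illustrative log4j2.properties for the client application's classpath.
# Raises the transport client's logging to TRACE to see connection attempts.
status = error

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

rootLogger.level = info
rootLogger.appenderRef.console.ref = console

logger.transport.name = org.elasticsearch.client.transport
logger.transport.level = trace
```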

Well, if you are running under Windows, this happens quite often. If so, make sure the firewall is not applied to the Java process on port 9300.

I don't know about logs.

HTH

:slight_smile: As mentioned, it should be completely disabled now. Also, I can "telnet localhost 9300" on the machine.

Argh. I misread.

Then I have no other ideas.
Could you try the Low Level REST Client and check that you can actually communicate with the node on port 9200?
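(Such a check could look roughly like this with the 5.x low-level REST client — a sketch, assuming the matching low-level REST client jar and its Apache HTTP dependencies are on the classpath and a node is listening on port 9200:)

```java
import java.io.IOException;

import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Connectivity check over HTTP (port 9200) rather than the transport port.
try (RestClient restClient = RestClient.builder(
        new HttpHost("localhost", 9200, "http")).build()) {
    Response response = restClient.performRequest("GET", "/");
    System.out.println(response.getStatusLine());
} catch (IOException e) {
    e.printStackTrace();
}
```

If this prints a 200 status line, HTTP connectivity to the node is fine and the problem is specific to the transport protocol on 9300.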

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.