No Node Available

Hi -- I'm trying to write a simple Java client to write to an Elasticsearch
index. I have a two-node cluster running:

$ curl -XGET http://:9210/_cluster/health

{"cluster_name":"elasticsearch_rsimon","status":"green","timed_out":false,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":10,"active_shards":20,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}

One node is a Mac, the other a Ubuntu VM, both using ports 9210 and 9310. I
can add and retrieve docs from the command line using curl from both nodes:

$ curl -XPOST :9210/my_index/my_item/222 -d '{"white": "black"}'

{"ok":true,"_index":"my_index","_type":"my_item","_id":"222","_version":1}bosmac01:1
rsimon$

$ curl :9210/my_index/_search -d '{"query": {"term": {"white":
"black"}}}'

{"took":8,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":1,"max_score":0.30685282,"hits":[{"_index":"my_index","_type":"my_item","_id":"222","_score":0.30685282,
"_source" : {"white": "black"}}]}}

My Java code looks like this:

Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", clusterName).build();

Client client = new TransportClient(settings)
        // .addTransportAddress(new InetSocketTransportAddress("hostname-1", 9310))
        // .addTransportAddress(new InetSocketTransportAddress("hostname-2", 9310));
        .addTransportAddress(new InetSocketTransportAddress("IP-1", 9310))
        .addTransportAddress(new InetSocketTransportAddress("IP-2", 9310));

IndexResponse response = client.prepareIndex(indexName, docType, "1") ETC.

I have tried the host names and the IP addresses. I've double-checked the
cluster name and port numbers. The prepareIndex method always fails with a
"No Node Available" exception. Examining the "client" object in a debugger
always shows zero nodes.
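
For reference, the same "zero nodes" observation can be made in code rather than in the debugger. This is only a minimal sketch, assuming the connectedNodes() accessor that TransportClient exposes; the cluster name, host and port are placeholders:

    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.settings.ImmutableSettings;
    import org.elasticsearch.common.settings.Settings;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;

    public class ConnectivityCheck {
        public static void main(String[] args) {
            Settings settings = ImmutableSettings.settingsBuilder()
                    .put("cluster.name", "elasticsearch_rsimon")
                    .build();
            TransportClient client = new TransportClient(settings)
                    .addTransportAddress(new InetSocketTransportAddress("IP-1", 9310));
            // An empty list here means every request will fail with NoNodeAvailableException.
            System.out.println("connected nodes: " + client.connectedNodes());
            client.close();
        }
    }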

Using Elasticsearch 0.19.4.

Any thoughts? Thanks.

--

It typically happens when the name of the cluster in the client settings is
incorrect. Do you have logging enabled for the client process? Do you see
anything in the log?

On Friday, November 30, 2012 2:27:09 PM UTC-5, Rich wrote:


--

Hi -- Yes, I know about the cluster name issue, and the name is the same.
I saw an "out of memory" error on one of the nodes, upped the memory (Xmx)
and restarted it. Cluster still shows green (requested status from both
nodes), and I can still post/retrieve to both nodes. But my code still
claims no node is available.

Doing curl -XGET http://:9210/_cluster/health on either node gives:

{"cluster_name":"elasticsearch_rsimon","status":"green","timed_out":false,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":10,"active_shards":20,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}

My code does this:

Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", clusterName).build();

where clusterName is passed in as an argument:

"elasticsearch_rsimon", // elasticsearch cluster name

-Rich

On Friday, November 30, 2012 5:39:10 PM UTC-5, Igor Motov wrote:


--

If you don't have logging enabled on the client, could you drop a file
called log4j.properties somewhere in your client's classpath with the
following content:

log4j.rootLogger=DEBUG, out

log4j.appender.out=org.apache.log4j.ConsoleAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.conversionPattern=[%d{ISO8601}][%-5p][%-25c] %m%n

restart your client and post here what it prints on the console?

On Monday, December 3, 2012 8:11:35 AM UTC-5, Rich wrote:


--

parameters in the settings call?

Thanks for your help --

Console output (IP obfuscated to ):

[2012-12-03 11:18:06,632][INFO ][org.elasticsearch.plugins] [Asbestos Man]
loaded [], sites []
[2012-12-03 11:18:06,645][DEBUG][org.elasticsearch.common.compress.lzf]
using [UnsafeChunkDecoder] decoder
[2012-12-03 11:18:07,146][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [generic], type [cached], keep_alive [30s]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [bulk], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [get], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [percolate], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,152][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [management], type [scaling], min [1], size [5],
keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [flush], type [scaling], min [1], size [10],
keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [merge], type [scaling], min [1], size [20],
keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [refresh], type [scaling], min [1], size [10],
keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [cache], type [scaling], min [1], size [4],
keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool] [Asbestos
Man] creating thread_pool [snapshot], type [scaling], min [1], size [5],
keep_alive [5m]
[2012-12-03 11:18:07,172][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] using worker_count[8], port[9300-9400], bind_host[null],
publish_host[null], compress[false], connect_timeout[30s],
connections_per_node[2/6/1], receive_predictor[512kb->512kb]
[2012-12-03 11:18:07,174][DEBUG][org.elasticsearch.client.transport]
[Asbestos Man] node_sampler_interval[5s]
[2012-12-03
11:18:07,193][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using
the autodetected NIO constraint level: 0
[2012-12-03 11:18:07,203][DEBUG][netty.channel.socket.nio.SelectorUtil]
Using select timeout of 500
[2012-12-03 11:18:07,203][DEBUG][netty.channel.socket.nio.SelectorUtil]
Epoll-bug workaround enabled = false
[2012-12-03 11:18:07,227][DEBUG][org.elasticsearch.client.transport]
[Asbestos Man] adding address [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:07,271][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] connected to node [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:12,280][INFO ][org.elasticsearch.client.transport]
[Asbestos Man] failed to get node info for
[#transport#-1][inet[/:9310]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException:
[][inet[/:9310]][cluster/nodes/info] request_id [0] timed out after
[5002ms]
at
org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:342)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-12-03 11:18:12,284][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] disconnected from [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:12,312][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] connected to node [[#transport#-1][inet[/:9310]]]
org.elasticsearch.client.transport.NoNodeAvailableException: No node
available
at
org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:202)
at
org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:106)
at
org.elasticsearch.client.support.AbstractClient.index(AbstractClient.java:80)
at
org.elasticsearch.client.transport.TransportClient.index(TransportClient.java:308)
at
org.elasticsearch.action.index.IndexRequestBuilder.doExecute(IndexRequestBuilder.java:315)
at
org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:62)
at
org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:57)
at
com.glgroup.search.importhandler.MosaicIndexer.createIndexFromMongo(MosaicIndexer.java:79)
at
com.glgroup.search.importhandler.MosaicIndexer.main(MosaicIndexer.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Error getting
MongoDB:org.elasticsearch.client.transport.NoNodeAvailableException: No
node available

Process finished with exit code 1

On Monday, December 3, 2012 8:20:30 AM UTC-5, Igor Motov wrote:


--

Looks like I managed to cut off the beginning of my post:

The Java code is:

Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", clusterName).build();

Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("", 9310));

IndexResponse response = client.prepareIndex(indexName, docType, "1")
        .setSource(jsonBuilder()
                .startObject()
                .field("user", "kimchy")
                .field("postDate", new Date())
                .field("message", "try out indexing")
                .endObject()
        ).execute().actionGet();

and I wondered if I needed to supply more info in the settings call. Also,
I checked that the cluster was still green and responded to post and get
commands.

On Monday, December 3, 2012 11:27:48 AM UTC-5, Rich wrote:


--

It looks like your client can connect to port 9310, but it's not getting a
response back from the elasticsearch server, so it waits for 5 seconds and
disconnects. That would happen if somebody tried to connect to port 9200
using the transport client, for example.

Could you run

curl ":9210/_cluster/nodes?pretty=true"

to make sure that the elasticsearch server is really listening on port
9310.

And also check the log file on the server to see if there are any errors in
there.
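
If curl from the server looks fine, it can also help to rule out basic TCP reachability of the transport port from the machine the Java client runs on. A minimal sketch using only the JDK (the host and port are placeholders for whatever the client is configured with):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PortCheck {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket();
            // Throws an exception if the transport port cannot be reached at all.
            socket.connect(new InetSocketAddress("IP-1", 9310), 5000);
            System.out.println("TCP connect to 9310 succeeded");
            socket.close();
        }
    }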

On Monday, December 3, 2012 11:31:01 AM UTC-5, Rich wrote:


--

curl ":9210/_cluster/nodes?pretty=true"
{
"ok" : true,
"cluster_name" : "elasticsearch_rsimon",
"nodes" : {
"S6ne967gSHmFs_MAiH0naA" : {
"name" : "mac_elastic",
"transport_address" : "inet[:9310]",
"hostname" : "bosmac01.local",
"http_address" : "inet[:9210]",
"attributes" : {
"master" : "true"
}
},
"qyV0kCQAQiu1uwuNQveADg" : {
"name" : "elasticsearch-test-ubuntu",
"transport_address" : "inet[:9310]",
"hostname" : "vBox-New",
"http_address" : "inet[:9210]",
"attributes" : {
"master" : "true"
}
}
}
}

Doing the same command using "" yields the same results. I see
nothing unusual in the log for (which is the node I'm trying to
connect to in the Java code), but I again see this in the log for :

java.lang.OutOfMemoryError: Java heap space
[2012-12-03 11:15:56,614][WARN ][transport.netty ] [mac_elastic]
Exception caught on netty layer [[id: 0x4049cab1, /10.115.100.56:57628 =>
/10.115.100.56:9310]]

I had seen this before and thought I fixed it by editing the LaunchAgent
file on the Mac ( is a Mac, is a Ubuntu VM). The launch file
looks like this (I added the Xmx line):

more homebrew.mxcl.elasticsearch.plist

<?xml version="1.0" encoding="UTF-8"?>
KeepAlive
Label                 homebrew.mxcl.elasticsearch
ProgramArguments      /usr/local/bin/elasticsearch -f -D es.config=/usr/local/Cellar/elasticsearch/0.19.4/config/elasticsearch.yml
EnvironmentVariables  ES_JAVA_OPTS -Xss200000 -Xmx4096m
RunAtLoad
UserName              rsimon
WorkingDirectory      /usr/local/var
StandardErrorPath     /dev/null
StandardOutPath       /dev/null

I wonder if I should just kill the Mac node, and just use the VM node. I
don't need two nodes, I was just experimenting.

On Monday, December 3, 2012 11:48:53 AM UTC-5, Igor Motov wrote:


--

You can typically verify memory settings by simply running

ps -aef | grep elasticsearch

if settings are applied correctly, you will see them in the elasticsearch
process command line.

By the way, I noticed that your elasticsearch server version is 0.19.4 but
it looks like your client has a newer version. It shouldn't cause such a
complete connection failure, but it might be a good idea to fix that
anyway.
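
One way to double-check which elasticsearch jar the client is actually built against, without relying on any elasticsearch API, is to read the jar manifest from the classpath (a sketch; it may print null if the manifest lacks the attribute):

    import org.elasticsearch.client.transport.TransportClient;

    public class ClientJarVersion {
        public static void main(String[] args) {
            // Implementation-Version from the elasticsearch jar's manifest, if present.
            System.out.println(TransportClient.class.getPackage().getImplementationVersion());
        }
    }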

Could you also find this line in the log file on the server side and make sure
that the bound_address is reachable from your client.

[2012-12-03 12:47:49,111][INFO ][transport ] [Ever]
bound_address {inet[/???????:9300]}, publish_address {inet[/???????:9300]}

Besides this and obvious things like firewall issues, I am not really sure
what to check.

On Monday, December 3, 2012 12:28:09 PM UTC-5, Rich wrote:

curl ":9210/_cluster/nodes?pretty=true"
{
"ok" : true,
"cluster_name" : "elasticsearch_rsimon",
"nodes" : {
"S6ne967gSHmFs_MAiH0naA" : {
"name" : "mac_elastic",
"transport_address" : "inet[:9310]",
"hostname" : "bosmac01.local",
"http_address" : "inet[:9210]",
"attributes" : {
"master" : "true"
}
},
"qyV0kCQAQiu1uwuNQveADg" : {
"name" : "elasticsearch-test-ubuntu",
"transport_address" : "inet[:9310]",
"hostname" : "vBox-New",
"http_address" : "inet[:9210]",
"attributes" : {
"master" : "true"
}
}
}
}

Doing the same command using "" yields the same results. I see
nothing unusual in the log for (which is teh node I'm trying to
connect to in the Java code), but I again see this in the log for :

java.lang.OutOfMemoryError: Java heap space
[2012-12-03 11:15:56,614][WARN ][transport.netty ] [mac_elastic]
Exception caught on netty layer [[id: 0x4049cab1, /:57628 =>
/:9310]]

I had seen this before I and thought I fixed it by editing the LaunchAgent
file on the Mac ( is a Mac, is a Ubuntu VM). The launch file
looks like (I added the Xmx line):

more homebrew.mxcl.elasticsearch.plist

<?xml version="1.0" encoding="UTF-8"?> KeepAlive Label homebrew.mxcl.elasticsearch ProgramArguments /usr/local/bin/elasticsearch -f -D es.config=/usr/local/Cellar/elasticsearch/0.19.4/config/elasticsearch.yml EnvironmentVariables ES_JAVA_OPTS -Xss200000 -Xmx4096m RunAtLoad UserName rsimon WorkingDirectory /usr/local/var StandardErrorPath /dev/null StandardOutPath /dev/null

I wonder if I should just kill the Mac node, and just use the VM node. I
son't need two nodes, I was just experimenting.

On Monday, December 3, 2012 11:48:53 AM UTC-5, Igor Motov wrote:

It looks your client can connect to the port 9310 but it's not getting
response back from the elasticsearch server, so it waits for 5 seconds and
disconnects. It would happen if somebody would try to connect to port 9200
using transport client, for example.

Could you run

curl ":9210/_cluster/nodes?pretty=true"

to make sure that the elasticsearch server is really listens on the port
9310.

And also check the log file on the server to see if there are any errors
in there.

On Monday, December 3, 2012 11:31:01 AM UTC-5, Rich wrote:

Looks like I managed to cut off the beginning on my post:

The Java code is:

       Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", clusterName).build();

        Client client = new TransportClient(settings)
                .addTransportAddress(new 

InetSocketTransportAddress("", 9310));

            IndexResponse response = client.prepareIndex(indexName, 

docType, "1")
.setSource(jsonBuilder()
.startObject()
.field("user", "kimchy")
.field("postDate", new Date())
.field("message", "try out
indexing")
.endObject()
).execute().actionGet();

and I wondered if I needed to supply more info in the settings call.
Also, I checked that the cluster was still green and responded to post and
get commands.

On Monday, December 3, 2012 11:27:48 AM UTC-5, Rich wrote:

parameters in the settings call?

Thanks for your help --

Console output (IP obfuscated to ):

[2012-12-03 11:18:06,632][INFO ][org.elasticsearch.plugins] [Asbestos
Man] loaded [], sites []
[2012-12-03 11:18:06,645][DEBUG][org.elasticsearch.common.compress.lzf]
using [UnsafeChunkDecoder] decoder
[2012-12-03 11:18:07,146][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [generic], type [cached], keep_alive
[30s]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [bulk], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [get], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [percolate], type [cached], keep_alive
[5m]
[2012-12-03 11:18:07,152][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [management], type [scaling], min [1],
size [5], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [flush], type [scaling], min [1], size
[10], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [merge], type [scaling], min [1], size
[20], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [refresh], type [scaling], min [1],
size [10], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [cache], type [scaling], min [1], size
[4], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [snapshot], type [scaling], min [1],
size [5], keep_alive [5m]
[2012-12-03 11:18:07,172][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] using worker_count[8], port[9300-9400], bind_host[null],
publish_host[null], compress[false], connect_timeout[30s],
connections_per_node[2/6/1], receive_predictor[512kb->512kb]
[2012-12-03 11:18:07,174][DEBUG][org.elasticsearch.client.transport]
[Asbestos Man] node_sampler_interval[5s]
[2012-12-03
11:18:07,193][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using
the autodetected NIO constraint level: 0
[2012-12-03 11:18:07,203][DEBUG][netty.channel.socket.nio.SelectorUtil]
Using select timeout of 500
[2012-12-03 11:18:07,203][DEBUG][netty.channel.socket.nio.SelectorUtil]
Epoll-bug workaround enabled = false
[2012-12-03 11:18:07,227][DEBUG][org.elasticsearch.client.transport]
[Asbestos Man] adding address [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:07,271][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] connected to node [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:12,280][INFO ][org.elasticsearch.client.transport]
[Asbestos Man] failed to get node info for
[#transport#-1][inet[/:9310]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException:
[][inet[/:9310]][cluster/nodes/info] request_id [0] timed out after
[5002ms]
at
org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:342)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-12-03 11:18:12,284][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] disconnected from [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:12,312][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] connected to node [[#transport#-1][inet[/:9310]]]
org.elasticsearch.client.transport.NoNodeAvailableException: No node
available
at
org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:202)
at
org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:106)
at
org.elasticsearch.client.support.AbstractClient.index(AbstractClient.java:80)
at
org.elasticsearch.client.transport.TransportClient.index(TransportClient.java:308)
at
org.elasticsearch.action.index.IndexRequestBuilder.doExecute(IndexRequestBuilder.java:315)
at
org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:62)
at
org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:57)
at
com.glgroup.search.importhandler.MosaicIndexer.createIndexFromMongo(MosaicIndexer.java:79)
at
com.glgroup.search.importhandler.MosaicIndexer.main(MosaicIndexer.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Error getting
MongoDB:org.elasticsearch.client.transport.NoNodeAvailableException: No
node available

Process finished with exit code 1

On Monday, December 3, 2012 8:20:30 AM UTC-5, Igor Motov wrote:

If you don't have logging enabled on the client, could you drop this
file called log4j.properties somewhere in your clients classpath with the
following content:

log4j.rootLogger=DEBUG, out

log4j.appender.out=org.apache.log4j.ConsoleAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.conversionPattern=[%d{ISO8601}][%-5p][%-25c]
%m%n

restart you client and post here what it prints on the console?

On Monday, December 3, 2012 8:11:35 AM UTC-5, Rich wrote:

Hi -- Yes, I know about the cluster name issue, and the name is the
same. I saw an "out of memory" error on one of the nodes, upped the memory
(Xmx) and restarted it. Cluster still shows green (requested status from
both nodes), and I can still post/retrieve to both nodes. But my code still
claims no node is available.

Doing curl -XGET http://:9210/_cluster/health on either node
gives:

{"cluster_name":"elasticsearch_rsimon","status":"green","timed_out":false,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":10,"active_shards":20,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}

My code does this:

Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", clusterName).build();

where clustname is passed in as an argument:

"elasticsearch_rsimon", // elasticsearch cluster name

-Rich

On Friday, November 30, 2012 5:39:10 PM UTC-5, Igor Motov wrote:

It typically happens when name of the cluster in the client settings
is incorrect. Do you have logging enable for the client process? Do you see
anything in the log?

--

Thanks. I looked at the logs for both machines, searching for "publish" --
the Ubuntu VM has good entries, but the Mac node has no entries like that.
So I think the Mac install is faulty (even though the cluster is green,
and I can post/get from the Mac instance). I installed ES on the Ubuntu VM
"manually" using instructions on the web, but I used homebrew for the Mac.
The homebrew install was trivially easy, but now I don't trust it.

I uninstalled ES on the Mac, and killed the process. Cluster health now
shows yellow, as expected, and I can get items from the Ubuntu instance.
And it looks like the VM took over correctly:

[2012-12-03 13:52:48,819][INFO ][node ]
[elasticsearch-test-ubuntu] {0.19.10}[8372]: initializing ..

[2012-12-03 13:52:48,824][INFO ][plugins ]
[elasticsearch-test-ubuntu] loaded [], sites []

[2012-12-03 13:52:51,383][INFO ][node ]
[elasticsearch-test-ubuntu] {0.19.10}[8372]: initialized

[2012-12-03 13:52:51,383][INFO ][node ]
[elasticsearch-test-ubuntu] {0.19.10}[8372]: starting ...

[2012-12-03 13:52:51,490][INFO ][transport ]
[elasticsearch-test-ubuntu] bound_address {inet[/:9310]},
publish_address {inet[/:9310]}

[2012-12-03 13:52:54,503][INFO ][cluster.service ]
[elasticsearch-test-ubuntu] new_master
[elasticsearch-test-ubuntu][kXjeF_hQQ6-Bo4D9t-YvCA][inet[/10.115.100.209:9310]]{master=true},
reason: zen-disco-join (elected_as_master)

[2012-12-03 13:52:54,549][INFO ][discovery ]
[elasticsearch-test-ubuntu] elasticsearch_rsimon/kXjeF_hQQ6-Bo4D9t-YvCA

[2012-12-03 13:52:54,575][INFO ][http ]
[elasticsearch-test-ubuntu] bound_address {inet[/:9210]},
publish_address {inet[/:9210]}

[2012-12-03 13:52:54,576][INFO ][node ]
[elasticsearch-test-ubuntu] {0.19.10}[8372]: started

[2012-12-03 13:52:55,376][INFO ][gateway ]
[elasticsearch-test-ubuntu] recovered [2] indices into cluster_state

I also noticed that homebrew apparently pulled down a different version of
ES than was installed on the VM (as you noticed), so I changed my Java
project to use the same ES jar. However, I still get the same exceptions
(ReceiveTimeoutTransportException and no available node).

I can ssh from my Mac (where the Java project lives) to the VM, so I don't
think it's a permission/firewall thing, but I haven't completely discarded
that possibility.

I'm wondering if I need to give more information in the settingsBuilder
call.
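
For reference, the transport client does take a few client.transport.* settings beyond cluster.name. This is only a sketch extending the settingsBuilder call above, assuming the client.transport.sniff and client.transport.ping_timeout settings documented for the transport client (the 5s timeout in the console output looks like the default ping timeout; the values below are illustrative):

    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", "elasticsearch_rsimon")
            // Let the client discover the rest of the cluster from the node it reaches.
            .put("client.transport.sniff", true)
            // Allow the initial nodes-info request more than the default 5s before giving up.
            .put("client.transport.ping_timeout", "10s")
            .build();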

Anyway, thanks for your time. I'll keep going, and report any breakthroughs
here.

-Rich

On Monday, December 3, 2012 1:00:16 PM UTC-5, Igor Motov wrote:

You can typically verify memory settings by simply running

ps -aef | grep elasticsearch

if settings are applied correctly, you will see them in the elasticsearch
process command line.

By the way, I noticed that your elasticsearch server version is 0.19.4 but
it looks like your client has a newer version. It shouldn't cause such a
complete connection failure, but it might be a good idea to fix that
anyway.

Could you also find this line in log file on the server side and make sure
that the bound_address is reachable from your client.

[2012-12-03 12:47:49,111][INFO ][transport ] [Ever]
bound_address {inet[/???????:9300]}, publish_address {inet[/???????:9300]}

Besides this and obvious things like firewall issues, I am not really sure
what to check.

On Monday, December 3, 2012 12:28:09 PM UTC-5, Rich wrote:

curl ":9210/_cluster/nodes?pretty=true"
{
"ok" : true,
"cluster_name" : "elasticsearch_rsimon",
"nodes" : {
"S6ne967gSHmFs_MAiH0naA" : {
"name" : "mac_elastic",
"transport_address" : "inet[:9310]",
"hostname" : "bosmac01.local",
"http_address" : "inet[:9210]",
"attributes" : {
"master" : "true"
}
},
"qyV0kCQAQiu1uwuNQveADg" : {
"name" : "elasticsearch-test-ubuntu",
"transport_address" : "inet[:9310]",
"hostname" : "vBox-New",
"http_address" : "inet[:9210]",
"attributes" : {
"master" : "true"
}
}
}
}

Doing the same command using "" yields the same results. I see
nothing unusual in the log for (which is the node I'm trying to
connect to in the Java code), but I again see this in the log for :

java.lang.OutOfMemoryError: Java heap space
[2012-12-03 11:15:56,614][WARN ][transport.netty ] [mac_elastic]
Exception caught on netty layer [[id: 0x4049cab1, /:57628 =>
/:9310]]

I had seen this before and thought I had fixed it by editing the
LaunchAgent file on the Mac ( is a Mac, is a Ubuntu VM). The
launch file looks like this (I added the Xmx line):

more homebrew.mxcl.elasticsearch.plist

<?xml version="1.0" encoding="UTF-8"?>
KeepAlive
Label                 homebrew.mxcl.elasticsearch
ProgramArguments      /usr/local/bin/elasticsearch -f -D es.config=/usr/local/Cellar/elasticsearch/0.19.4/config/elasticsearch.yml
EnvironmentVariables  ES_JAVA_OPTS -Xss200000 -Xmx4096m
RunAtLoad
UserName              rsimon
WorkingDirectory      /usr/local/var
StandardErrorPath     /dev/null
StandardOutPath       /dev/null

I wonder if I should just kill the Mac node, and just use the VM node. I
don't need two nodes, I was just experimenting.

On Monday, December 3, 2012 11:48:53 AM UTC-5, Igor Motov wrote:

It looks like your client can connect to port 9310 but it's not getting a
response back from the elasticsearch server, so it waits for 5 seconds and
disconnects. That would happen if somebody tried to connect to port 9200
using the transport client, for example.

Could you run

curl ":9210/_cluster/nodes?pretty=true"

to make sure that the elasticsearch server is really listening on port
9310.

And also check the log file on the server to see if there are any errors
in there.

On Monday, December 3, 2012 11:31:01 AM UTC-5, Rich wrote:

Looks like I managed to cut off the beginning of my post:

The Java code is:

        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", clusterName).build();

        Client client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("", 9310));

        IndexResponse response = client.prepareIndex(indexName, docType, "1")
                .setSource(jsonBuilder()
                        .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "try out indexing")
                        .endObject())
                .execute().actionGet();

and I wondered if I needed to supply more info in the settings call.
Also, I checked that the cluster was still green and responded to post and
get commands.
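
One side note while staring at that snippet (a sketch, not something from my
original code): it might be worth printing the nodes the transport client thinks
it is connected to right after construction, and closing the client when the
program exits -- I believe the client exposes connectedNodes() for exactly this:

        TransportClient client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("", 9310));
        try {
            // an empty list here would line up with the NoNodeAvailableException
            System.out.println("connected nodes: " + client.connectedNodes());

            IndexResponse response = client.prepareIndex(indexName, docType, "1")
                    .setSource(jsonBuilder()
                            .startObject()
                            .field("user", "kimchy")
                            .endObject())
                    .execute().actionGet();
        } finally {
            // release the transport client's worker threads and connections
            client.close();
        }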

On Monday, December 3, 2012 11:27:48 AM UTC-5, Rich wrote:

parameters in the settings call?

Thanks for your help --

Console output (IP obfuscated to ):

[2012-12-03 11:18:06,632][INFO ][org.elasticsearch.plugins] [Asbestos
Man] loaded [], sites []
[2012-12-03
11:18:06,645][DEBUG][org.elasticsearch.common.compress.lzf] using
[UnsafeChunkDecoder] decoder
[2012-12-03 11:18:07,146][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [generic], type [cached], keep_alive
[30s]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [index], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [bulk], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [get], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [search], type [cached], keep_alive [5m]
[2012-12-03 11:18:07,151][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [percolate], type [cached], keep_alive
[5m]
[2012-12-03 11:18:07,152][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [management], type [scaling], min [1],
size [5], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [flush], type [scaling], min [1], size
[10], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [merge], type [scaling], min [1], size
[20], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [refresh], type [scaling], min [1],
size [10], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [cache], type [scaling], min [1], size
[4], keep_alive [5m]
[2012-12-03 11:18:07,154][DEBUG][org.elasticsearch.threadpool]
[Asbestos Man] creating thread_pool [snapshot], type [scaling], min [1],
size [5], keep_alive [5m]
[2012-12-03 11:18:07,172][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] using worker_count[8], port[9300-9400], bind_host[null],
publish_host[null], compress[false], connect_timeout[30s],
connections_per_node[2/6/1], receive_predictor[512kb->512kb]
[2012-12-03 11:18:07,174][DEBUG][org.elasticsearch.client.transport]
[Asbestos Man] node_sampler_interval[5s]
[2012-12-03
11:18:07,193][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using
the autodetected NIO constraint level: 0
[2012-12-03
11:18:07,203][DEBUG][netty.channel.socket.nio.SelectorUtil] Using select
timeout of 500
[2012-12-03
11:18:07,203][DEBUG][netty.channel.socket.nio.SelectorUtil] Epoll-bug
workaround enabled = false
[2012-12-03 11:18:07,227][DEBUG][org.elasticsearch.client.transport]
[Asbestos Man] adding address [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:07,271][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] connected to node [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:12,280][INFO ][org.elasticsearch.client.transport]
[Asbestos Man] failed to get node info for
[#transport#-1][inet[/:9310]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException:
[][inet[/:9310]][cluster/nodes/info] request_id [0] timed out after
[5002ms]
at
org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:342)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
[2012-12-03 11:18:12,284][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] disconnected from [[#transport#-1][inet[/:9310]]]
[2012-12-03 11:18:12,312][DEBUG][org.elasticsearch.transport.netty]
[Asbestos Man] connected to node [[#transport#-1][inet[/:9310]]]
org.elasticsearch.client.transport.NoNodeAvailableException: No node
available
at
org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:202)
at
org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:106)
at
org.elasticsearch.client.support.AbstractClient.index(AbstractClient.java:80)
at
org.elasticsearch.client.transport.TransportClient.index(TransportClient.java:308)
at
org.elasticsearch.action.index.IndexRequestBuilder.doExecute(IndexRequestBuilder.java:315)
at
org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:62)
at
org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:57)
at
com.glgroup.search.importhandler.MosaicIndexer.createIndexFromMongo(MosaicIndexer.java:79)
at
com.glgroup.search.importhandler.MosaicIndexer.main(MosaicIndexer.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Error getting
MongoDB:org.elasticsearch.client.transport.NoNodeAvailableException: No
node available

Process finished with exit code 1

On Monday, December 3, 2012 8:20:30 AM UTC-5, Igor Motov wrote:

If you don't have logging enabled on the client, could you drop a file
called log4j.properties somewhere in your client's classpath with the
following content:

log4j.rootLogger=DEBUG, out

log4j.appender.out=org.apache.log4j.ConsoleAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.conversionPattern=[%d{ISO8601}][%-5p][%-25c]
%m%n

restart your client and post here what it prints on the console?
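
If dropping a properties file onto the classpath is awkward, roughly the same
thing can be done programmatically at the start of main() -- a sketch, assuming
log4j 1.2 is already on the client's classpath:

import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class ClientDebugLogging {
    // equivalent in spirit to the log4j.properties above: DEBUG for everything, to the console
    public static void enable() {
        Logger root = Logger.getRootLogger();
        root.removeAllAppenders();
        root.addAppender(new ConsoleAppender(
                new PatternLayout("[%d{ISO8601}][%-5p][%-25c] %m%n")));
        root.setLevel(Level.DEBUG);
    }
}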

On Monday, December 3, 2012 8:11:35 AM UTC-5, Rich wrote:

Hi -- Yes, I know about the cluster name issue, and the name is the
same. I saw an "out of memory" error on one of the nodes, upped the memory
(Xmx) and restarted it. Cluster still shows green (requested status from
both nodes), and I can still post/retrieve to both nodes. But my code still
claims no node is available.

Doing curl -XGET http://:9210/_cluster/health on either node
gives:

{"cluster_name":"elasticsearch_rsimon","status":"green","timed_out":false,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":10,"active_shards":20,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}

My code does this:

Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", clusterName).build();

where clusterName is passed in as an argument:

"elasticsearch_rsimon", // elasticsearch cluster name

-Rich

On Friday, November 30, 2012 5:39:10 PM UTC-5, Igor Motov wrote:

It typically happens when the name of the cluster in the client
settings is incorrect. Do you have logging enabled for the client process?
Do you see anything in the log?

--

Your client code looks fine. To me it looks like some sort of connectivity
issue. Can you move your client to Ubuntu and try running it there?

On Monday, December 3, 2012 3:28:08 PM UTC-5, Rich wrote:

Thanks. I looked at the logs for both machines, searching for "publish" --
the Ubuntu VM has good entries, but the Mac VM has no entries like that.
So, I think the Mac install is faulty (even though the cluster is green,
and I can post/get from the Mac instance). I installed the Ubuntu
"manually" using instructions on the web, but I used homebrew for the Mac.
The homebrew install was trivially easy, but now I don't trust it.

--

I was thinking of that, too. I have to attend to other tasks first, so I
can't continue investigating for a bit. If I make any progress, I'll report
it here.

Thanks.

On Monday, December 3, 2012 3:41:51 PM UTC-5, Igor Motov wrote:

Your client code looks fine. To me it looks like some sort of connectivity
issue. Can you move your client to Ubuntu and try running it there?

--