Can anyone explain what is going on? Did I code the following Java
wrong, or did someone change the cabling or port configuration on me?
1. I can use the HTTP REST API from a Firefox browser and get information
about an index:
http://172.16.0.164:9200/test_index0_copy/_status?pretty=1
{
  "ok" : true,
  "_shards" : {
    "total" : 9,
    "successful" : 3,
    "failed" : 0
  },
  "indices" : {
    "test_index0_copy" : {
      "index" : {
        "primary_size" : "521.4mb",
        "primary_size_in_bytes" : 546827732,
        "size" : "521.4mb",
        "size_in_bytes" : 546827732
      },
      "translog" : {
        "operations" : 0
      },
      ...
and
http://172.16.0.164:9200/_cluster/nodes/stats?pretty=true
shows
{
  "cluster_name" : "metajure",
  "nodes" : {
    "Nfs_5wldQEi0TO9QR0a1Cg" : {
      "timestamp" : 1345072330400,
      "name" : "Zartra",
      "transport_address" : "inet[/172.16.0.164:9300]",
and

curl -XGET 'http://172.16.0.164:9200/test_index0_copy/_search?q=text:company&pretty=true'

gets a page of results.
But

2. I can NOT connect from the same machine via the Java API, using the
following code (pointed at the transport port, 9300, as shown) running
in Eclipse:
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.ImmutableSettings.Builder;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

private static TransportClient connectToElasticSearch() {
    // once we find one node in the cluster, ask it about the others (sniffing)
    Builder settingsBuilder = ImmutableSettings.settingsBuilder()
            .put("client.transport.sniff", true)
            .put("cluster.name", "metajure")
            .put("client.transport.ping_timeout", "10s");
    Settings settings = settingsBuilder.build();
    return new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress("172.16.0.164", 9300));
}
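
(For reference, what I eventually want to run through this client is the Java
equivalent of the curl search above. A rough sketch, assuming the standard
prepareSearch/queryString calls from the same client jar; the index name and
query string are just the ones from the curl line:)

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

// Sketch only: the Java-API counterpart of the curl query shown earlier.
private static void searchForCompany(Client client) {
    SearchResponse response = client.prepareSearch("test_index0_copy")
            .setQuery(QueryBuilders.queryString("text:company"))
            .execute()
            .actionGet();
    System.out.println("total hits: " + response.getHits().getTotalHits());
}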
The above code dies when it calls addTransportAddress, and I get the
trace listed below.

Does the line just above the stack trace,

"DEBUG - [Contemplator] connected to node [[#transport#-1][inet[/172.16.0.164:9300]]]",

actually mean the client reached the node, but that something got garbled
on the reply, as stated in the next line,

"INFO - [Contemplator] failed to get local cluster state for
[#transport#-1][inet[/172.16.0.164:9300]], disconnecting..."?
Any suggestions on how to track down where the problem lies would be
appreciated. There are NO extra JVMs running on the machine running the
above code.
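
The only check I have thought to run myself is to compare the elasticsearch
jar my Eclipse classpath actually loads against what the server reports over
HTTP, since the failure is a deserialization error; whether a mismatch is
really the cause is just a guess on my part. A rough sketch of that check,
using only plain JDK calls plus the TransportClient class reference:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import org.elasticsearch.client.transport.TransportClient;

// Print which elasticsearch jar the client code actually loads, and what the
// server at 9200 says about itself (its root response includes a version
// block), so the two can be compared by hand.
public static void main(String[] args) throws Exception {
    System.out.println("client jar: "
            + TransportClient.class.getProtectionDomain().getCodeSource().getLocation());
    BufferedReader in = new BufferedReader(new InputStreamReader(
            new URL("http://172.16.0.164:9200/").openStream()));
    String line;
    while ((line = in.readLine()) != null) {
        System.out.println(line);
    }
    in.close();
}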
-Paul
INFO - [Contemplator] loaded [], sites []
DEBUG - using [UnsafeChunkDecoder] decoder
DEBUG - [Contemplator] creating thread_pool [generic], type [cached], keep_alive [30s]
DEBUG - [Contemplator] creating thread_pool [index], type [cached], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [bulk], type [cached], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [get], type [cached], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [search], type [cached], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [percolate], type [cached], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [flush], type [scaling], min [1], size [10], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [refresh], type [cached], keep_alive [1m]
DEBUG - [Contemplator] creating thread_pool [cache], type [scaling], min [1], size [4], keep_alive [5m]
DEBUG - [Contemplator] creating thread_pool [snapshot], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG - [Contemplator] using worker_count[16], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/6/1]
DEBUG - [Contemplator] node_sampler_interval[5s]
DEBUG - Using the autodetected NIO constraint level: 0
DEBUG - [Contemplator] adding address [[#transport#-1][inet[/172.16.0.164:9300]]]
DEBUG - [Contemplator] connected to node [[#transport#-1][inet[/172.16.0.164:9300]]]
INFO - [Contemplator] failed to get local cluster state for [#transport#-1][inet[/172.16.0.164:9300]], disconnecting...
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:150)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:127)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:458)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:439)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 110
    at org.jboss.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:126)
    at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:178)
    at org.elasticsearch.common.io.stream.StreamInput.readUTF(StreamInput.java:207)
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readUTF(HandlesStreamInput.java:50)
    at org.elasticsearch.cluster.routing.ImmutableShardRouting.readFromThin(ImmutableShardRouting.java:200)
    at org.elasticsearch.cluster.routing.ImmutableShardRouting.readFrom(ImmutableShardRouting.java:189)
    at org.elasticsearch.cluster.routing.ImmutableShardRouting.readShardRoutingEntry(ImmutableShardRouting.java:182)
    at org.elasticsearch.cluster.routing.IndexShardRoutingTable$Builder.readFromThin(IndexShardRoutingTable.java:463)
    at org.elasticsearch.cluster.routing.IndexRoutingTable$Builder.readFrom(IndexRoutingTable.java:256)
    at org.elasticsearch.cluster.routing.RoutingTable$Builder.readFrom(RoutingTable.java:387)
    at org.elasticsearch.cluster.ClusterState$Builder.readFrom(ClusterState.java:252)
    at org.elasticsearch.action.admin.cluster.state.ClusterStateResponse.readFrom(ClusterStateResponse.java:66)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:148)
    ... 22 more