Cannot connect to elasticsearch via java api

Hi,

I'm trying to connect to a running elasticsearch instance via java api:

    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", "elasticsearch")
            .put("client.transport.sniff", true)
            .build();
    this.client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

But the connection fails:

2013-02-19 10:06:23,983 [INFO][elasticsearch[Nova-Prime][generic][T#2]][elasticsearch.client.transport] [Nova-Prime] failed to get local cluster state for [#transport#-1][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9300]][cluster/state] request_id [0] timed out after [5000ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:342)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)

In the elasticsearch log I see this exception:

[10:06:19,023][WARN ][transport.netty] [master node] exception caught on netty layer [[id: 0x8920f463, /127.0.0.1:53785 => /127.0.0.1:9300]]
org.elasticsearch.common.netty.handler.codec.frame.TooLongFrameException: transport content length received [1gb] exceeded [914.1mb]
    at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:31)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:793)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:565)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

What is going wrong?

Thanks in advance
Ulli

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

You have set sniff to true, that is, your client may not be connecting
to the address you have specified as a transport address ("localhost",
port 9300).

Note that the address the client connects to should be the host
specified in the publish_host setting of each node.

If you want to connect to a multi-node cluster, ensure you can reach
every node over the network from your transport client.

In a single-node development setup, you may not have assigned the host
name to an IP address in /etc/hosts. In that case you have some options:
use 127.0.0.1 instead of "localhost", add the host name to the line in
/etc/hosts where localhost is declared (not recommended for
hostname-based network setups and not always possible), set sniff to
false, or write extra code to determine the host name from Java and
connect to that address instead of "localhost" - usually something
involving InetAddress.getLocalHost().getHostName().
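For the last option, a minimal sketch (class and method names are mine, not from this thread) that falls back to the loopback address when the host name cannot be resolved:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class TransportHostResolver {

    // Try to resolve the local host name; if the lookup fails (e.g. the
    // host name is missing from /etc/hosts), fall back to the loopback
    // address so the TransportClient still has a usable target.
    public static String resolveTransportHost() {
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            return "127.0.0.1";
        }
    }

    public static void main(String[] args) {
        System.out.println(resolveTransportHost());
    }
}
```

The returned value can then be passed to new InetSocketTransportAddress(host, 9300) in place of "localhost".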

Jörg


Thanks for your reply. I tried it with 127.0.0.1 and sniff set to false,
but it didn't change anything. InetAddress.getLocalHost() throws an
UnknownHostException.
But actually I think the exception in the elasticsearch log indicates that
at least something is arriving on the elasticsearch side. But what does this
exception mean?


Hey, do you do any large bulk indexing by any chance?

simon


I don't do any indexing at all. I have an elasticsearch instance with one document in the index, and I'm trying to connect to this instance.


The exception means that you should add your host name to /etc/hosts.
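For reference, a single-node dev box typically maps its own host name onto the loopback line ("mybox" below is a placeholder for the machine's actual host name):

```
127.0.0.1   localhost mybox
```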

Jörg


The host name is in /etc/hosts, and I'm not talking about the
UnknownHostException but the TooLongFrameException mentioned in my original
posting.


TooLongFrameException is thrown if you connect with a client of another
ES version to an ES cluster (protocol mismatch; in recent ES versions
this is cleaned up), or you confused port 9300 with 9200 (so HTTP
requests are effectively rejected by the transport protocol layer), or
you do indeed request to transport too large data chunks via Netty, so
that ES refuses to accept them (the default maximum is 1/4 of the heap
size, IIRC).
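As an illustration of the port-confusion case (a sketch of mine, not code from this thread): a size-header decoder reads the first bytes of a connection as a big-endian frame length, so the opening bytes of an HTTP request line such as "GET " decode to roughly 1.1 GB - the same kind of bogus gigabyte-scale content length the TooLongFrameException reports.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BogusFrameLength {
    public static void main(String[] args) {
        // Interpret the first four bytes of an HTTP request line as a
        // big-endian 32-bit frame length, the way a size-header frame
        // decoder on the transport port would.
        byte[] firstBytes = "GET ".getBytes(StandardCharsets.US_ASCII);
        int bogusLength = ByteBuffer.wrap(firstBytes).getInt();
        System.out.println(bogusLength); // 1195725856 bytes, about 1.1gb
    }
}
```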

Jörg

