We're seeing a lot of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. It's triggered by a query of
/index/_analyze?text=some_text&analyzer=hashingFulltext. The weird
part is that on initial cluster creation this error isn't triggered,
and in fact our staging env isn't throwing it either, but there are
only a handful of indexes there.
Ideas?
[2012-03-04 20:02:46,561][DEBUG][action.admin.indices.analyze] [prod-es-r01] failed to execute [org.elasticsearch.action.admin.indices.analyze.AnalyzeRequest@10d72878]
org.elasticsearch.transport.RemoteTransportException: [prod-es-r04][inet[/10.180.35.110:9300]][indices/analyze/s]
Caused by: java.io.IOException: Expected handle header, got [111]
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readUTF(HandlesStreamInput.java:63)
    at org.elasticsearch.action.admin.indices.analyze.AnalyzeRequest.readFrom(AnalyzeRequest.java:150)
    at org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction$ShardSingleOperationRequest.readFrom(TransportSingleCustomOperationAction.java:359)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:313)
    at org.elasticsearch.transport.netty.MessageChannelHandler.process(MessageChannelHandler.java:217)
    at org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:141)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:95)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:777)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:343)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:274)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:194)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Just to clarify -- the hashingFulltext analyzer is a custom one we
built, but we see this even when performing _analyze requests with the
standard analyzer, etc.
We also don't see it when making _analyze requests to the root
/_analyze endpoint, only when making requests to _analyze on a
specific index.
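To make that concrete, here are the two forms side by side (hostname
and index name are placeholders, and I'm using the standard analyzer
to show it's not specific to ours):

    curl 'http://localhost:9200/_analyze?text=some_text&analyzer=standard'
    # root endpoint: returns tokens fine

    curl 'http://localhost:9200/some_index/_analyze?text=some_text&analyzer=standard'
    # index-level endpoint: fails with the RemoteTransportException above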
It looks like you have mixed different versions of ES, I think.
clint
Turns out that was the case, although the cluster itself was all 0.19.
Because we're using unicast discovery and running logstash on these
hosts, there was some cross-connection going on that shouldn't have
happened, due to port overlap.
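One way to avoid that kind of cross-connection is to pin the transport
port and the unicast host list explicitly. A minimal elasticsearch.yml
sketch (0.19-era setting names; the cluster name and second address
are illustrative, the first address is from the log above):

    cluster.name: prod-es                    # anything but the default, so stray nodes can't join
    transport.tcp.port: 9300                 # pin a single port rather than taking the range
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["10.180.35.110:9300", "10.180.35.111:9300"]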
It does seem to me, though, that ES should fail to start if you're
specifically defining transport.port: 9300 and something else is
already listening on that address. Instead, it selected the next
available port.
Yes, that's how it works by default: if port 9300 is busy, it will try the next one. This makes it easy to start several nodes on the same machine, or to have a client and a server running on the same machine.
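In other words, the port settings are ranges by default, and a node
binds the first free port in its range; a sketch of the defaults as I
understand them for the 0.19 era:

    # defaults (illustrative): try the first port, then the next one up the range
    transport.tcp.port: 9300-9400
    http.port: 9200-9300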
On Monday, March 5, 2012 at 2:16 PM, Grant wrote:
It does seem to me, though, that ES should fail to start if you're
specifically defining transport.port: 9300 and something else is
already listening on that address. Instead, it selected the next
available port.
Hint: to avoid the clutter of several ES versions on the same server
hardware, I always use a cluster name with the version in the setup,
e.g. "elasticsearch-0.18.5", instead of the default "elasticsearch".