ES 0.19.0 errors


(Grant) #1

We're seeing all kinds of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. Any ideas? It's triggered by a query of
/index/_analyze?text=some_text&analyzer=hashingFulltext. The weird part
is that on initial cluster creation, this error isn't triggered. And in
fact our staging env isn't throwing this either, but there are only a
handful of indexes there.

Ideas?

[2012-03-04 20:02:46,561][DEBUG][action.admin.indices.analyze] [prod-es-r01] failed to execute [org.elasticsearch.action.admin.indices.analyze.AnalyzeRequest@10d72878]
org.elasticsearch.transport.RemoteTransportException: [prod-es-r04][inet[/10.180.35.110:9300]][indices/analyze/s]
Caused by: java.io.IOException: Expected handle header, got [111]
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readUTF(HandlesStreamInput.java:63)
    at org.elasticsearch.action.admin.indices.analyze.AnalyzeRequest.readFrom(AnalyzeRequest.java:150)
    at org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction$ShardSingleOperationRequest.readFrom(TransportSingleCustomOperationAction.java:359)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:313)
    at org.elasticsearch.transport.netty.MessageChannelHandler.process(MessageChannelHandler.java:217)
    at org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:141)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:95)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:777)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:343)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:274)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:194)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)


(Grant) #2

Also a lot of this:

[2012-03-04 16:02:30,936][WARN ][transport.netty] [prod-es-r01] Message not fully read (request) for [3435365] and action [indices/analyze/s], resetting
[2012-03-04 16:02:30,943][WARN ][transport.netty] [prod-es-r01] Message not fully read (request) for [4273675] and action [indices/analyze/s], resetting
[2012-03-04 16:02:30,949][WARN ][transport.netty] [prod-es-r01] Message not fully read (request) for [3380164] and action [indices/analyze/s], resetting
[2012-03-04 16:02:30,971][WARN ][transport.netty] [prod-es-r01] Message not fully read (request) for [3526609] and action [indices/analyze/s], resetting
[2012-03-04 16:02:30,983][WARN ][transport.netty] [prod-es-r01] Message not fully read (request) for [2030640] and action [indices/analyze/s], resetting
[2012-03-04 16:02:31,002][WARN ][transport.netty] [prod-es-r01] Message not fully read (request) for [3435372] and action [indices/analyze/s], resetting
[2012-03-04 16:02:31,008][WARN ][transport.netty] [prod-es-r01] Message not fully read (request) for [4273682] and action [indices/analyze/s], resetting


(Matthew A. Brown) #3

Just to clarify -- the hashingFulltext analyzer is a custom one we
built, but we see this even when performing _analyze requests with the
standard analyzer, etc.

We also don't see this when making _analyze requests to the root
/_analyze endpoint, only when making requests to _analyze on a
specific index.

On Mar 4, 4:31 pm, Grant gr...@brewster.com wrote:

We're seeing all kinds of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. Any ideas? It's triggered with a query of /index/_analyze?
text=some_text&analyzer=hashingFulltext. The weird part is that on
initial cluster creation, this error isn't triggered. And in fact our
staging env isn't throwing this either, but there are only a handful
of indexes there.

Ideas?



(Clinton Gormley) #4

On Sun, 2012-03-04 at 13:31 -0800, Grant wrote:

We're seeing all kinds of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. Any ideas? It's triggered with a query of /index/_analyze?
text=some_text&analyzer=hashingFulltext. The weird part is that on
initial cluster creation, this error isn't triggered. And in fact our
staging env isn't throwing this either, but there are only a handful
of indexes there.

Ideas?

Looks like you have mixed different versions of ES, I think
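
A toy illustration of why mixed versions produce "Expected handle
header" style errors -- the reader checks a marker byte at a fixed
offset, and a writer with a different wire layout puts other data
there. The layout below is invented for illustration; it is not ES's
actual protocol:

```python
import io
import struct

MARKER = 0x00  # "v1" layout: serialized strings start with a marker byte

def write_v1(text):
    # marker byte, big-endian length, then the UTF-8 bytes
    data = text.encode("utf-8")
    return struct.pack(">B", MARKER) + struct.pack(">H", len(data)) + data

def write_v2(text):
    # "v2" prepends a new flags byte (here 0x6F, i.e. decimal 111);
    # an old reader now sees it where the marker used to be
    data = text.encode("utf-8")
    return struct.pack(">BB", 0x6F, MARKER) + struct.pack(">H", len(data)) + data

def read_v1(payload):
    buf = io.BytesIO(payload)
    marker = buf.read(1)[0]
    if marker != MARKER:
        raise IOError("Expected handle header, got [%d]" % marker)
    (length,) = struct.unpack(">H", buf.read(2))
    return buf.read(length).decode("utf-8")

assert read_v1(write_v1("some_text")) == "some_text"  # same version: fine
try:
    read_v1(write_v2("some_text"))  # version mismatch on the wire
except IOError as err:
    print(err)  # prints: Expected handle header, got [111]
```

Here 0x6F (decimal 111) stands in for whatever byte the other node's
writer happened to put where this reader expected its handle marker.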

clint



(Grant) #5

Turns out that was the case, although the cluster itself was all 0.19.
Because we're using unicast and running logstash on these hosts, there
was some cross-connection going on that shouldn't have happened, due
to port overlap.
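
For reference, the relevant settings live in elasticsearch.yml; a
sketch of the 0.19-era knobs (cluster name and host list here are
hypothetical, not our actual config):

```yaml
# elasticsearch.yml -- sketch only; names and hosts are hypothetical
cluster.name: prod-es
transport.tcp.port: 9300
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.180.35.110:9300", "10.180.35.111:9300"]
```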

On Mar 5, 4:47 am, Clinton Gormley cl...@traveljury.com wrote:

On Sun, 2012-03-04 at 13:31 -0800, Grant wrote:

We're seeing all kinds of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. Any ideas? It's triggered with a query of /index/_analyze?
text=some_text&analyzer=hashingFulltext. The weird part is that on
initial cluster creation, this error isn't triggered. And in fact our
staging env isn't throwing this either, but there are only a handful
of indexes there.

Ideas?

Looks like you have mixed different versions of ES, I think

clint



(Grant) #6

It does seem to me, though, that ES should fail to start if you're
explicitly setting transport.port: 9300 but something else is already
listening on that address. Instead it selected the next available
port.

On Mar 5, 4:47 am, Clinton Gormley cl...@traveljury.com wrote:

On Sun, 2012-03-04 at 13:31 -0800, Grant wrote:

We're seeing all kinds of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. Any ideas? It's triggered with a query of /index/_analyze?
text=some_text&analyzer=hashingFulltext. The weird part is that on
initial cluster creation, this error isn't triggered. And in fact our
staging env isn't throwing this either, but there are only a handful
of indexes there.

Ideas?

Looks like you have mixed different versions of ES, I think

clint



(Shay Banon) #7

Yes, that's how it works by default: if port 9300 is busy, it will try the next one. This makes it easy to start several nodes on the same machine, or to have a client and a server running on the same machine.
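
The fallback is easy to model with plain sockets; a toy sketch in
Python (not ES code; the port number is arbitrary):

```python
import socket

def bind_with_fallback(start_port, attempts=100):
    """Bind a listening socket to start_port, or to the next free port
    after it -- a toy model of the transport module's default
    behaviour when the requested port is already taken."""
    for port in range(start_port, start_port + attempts):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind(("127.0.0.1", port))
            sock.listen(1)
            return sock, port
        except OSError:
            sock.close()
    raise OSError("no free port in [%d, %d)" % (start_port, start_port + attempts))

# Simulate two processes: the second asks for the same port and
# silently ends up on a higher one instead of failing.
first, port_a = bind_with_fallback(49300)    # 49300: arbitrary test port
second, port_b = bind_with_fallback(port_a)  # port_a is busy -> falls back
assert port_b > port_a
first.close()
second.close()
```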

On Monday, March 5, 2012 at 2:16 PM, Grant wrote:

It does seem to me though that ES should fail to start if you're
specifically defining transport.port: 9300, but something else is
listening on that address already. Instead it selected the next
available port.

On Mar 5, 4:47 am, Clinton Gormley <cl...@traveljury.com (http://traveljury.com)> wrote:

On Sun, 2012-03-04 at 13:31 -0800, Grant wrote:

We're seeing all kinds of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. Any ideas? It's triggered with a query of /index/_analyze?
text=some_text&analyzer=hashingFulltext. The weird part is that on
initial cluster creation, this error isn't triggered. And in fact our
staging env isn't throwing this either, but there are only a handful
of indexes there.

Ideas?

Looks like you have mixed different versions of ES, I think

clint



(Jörg Prante) #8

Hint: to avoid the clutter of several ES versions on the same server
hardware, I always use a cluster name with the version in it,
e.g. "elasticsearch-0.18.5", instead of the default "elasticsearch".

http://www.elasticsearch.org/guide/reference/modules/discovery/index.html
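
In elasticsearch.yml terms, the hint is just one line:

```yaml
# Version-stamped cluster name: nodes configured with a different
# cluster name will never join this cluster, even on the same
# hosts and ports.
cluster.name: elasticsearch-0.18.5
```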

Jörg

On Mar 5, 3:53 pm, Shay Banon kim...@gmail.com wrote:

Yes, thats how it works by default, if port 9300 is busy, it will try the next one. This allows to easily start several nodes on the same machine, or have a client and a server running on the same machine.

On Monday, March 5, 2012 at 2:16 PM, Grant wrote:

It does seem to me though that ES should fail to start if you're
specifically defining transport.port: 9300, but something else is
listening on that address already. Instead it selected the next
available port.

On Mar 5, 4:47 am, Clinton Gormley <cl...@traveljury.com (http://traveljury.com)> wrote:

On Sun, 2012-03-04 at 13:31 -0800, Grant wrote:

We're seeing all kinds of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. Any ideas? It's triggered with a query of /index/_analyze?
text=some_text&analyzer=hashingFulltext. The weird part is that on
initial cluster creation, this error isn't triggered. And in fact our
staging env isn't throwing this either, but there are only a handful
of indexes there.

Ideas?

Looks like you have mixed different versions of ES, I think

clint



(Ivan Brusic) #9

That's a great suggestion. We need a wiki/cookbook to catalog all
these handy little solutions.

--
Ivan

On Mon, Mar 5, 2012 at 7:04 AM, jprante joergprante@gmail.com wrote:

Hint: to avoid the clutter of several ES versions on the same server
hardware, I use always a cluster name with the version in the setup,
e.g. "elasticsearch-0.18.5", instead of default "elasticsearch".

http://www.elasticsearch.org/guide/reference/modules/discovery/index.html

Jörg

On Mar 5, 3:53 pm, Shay Banon kim...@gmail.com wrote:

Yes, thats how it works by default, if port 9300 is busy, it will try the next one. This allows to easily start several nodes on the same machine, or have a client and a server running on the same machine.

On Monday, March 5, 2012 at 2:16 PM, Grant wrote:

It does seem to me though that ES should fail to start if you're
specifically defining transport.port: 9300, but something else is
listening on that address already. Instead it selected the next
available port.

On Mar 5, 4:47 am, Clinton Gormley <cl...@traveljury.com (http://traveljury.com)> wrote:

On Sun, 2012-03-04 at 13:31 -0800, Grant wrote:

We're seeing all kinds of these errors. Initially I thought I'd missed
upgrading a node in the cluster, but that turns out not to have been
the case. Any ideas? It's triggered with a query of /index/_analyze?
text=some_text&analyzer=hashingFulltext. The weird part is that on
initial cluster creation, this error isn't triggered. And in fact our
staging env isn't throwing this either, but there are only a handful
of indexes there.

Ideas?

Looks like you have mixed different versions of ES, I think

clint


