Cluster stuck at yellow state after upgrade to 0.19.7


(Eran Kutner-2) #1

Hi,
I upgraded from 0.19.3 to 0.19.7 hoping to resolve
https://github.com/elasticsearch/elasticsearch/issues/2042, but now some of
my nodes seem unable to communicate properly with the rest of the nodes. The
log shows:

[2012-07-01 08:51:06,113][WARN ][transport.netty ] [es1-aws-01] Exception caught on netty layer [[id: 0x6876fb1b, /10.2.101.151:52136 => /10.1.101.153:9300]]
org.elasticsearch.ElasticSearchIllegalStateException: stream marked as compressed, but no compressor found
    at org.elasticsearch.transport.netty.MessageChannelHandler.process(MessageChannelHandler.java:225)
    at org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:154)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:103)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)

and:
[2012-07-01 08:56:14,325][DEBUG][action.admin.cluster.node.stats] [es1-aws-01] failed to execute on node [EyOnnBFgTqel62LEKa2PlA]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.stats.NodeStats]
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:272)
    at org.elasticsearch.transport.netty.MessageChannelHandler.process(MessageChannelHandler.java:249)
    at org.elasticsearch.transport.netty.MessageChannelHandler.callDecode(MessageChannelHandler.java:154)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:103)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:563)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
Caused by: java.io.EOFException
    at org.elasticsearch.common.compress.CompressedStreamInput.readByte(CompressedStreamInput.java:81)
    at org.elasticsearch.common.io.stream.AdapterStreamInput.readByte(AdapterStreamInput.java:26)
    at org.elasticsearch.common.io.stream.StreamInput.readBoolean(StreamInput.java:201)
    at org.elasticsearch.action.admin.cluster.node.stats.NodeStats.readFrom(NodeStats.java:283)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:270)
    ... 17 more

The nodes that successfully talk to each other are of one kind (CentOS 6 on
physical hardware), while those that don't are of another kind (Amazon
CentOS 6 image on AWS). I'm guessing something that changed between 0.19.3
and 0.19.7 is missing from the environment on those nodes. Any idea what?
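
A quick way to rule out a mixed-version cluster and to see which shard
counters are keeping the cluster yellow is to query the REST API on each
node. Here is a minimal sketch in Python; the addresses are just the two
from the log above and the response fields are the usual 0.19-era ones, so
adjust both for your own setup:

from urllib.request import urlopen
import json

# Node addresses taken from the log above -- substitute your own.
nodes = ["10.1.101.153", "10.2.101.151"]

# Each node reports its own version on the root endpoint; a mixed-version
# cluster is the first thing to rule out after a rolling upgrade.
for host in nodes:
    info = json.load(urlopen("http://%s:9200/" % host))
    print(host, info["version"]["number"])

# Cluster health shows which shard counters are keeping the cluster yellow.
health = json.load(urlopen("http://%s:9200/_cluster/health" % nodes[0]))
print(health["status"],
      health["initializing_shards"],
      health["relocating_shards"],
      health["unassigned_shards"])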

Thanks,
Eran


(Nitish Sharma) #2

I am also seeing a similar set of errors after upgrading from 0.19.2 to
0.19.7.
Any suggestions on how to fix it?



(Shay Banon) #3

With the help of ns509 on IRC, I managed to track this down and opened an
issue: https://github.com/elasticsearch/elasticsearch/issues/2076. It will
be fixed in 0.19.8, which will be released tomorrow.
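
Until 0.19.8 is out, one possible stopgap (this is only a guess, not
something confirmed in the thread) is to turn off transport-level
compression on every node, assuming your configs enable it, so the
incompatible compressed framing is never sent between mixed versions. The
setting is static, so each node needs a restart:

# elasticsearch.yml on every node (static setting, restart required).
# Guessed stopgap only; the thread does not confirm it avoids the
# "no compressor found" mismatch.
transport.tcp.compress: false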



(Eran Kutner-2) #4

I tried building from the 0.19.8 sources. It did make the exception go away,
but the cluster remained stuck with many shards in
initializing/relocating/unassigned state.
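
To see exactly which shard copies are stuck and where they are assigned,
the cluster state routing table can be dumped. A minimal sketch, assuming
the 0.19-era response layout (routing_table -> indices -> shards):

from urllib.request import urlopen
import json

# Print every shard copy that is not STARTED, with the node it is assigned to.
state = json.load(urlopen("http://localhost:9200/_cluster/state"))
for index, table in state["routing_table"]["indices"].items():
    for shard_id, copies in table["shards"].items():
        for copy in copies:
            if copy["state"] != "STARTED":
                kind = "primary" if copy["primary"] else "replica"
                print(index, shard_id, kind, copy["state"], copy.get("node"))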



(system) #5