How do I avoid having to sit through the following every time I restart ES? My log file fills up with errors like these, and it takes ES about 15 minutes to grind through them:
[2013-07-11 16:19:26,085][WARN ][indices.cluster     ] [Strange, Victor] [alarms][2] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: [alarms][2]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]
    at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:293)
    at org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:62)
    at org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:163)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
Caused by: org.elasticsearch.transport.RemoteTransportException: [Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: [alarms][2] Phase[2] Execution failed
    at org.elasticsearch.index.engine.robin.RobinEngine.recover(RobinEngine.java:1147)
    at org.elasticsearch.index.shard.service.InternalIndexShard.recover(InternalIndexShard.java:526)
    at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:116)
    at org.elasticsearch.indices.recovery.RecoverySource.access$1600(RecoverySource.java:60)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:328)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:314)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:265)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:168)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.StreamCorruptedException: unexpected end of block data
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1369)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:166)
    ... 23 more
[2013-07-11 16:19:26,085][DEBUG][index.shard.service ] [Strange, Victor] [alarms][0] state: [CREATED]->[RECOVERING], reason [from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]]
[2013-07-11 16:19:26,085][DEBUG][indices.cluster     ] [Strange, Victor] [alarms][2] removing shard (not allocated)
[2013-07-11 16:19:26,086][DEBUG][index.shard.service ] [Strange, Victor] [alarms][2] state: [RECOVERING]->[CLOSED], reason [removing shard (not allocated)]
[2013-07-11 16:19:26,086][DEBUG][indices.memory      ] [Strange, Victor] recalculating shard indexing buffer (reason=removed_shard[alarms][2]), total is [815.4mb] with [2] active shards, each shard set to [407.7mb]
[2013-07-11 16:19:26,087][DEBUG][index.engine.robin  ] [Strange, Victor] [alarms][0] updating index_buffer_size from [64mb] to [407.7mb]
[2013-07-11 16:19:26,087][WARN ][cluster.action.shard] [Strange, Victor] sending failed shard for [alarms][2], node[LdAs9OtOSQ2IOYD_uNp4wA], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[[alarms][2]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][2] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-07-11 16:19:26,087][DEBUG][cluster.service     ] [Strange, Victor] processing [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]: done applying updated cluster_state
[2013-07-11 16:19:26,087][DEBUG][cluster.service     ] [Strange, Victor] processing [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]: execute
[2013-07-11 16:19:26,087][DEBUG][cluster.service     ] [Strange, Victor] cluster state updated, version [1958233], source [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]
[2013-07-11 16:19:26,088][DEBUG][index.shard.service ] [Strange, Victor] [alarms][1] state: [RECOVERING]->[CLOSED], reason [recovery failure [RecoveryFailedException[[alarms][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][1] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-07-11 16:19:26,088][DEBUG][indices.memory      ] [Strange, Victor] recalculating shard indexing buffer (reason=removed_shard[alarms][1]), total is [815.4mb] with [1] active shards, each shard set to [512mb]
[2013-07-11 16:19:26,088][DEBUG][index.engine.robin  ] [Strange, Victor] [alarms][0] updating index_buffer_size from [407.7mb] to [512mb]
[2013-07-11 16:19:26,088][WARN ][cluster.action.shard] [Strange, Victor] sending failed shard for [alarms][1], node[LdAs9OtOSQ2IOYD_uNp4wA], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[[alarms][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][1] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: T
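
Two notes on the trace above, for anyone hitting the same thing. First, the "Failed to deserialize exception response from stream" / StreamCorruptedException pair, together with the local frames ending at Thread.java:679 and the remote node's at Thread.java:722, suggests the two nodes are running different JVM versions; the ObjectInputStream frames show ES shipping the remote exception over the wire with Java serialization, which is fragile across mismatched JVMs. Second, for a planned restart you can usually avoid most of the recovery churn by switching off shard allocation before stopping the node and switching it back on once the node rejoins. A minimal sketch using the cluster settings API, assuming a 0.90-era cluster reachable on localhost:9200 (later releases renamed the setting to cluster.routing.allocation.enable):

import json
import urllib.request

ES = "http://localhost:9200"  # assumption: any node in the cluster answers here

def set_transient(key, value):
    """PUT a transient cluster setting (reverts on full-cluster restart)."""
    body = json.dumps({"transient": {key: value}}).encode("utf-8")
    req = urllib.request.Request(
        ES + "/_cluster/settings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Before stopping the node: tell the master not to shuffle shards around
# while the node is away. (Setting name is for 0.90-era ES; an assumption
# that your cluster is on that line.)
set_transient("cluster.routing.allocation.disable_allocation", True)

# ... restart the node here and wait for it to rejoin ...

# Afterwards: re-enable allocation so recovery can finish normally.
set_transient("cluster.routing.allocation.disable_allocation", False)

With allocation disabled, the master leaves the restarted node's shard copies in place instead of immediately trying to rebuild replicas elsewhere, which is most of the log flood you see here.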
On Thursday, July 11, 2013 12:46:56 AM UTC-6, james armstrong wrote:
I stop the Elasticsearch server, then I delete the nodes directory, then start the server back up. I get reams of logging and exceptions. What's happening? I thought deleting ./nodes deleted all the data?
James
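
On the quoted question: deleting ./nodes does wipe that node's local data, but the rest of the cluster still knows about the shards, so on startup the node has to recover everything from its peers over the wire, and that peer recovery is exactly what is failing above. If the goal is to drop the data entirely, doing it through the API keeps the on-disk state and the cluster state in sync. A rough sketch, assuming the index is named alarms as in the log:

import urllib.request

ES = "http://localhost:9200"  # assumption: any node in the cluster

# Delete the index through the API rather than removing ./nodes on disk,
# so every node and the cluster state agree the data is gone.
req = urllib.request.Request(ES + "/alarms", method="DELETE")
print(urllib.request.urlopen(req).read())

# Optionally block until the cluster settles back to green.
print(urllib.request.urlopen(
    ES + "/_cluster/health?wait_for_status=green&timeout=60s"
).read())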