Re: what's happening when I delete elasticsearch/data/elasticsearch/nodes

I stop the Elasticsearch server, then I delete the nodes directory, then start the
server back up. I get reams of logging and exceptions. What's happening? I
thought when I deleted ./nodes, that deleted all data?

James

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

I just changed to 1 shard and 0 replicas for development; could this have
anything to do with it?
James
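One way to double-check what the index actually ended up with (the index name `alarms` is taken from the log output later in this thread; the host and port are assumptions for a default local node):

```shell
# Inspect the live settings of the index to confirm the shard/replica counts.
ES=localhost:9200
INDEX=alarms
curl "http://$ES/$INDEX/_settings?pretty"
# Look for index.number_of_shards and index.number_of_replicas in the response.
```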


What is the best way to remove all data from an index? If I delete the
index, this doesn't seem to remove the data.


How do I avoid having to go through the following every time I restart ES?

My log file fills up with this, and it takes ES 15 minutes to churn through
it all.

[2013-07-11 16:19:26,085][WARN ][indices.cluster ] [Strange, Victor] [alarms][2] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: [alarms][2]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]
    at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:293)
    at org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:62)
    at org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:163)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
Caused by: org.elasticsearch.transport.RemoteTransportException: [Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: [alarms][2] Phase[2] Execution failed
    at org.elasticsearch.index.engine.robin.RobinEngine.recover(RobinEngine.java:1147)
    at org.elasticsearch.index.shard.service.InternalIndexShard.recover(InternalIndexShard.java:526)
    at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:116)
    at org.elasticsearch.indices.recovery.RecoverySource.access$1600(RecoverySource.java:60)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:328)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:314)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:265)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:168)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.StreamCorruptedException: unexpected end of block data
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1369)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:166)
    ... 23 more
[2013-07-11 16:19:26,085][DEBUG][index.shard.service ] [Strange, Victor] [alarms][0] state: [CREATED]->[RECOVERING], reason [from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]]
[2013-07-11 16:19:26,085][DEBUG][indices.cluster ] [Strange, Victor] [alarms][2] removing shard (not allocated)
[2013-07-11 16:19:26,086][DEBUG][index.shard.service ] [Strange, Victor] [alarms][2] state: [RECOVERING]->[CLOSED], reason [removing shard (not allocated)]
[2013-07-11 16:19:26,086][DEBUG][indices.memory ] [Strange, Victor] recalculating shard indexing buffer (reason=removed_shard[alarms][2]), total is [815.4mb] with [2] active shards, each shard set to [407.7mb]
[2013-07-11 16:19:26,087][DEBUG][index.engine.robin ] [Strange, Victor] [alarms][0] updating index_buffer_size from [64mb] to [407.7mb]
[2013-07-11 16:19:26,087][WARN ][cluster.action.shard ] [Strange, Victor] sending failed shard for [alarms][2], node[LdAs9OtOSQ2IOYD_uNp4wA], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[[alarms][2]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][2] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] processing [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]: done applying updated cluster_state
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] processing [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]: execute
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] cluster state updated, version [1958233], source [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]
[2013-07-11 16:19:26,088][DEBUG][index.shard.service ] [Strange, Victor] [alarms][1] state: [RECOVERING]->[CLOSED], reason [recovery failure [RecoveryFailedException[[alarms][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][1] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-07-11 16:19:26,088][DEBUG][indices.memory ] [Strange, Victor] recalculating shard indexing buffer (reason=removed_shard[alarms][1]), total is [815.4mb] with [1] active shards, each shard set to [512mb]
[2013-07-11 16:19:26,088][DEBUG][index.engine.robin ] [Strange, Victor] [alarms][0] updating index_buffer_size from [407.7mb] to [512mb]
[2013-07-11 16:19:26,088][WARN ][cluster.action.shard ] [Strange, Victor] sending failed shard for [alarms][1], node[LdAs9OtOSQ2IOYD_uNp4wA], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[[alarms][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][1] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: T

curl -XDELETE localhost:9200/yourindex

It does remove your index (essentially an rm -rf behind the scenes).
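To make that concrete, here is a minimal sketch of deleting an index and recreating it with the development settings mentioned earlier in the thread (1 shard, 0 replicas). The index name `alarms` is taken from the log output; the host and port are assumptions for a default local node:

```shell
# Delete the index entirely (mappings and data), then recreate it with
# explicit development settings. ES and INDEX are assumptions for your setup.
ES=localhost:9200
INDEX=alarms

curl -XDELETE "http://$ES/$INDEX"

curl -XPUT "http://$ES/$INDEX" -d '{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'
```

Both calls should come back with an acknowledged response; anything else usually means the index name or endpoint is wrong.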

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs


Are you mixing ES versions or JVM versions?

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs


OK, I removed my index and then PUT the mapping for the opsauto type to
that index. I got "acknowledged=true" on both, but my log file is still going
nuts. What is going on?
James
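For context, the two calls described above look roughly like this on a 0.90-era node (the index name comes from the logs; the `message` field in the mapping body is made up for illustration):

```shell
ES=localhost:9200

# Recreate the index, then PUT a mapping for the "opsauto" type.
# The field below is a hypothetical placeholder, not the actual mapping.
curl -XPUT "http://$ES/alarms"

curl -XPUT "http://$ES/alarms/opsauto/_mapping" -d '{
  "opsauto": {
    "properties": {
      "message": { "type": "string" }
    }
  }
}'
```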


I don't have any client code; the client providing the JSON docs is
insulated behind REST calls. My process: an ES client (Java, remote) uses REST
to POST docs to the ES repository. I went ahead and ensured that the ES client
and the ES engine are using the same JVM. My issue is: when I kill the engine,
cd to $ES_HOME/elasticsearch/data/elasticsearch/, run rm -rf ./nodes, and then
restart the engine... where is ES getting the docs from? Is it storing them in
a directory other than ./nodes?

I also DELETE the index and PUT the index mapping and still get the errors in
my logs. Why? If it's a JVM version mismatch, where is the mismatch?
James
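One thing worth noting: the recovery lines in the log show shards being recovered from [Aardwolf] into [Strange, Victor], i.e. this node appears to be joined to a cluster with at least one other node. If so, wiping ./nodes on one machine removes only that node's local copy, and the data flows back from the peer on restart. A quick way to check, using the cluster health API as it existed around 0.90 (host is an assumption):

```shell
# Ask the cluster how many nodes it has; if "number_of_nodes" is > 1,
# another node still holds copies of the data you deleted locally.
ES=localhost:9200
HEALTH_URL="http://$ES/_cluster/health?pretty"
curl -s "$HEALTH_URL"
```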



It's defined by the path.data property.
It basically depends on how you set it up: did you use the zip, tar.gz, or deb/rpm package?

You can change the log level to DEBUG to trace this when the node starts.
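For reference, with path.data unset the zip/tar.gz layout keeps data under $ES_HOME/data, one subtree per cluster name. A sketch of where the nodes directory should live (ES_HOME and the default cluster name `elasticsearch` are assumptions; adjust for your install):

```shell
# Compute the default per-node data directory for a zip/tar.gz install.
ES_HOME=/opt/elasticsearch    # assumption: wherever the archive was unpacked
CLUSTER_NAME=elasticsearch    # default cluster.name if not overridden
NODE_DATA="$ES_HOME/data/$CLUSTER_NAME/nodes/0"
echo "$NODE_DATA"             # prints /opt/elasticsearch/data/elasticsearch/nodes/0
```

The deb/rpm packages place data elsewhere (typically under /var/lib), which is why the question about packaging matters.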

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


path.data is commented out in my elasticsearch.yml file. Is there a
default? Is it $ES_HOME/data/elasticsearch? If so, I rm -rf'd that directory,
but I still get reams of exceptions in the log file. Frustrating. Where is
that data coming from?


I used elasticsearch-0.20.6.tar to install.
James
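(With a tar.gz install and path.data left commented out, data defaults to a tree under the extraction directory: $ES_HOME/data/<cluster name>/nodes/<ordinal>. A sketch of that layout; the extraction path below is an assumption for illustration:)

```shell
# Sketch: default data layout of a tar.gz install when path.data is unset.
ES_HOME=./elasticsearch-0.20.6                    # illustrative extraction dir
mkdir -p "$ES_HOME/data/elasticsearch/nodes/0"    # what a running node creates
# inspect the tree a node leaves behind
find "$ES_HOME/data" -type d | sort
```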


Here is what I get when I set the log level to debug in logging.yml:

[2013-07-11 19:05:52,375][INFO ][node ] [Agon] {0.20.6}[10081]: initializing ...
[2013-07-11 19:05:52,379][INFO ][plugins ] [Agon] loaded [], sites []
[2013-07-11 19:05:54,412][INFO ][node ] [Agon] {0.20.6}[10081]: initialized
[2013-07-11 19:05:54,412][INFO ][node ] [Agon] {0.20.6}[10081]: starting ...
[2013-07-11 19:05:54,525][INFO ][transport ] [Agon] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.5.165.139:9300]}
[2013-07-11 19:05:57,725][INFO ][cluster.service ] [Agon] detected_master [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]], added {[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]],}, reason: zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])
[2013-07-11 19:05:57,831][INFO ][discovery ] [Agon] elasticsearch/NyI01aHQTY6ZKIMKV9vXHQ
[2013-07-11 19:05:57,860][INFO ][http ] [Agon] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.5.165.139:9200]}
[2013-07-11 19:05:57,860][INFO ][node ] [Agon] {0.20.6}[10081]: started
[2013-07-11 19:06:01,482][WARN ][indices.cluster ] [Agon] [annsae1.html][1] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: [annsae1.html][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Agon][NyI01aHQTY6ZKIMKV9vXHQ][inet[/10.5.165.139:9300]]
    at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:293)
    at org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:62)
    at org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:163)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
Caused by: org.elasticsearch.transport.RemoteTransportException: [Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: [annsae1.html][1] Phase[2] Execution failed
    at org.elasticsearch.index.engine.robin.RobinEngine.recover(RobinEngine.java:1147)
    at org.elasticsearch.index.shard.service.InternalIndexShard.recover(InternalIndexShard.java:526)
    at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:116)
    at org.elasticsearch.indices.recovery.RecoverySource.access$1600(RecoverySource.java:60)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:328)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:314)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:265)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:168)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.han

On Thursday, July 11, 2013 10:23:11 AM UTC-6, David Pilato wrote:

Are you mixing ES versions or JVM versions?

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr https://twitter.com/elasticsearchfr | @scrutmydocs https://twitter.com/scrutmydocs

On 11 July 2013 at 18:22, james armstrong <james....@gmail.com> wrote:

How do I NOT have to go through the following if I restart ES:

My log file gets filled with this and it takes ES 15 minutes to wind out
this crap.

[2013-07-11 16:19:26,085][WARN ][indices.cluster ] [Strange, Victor] [alarms][2] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: [alarms][2]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]
    at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:293)
    at org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:62)
    at org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:163)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
Caused by: org.elasticsearch.transport.RemoteTransportException: [Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: [alarms][2] Phase[2] Execution failed
    at org.elasticsearch.index.engine.robin.RobinEngine.recover(RobinEngine.java:1147)
    at org.elasticsearch.index.shard.service.InternalIndexShard.recover(InternalIndexShard.java:526)
    at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:116)
    at org.elasticsearch.indices.recovery.RecoverySource.access$1600(RecoverySource.java:60)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:328)
    at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:314)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:265)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:168)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.StreamCorruptedException: unexpected end of block data
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1369)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
    at java.lang.Throwable.readObject(Throwable.java:913)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:166)
    ... 23 more
[2013-07-11 16:19:26,085][DEBUG][index.shard.service ] [Strange, Victor] [alarms][0] state: [CREATED]->[RECOVERING], reason [from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]]
[2013-07-11 16:19:26,085][DEBUG][indices.cluster ] [Strange, Victor] [alarms][2] removing shard (not allocated)
[2013-07-11 16:19:26,086][DEBUG][index.shard.service ] [Strange, Victor] [alarms][2] state: [RECOVERING]->[CLOSED], reason [removing shard (not allocated)]
[2013-07-11 16:19:26,086][DEBUG][indices.memory ] [Strange, Victor] recalculating shard indexing buffer (reason=removed_shard[alarms][2]), total is [815.4mb] with [2] active shards, each shard set to [407.7mb]
[2013-07-11 16:19:26,087][DEBUG][index.engine.robin ] [Strange, Victor] [alarms][0] updating index_buffer_size from [64mb] to [407.7mb]
[2013-07-11 16:19:26,087][WARN ][cluster.action.shard ] [Strange, Victor] sending failed shard for [alarms][2], node[LdAs9OtOSQ2IOYD_uNp4wA], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[[alarms][2]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][2] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] processing [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]: done applying updated cluster_state
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] processing [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]: execute
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] cluster state updated, version [1958233], source [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]
[2013-07-11 16:19:26,088][DEBUG][index.shard.service ] [Strange, Victor] [alarms][1] state: [RECOVERING]->[CLOSED], reason [recovery failure [RecoveryFailedException[[alarms][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][1] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-07-11 16:19:26,088][DEBUG][indices.memory ] [Strange, Victor] recalculating shard indexing buffer (reason=removed_shard[alarms][1]), total is [815.4mb] with [1] active shards, each shard set to [512mb]
[2013-07-11 16:19:26,088][DEBUG][index.engine.robin ] [Strange, Victor] [alarms][0] updating index_buffer_size from [407.7mb] to [512mb]
[2013-07-11 16:19:26,088][WARN ][cluster.action.shard ] [Strange, Victor] sending failed shard for [alarms][1], node[LdAs9OtOSQ2IOYD_uNp4wA], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[[alarms][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][1] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: T


David,
I think I figured out one thing. I have a coworker who is using a default
installation on another server, and I am seeing his server in my log file. Do I
need to uniquely name my node so that I do not collide with his instance?
Have his instance and mine joined?

James
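(Two default 0.20.x installs on the same network will auto-join each other, because they share the default cluster.name "elasticsearch" and zen discovery uses multicast by default. A minimal sketch of separating them; the names are illustrative, and the config normally lives at $ES_HOME/config/elasticsearch.yml:)

```shell
# Sketch: give the dev node its own cluster.name and node.name so it
# cannot join a coworker's default "elasticsearch" cluster.
ES_CONF=./elasticsearch.yml          # normally $ES_HOME/config/elasticsearch.yml
printf '%s\n' \
  'cluster.name: james-dev' \
  'node.name: james-dev-node-1' >> "$ES_CONF"
# confirm both settings landed in the config
grep 'james-dev' "$ES_CONF"
```

(Restart the node after editing; discovery settings are read at startup.)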


You have two nodes I think.
10.5.40.221
10.5.165.139

Clean all nodes.
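(A minimal sketch of what cleaning a node means in practice, assuming the default tar.gz layout; the install root below is illustrative, and the same steps must be repeated on each machine, i.e. both 10.5.40.221 and 10.5.165.139:)

```shell
# Sketch of the cleanup on ONE node; repeat on every node in the cluster.
ES_HOME=./es-node                                  # illustrative install root
mkdir -p "$ES_HOME/data/elasticsearch/nodes/0"     # stand-in for real data
# 1) stop the node first (kill its PID), THEN wipe its data directory:
rm -rf "$ES_HOME/data/elasticsearch/nodes"
# 2) verify nothing is left before restarting:
test -d "$ES_HOME/data/elasticsearch/nodes" || echo "nodes dir removed"
```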

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 11 July 2013 at 18:22, james armstrong james....@gmail.com wrote:

How do I NOT have to go through the following if I restart ES:

My log file gets filled with this and it takes ES 15 minutes to wind out this crap.

[2013-07-11 16:19:26,085][WARN ][indices.cluster ] [Strange, Victor] [alarms][2] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: [alarms][2]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]
at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:293)
at org.elasticsearch.indices.recovery.RecoveryTarget.access$300(RecoveryTarget.java:62)
at org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:163)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
Caused by: org.elasticsearch.transport.RemoteTransportException: [Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]
Caused by: org.elasticsearch.index.engine.RecoveryEngineException: [alarms][2] Phase[2] Execution failed
at org.elasticsearch.index.engine.robin.RobinEngine.recover(RobinEngine.java:1147)
at org.elasticsearch.index.shard.service.InternalIndexShard.recover(InternalIndexShard.java:526)
at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:116)
at org.elasticsearch.indices.recovery.RecoverySource.access$1600(RecoverySource.java:60)
at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:328)
at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:314)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:265)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:168)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.StreamCorruptedException: unexpected end of block data
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1369)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
at java.lang.Throwable.readObject(Throwable.java:913)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
at java.lang.Throwable.readObject(Throwable.java:913)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964)
at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:498)
at java.lang.Throwable.readObject(Throwable.java:913)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:991)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1866)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:166)
... 23 more
[2013-07-11 16:19:26,085][DEBUG][index.shard.service ] [Strange, Victor] [alarms][0] state: [CREATED]->[RECOVERING], reason [from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]]
[2013-07-11 16:19:26,085][DEBUG][indices.cluster ] [Strange, Victor] [alarms][2] removing shard (not allocated)
[2013-07-11 16:19:26,086][DEBUG][index.shard.service ] [Strange, Victor] [alarms][2] state: [RECOVERING]->[CLOSED], reason [removing shard (not allocated)]
[2013-07-11 16:19:26,086][DEBUG][indices.memory ] [Strange, Victor] recalculating shard indexing buffer (reason=removed_shard[alarms][2]), total is [815.4mb] with [2] active shards, each shard set to [407.7mb]
[2013-07-11 16:19:26,087][DEBUG][index.engine.robin ] [Strange, Victor] [alarms][0] updating index_buffer_size from [64mb] to [407.7mb]
[2013-07-11 16:19:26,087][WARN ][cluster.action.shard ] [Strange, Victor] sending failed shard for [alarms][2], node[LdAs9OtOSQ2IOYD_uNp4wA], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[[alarms][2]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][2] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] processing [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]: done applying updated cluster_state
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] processing [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]: execute
[2013-07-11 16:19:26,087][DEBUG][cluster.service ] [Strange, Victor] cluster state updated, version [1958233], source [zen-disco-receive(from master [[Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]]])]
[2013-07-11 16:19:26,088][DEBUG][index.shard.service ] [Strange, Victor] [alarms][1] state: [RECOVERING]->[CLOSED], reason [recovery failure [RecoveryFailedException[[alarms][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][1] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: StreamCorruptedException[unexpected end of block data]; ]]
[2013-07-11 16:19:26,088][DEBUG][indices.memory ] [Strange, Victor] recalculating shard indexing buffer (reason=removed_shard[alarms][1]), total is [815.4mb] with [1] active shards, each shard set to [512mb]
[2013-07-11 16:19:26,088][DEBUG][index.engine.robin ] [Strange, Victor] [alarms][0] updating index_buffer_size from [407.7mb] to [512mb]
[2013-07-11 16:19:26,088][WARN ][cluster.action.shard ] [Strange, Victor] sending failed shard for [alarms][1], node[LdAs9OtOSQ2IOYD_uNp4wA], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[[alarms][1]: Recovery failed from [Aardwolf][_57uzw33SVi_hhWAf9ytNA][inet[/10.5.40.221:9300]] into [Strange, Victor][LdAs9OtOSQ2IOYD_uNp4wA][inet[/10.5.165.139:9300]]]; nested: RemoteTransportException[[Aardwolf][inet[/10.5.40.221:9300]][index/shard/recovery/startRecovery]]; nested: RecoveryEngineException[[alarms][1] Phase[2] Execution failed]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: T


--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearc...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


Yeah!

Change at least the cluster name.

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
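
For context on David's advice: 0.90-era nodes with the same cluster name on one network discover each other via multicast and join up, which matches what James is seeing. A minimal sketch of the relevant settings (the values are made up; on a real install these lines go in config/elasticsearch.yml, a temp file is used here so the snippet stands alone):

```shell
# Sketch: per-developer cluster/node names so two default installs on
# one LAN don't multicast-join each other. Values are hypothetical.
cat > /tmp/elasticsearch.yml <<'EOF'
cluster.name: james-dev
node.name: james-node-1
EOF
grep -E 'cluster\.name|node\.name' /tmp/elasticsearch.yml
```

Setting node.name as well is optional, but it makes your own logs easier to read than the random Marvel-character names ("Strange, Victor", "Aardwolf") in the trace above.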

On 11 Jul 2013, at 21:49, james armstrong james.la3142@gmail.com wrote:

David,
I think I figured out one thing. I have a coworker who is using a default installation on another server, and I am seeing his server in my log file. Do I need to uniquely name my node so that I do not collide with his instance? Have his instance and mine joined?

James

On Thursday, July 11, 2013 12:24:53 PM UTC-6, David Pilato wrote:

It's defined by the path.data property.
It basically depends on how you set it up. Did you use the zip, tar.gz, or deb/rpm package?

You can change the log level to debug to trace this when the node starts.

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
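
Following up on David's debug-logging suggestion: in the zip/tar.gz distributions of that era, the level is set in config/logging.yml. A hedged sketch, editing a simplified temp copy (the stock file ships with more appenders and loggers than the single rootLogger line shown here):

```shell
# Sketch: bump the root log level from INFO to DEBUG in logging.yml.
# A simplified temp copy is edited so the snippet stands alone; on a
# real install, edit config/logging.yml and restart the node.
cat > /tmp/logging.yml <<'EOF'
rootLogger: INFO, console, file
EOF
sed -i 's/^rootLogger: INFO/rootLogger: DEBUG/' /tmp/logging.yml
grep rootLogger /tmp/logging.yml   # -> rootLogger: DEBUG, console, file
```

With that in place, the node logs where it resolved path.data at startup, which answers the "where is ES getting the docs from?" question directly.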

On 11 Jul 2013, at 18:49, james armstrong james....@gmail.com wrote:

On Thursday, July 11, 2013 10:34:33 AM UTC-6, james armstrong wrote:

I don't have any client code. The client providing the JSON docs is insulated behind REST calls. My process: an ES client (Java/remote) uses REST to POST docs to the ES repository. I went ahead and ensured that the ES client and ES engine are using the same JVM. My issue is: when I kill the engine, cd $ES_HOME/elasticsearch/data/elasticsearch/, rm -rf ./nodes, and then restart the engine, where is ES getting the docs from? Is it storing them in a directory other than ./nodes?

I also DELETE the index and PUT the index mapping, and I still get the errors in my logs. Why? If it's a JVM version mismatch, where is the mismatch?
James
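
On clearing out an index without touching the filesystem: the REST API can drop and recreate it while the node is running, which avoids deleting ./nodes out from under the cluster. A sketch assuming the default host/port and the index name "alarms" seen in the logs above (neither confirmed in the thread):

```shell
# Sketch: wipe an index via the API instead of rm -rf on the data dir.
# Host/port and index name are assumptions, not confirmed values.
ES='http://localhost:9200'
curl -s -XDELETE "$ES/alarms"    # removes the index and every doc in it
# Recreate before indexing again; 1 shard / 0 replicas matches the dev setup mentioned earlier
curl -s -XPUT "$ES/alarms" -d '{"settings": {"number_of_shards": 1, "number_of_replicas": 0}}'
```

Note that a DELETE issued to one cluster won't remove anything if the docs actually live on a coworker's node that your client happens to reach, which would also explain data "surviving" the delete.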
