Searching gives java.io.IOException: read past EOF

Hi,
when searching I am getting the following exception. Does this mean
that the index is corrupted? I am using 1 shard with 1 replica on
elasticsearch 0.17.10. I tried optimizing the index, as well as
closing and reopening it, but the error message stays the same.
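(For reference, the operations described above map roughly to these REST calls on the 0.17.x API; the index name `search` is taken from the `[search][0]` shard name in the trace and may differ in other setups.)

```shell
# Merge segments ("optimize" API in 0.17.x)
curl -XPOST 'http://localhost:9200/search/_optimize'

# Close and reopen the index; on reopen, replicas resync from the primary
curl -XPOST 'http://localhost:9200/search/_close'
curl -XPOST 'http://localhost:9200/search/_open'
```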

Caused by: org.elasticsearch.transport.RemoteTransportException:
[Bizarnage][inet[/192.168.6.5:9300]][indices/search]
Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException:
Failed to execute phase [query], total failure; shardFailures
{[Jbm5-u4dT1a_3ldnXYy8uA][search][0]:
RemoteTransportException[[Scourge of the
Underworld][inet[/192.168.6.5:9300]][search/phase/query]]; nested:
QueryPhaseExecutionException[[search][0]:
query[ConstantScore(:)],from[0],size[10]: Query Failed [Failed to
execute main query]]; nested: IOException[read past EOF]; }
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:258)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$3.onFailure(TransportSearchTypeAction.java:211)
at org.elasticsearch.search.action.SearchServiceTransportAction$2.handleException(SearchServiceTransportAction.java:151)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:158)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:149)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:101)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:302)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:317)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:299)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:216)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:274)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:261)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:349)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

On the server I have a river running that does some searches, and I
found the following exception:

[2012-01-10 10:03:15,482][DEBUG][action.search.type ] [Kaur,
Benazir] [search][0], node[Jbm5-u4dT1a_3ldnXYy8uA], [P], s[STARTED]:
Failed to execute
[org.elasticsearch.action.search.SearchRequest@5189fe95]
org.elasticsearch.transport.RemoteTransportException: [Scourge of the
Underworld][inet[/192.168.6.5:9300]][search/phase/query+fetch]
Caused by: org.elasticsearch.search.fetch.FetchPhaseExecutionException:
[search][0]: query[ConstantScore(org.elasticsearch.index.search.UidFilter@cfc377d9)],from[0],size[1000]:
Fetch Failed [Failed to fetch doc id [22389]]
at org.elasticsearch.search.fetch.FetchPhase.loadDocument(FetchPhase.java:189)
at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:89)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:297)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:501)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryFetchTransportHandler.messageReceived(SearchServiceTransportAction.java:492)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:238)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:207)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
at org.apache.lucene.store.DataInput.readVInt(DataInput.java:105)
at org.apache.lucene.store.BufferedIndexInput.readVInt(BufferedIndexInput.java:181)
at org.apache.lucene.index.FieldsReader.doc(FieldsReader.java:235)
at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:475)
at org.apache.lucene.index.DirectoryReader.document(DirectoryReader.java:564)
at org.apache.lucene.search.IndexSearcher.doc(IndexSearcher.java:248)
at org.elasticsearch.search.fetch.FetchPhase.loadDocument(FetchPhase.java:187)
... 8 more

Best regards,
Michel

Hey, yes, it seems like it's corrupted. Did something specific happen in the
cluster? What does your river do?

On Tue, Jan 10, 2012 at 11:44 AM, Michel Conrad <
michel.conrad@trendiction.com> wrote:


Not that I can tell; the cluster has been running fine lately. As
there are 40 servers, it could be hardware related.
Shouldn't the replica be able to fix a corrupted index, or am I
misunderstanding something?

Best,
Michel

On Wed, Jan 11, 2012 at 5:13 PM, Shay Banon kimchy@gmail.com wrote:


Yes, a replica is a replica until it needs to move around; it then
recovers its data from the primary on the new node it moves to. Also, if you
close the index and reopen it, the replicas will resync from the primary.

On Wed, Jan 11, 2012 at 9:00 PM, Michel Conrad <
michel.conrad@trendiction.com> wrote:


Hi, opening and closing the index did not solve the issue. What I want
to try is to close the index, run Lucene's CheckIndex on it, and then
reopen it. Will it be enough to run CheckIndex on the primary shard,
and will it sync to the replica? Is it always the primary shard that is
copied to the replica in case there are differences?
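(For anyone attempting this: CheckIndex can be invoked from the command line against a closed shard's index directory. The shard path is the one quoted later in this thread; the lucene-core jar path is illustrative — use the one shipped in elasticsearch's lib directory. Back up the shard first, since -fix permanently drops segments it cannot read.)

```shell
# Path to the primary shard's Lucene index directory
ES_SHARD=/data/search/nodes/0/indices/searchindex/0/index

# Diagnose only (no changes written to disk)
java -cp /path/to/elasticsearch/lib/lucene-core-*.jar \
  org.apache.lucene.index.CheckIndex "$ES_SHARD"

# Repair: removes unreadable segments (documents in them are lost)
java -cp /path/to/elasticsearch/lib/lucene-core-*.jar \
  org.apache.lucene.index.CheckIndex "$ES_SHARD" -fix
```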

Another question is how this is handled in elasticsearch: if I close the
index, copy a backup over the index files, and reopen the index,
will it recover from the older files, or is there some metadata
associated with the shard that has to be consistent with the files in
the data directory?

In other words, if I replace the contents of the
/data/search/nodes/0/indices/searchindex/0/index directory, will es
be in a stable state? And what happens with the translog directory —
do I have to clear it?

Best Regards,
Michel

On Wed, Jan 11, 2012 at 8:28 PM, Shay Banon kimchy@gmail.com wrote:


Hey, if you want to run CheckIndex, then yes, pick the primary shard and run it
there. If you want to replace that shard's content, the simplest is to replace
it on all shard locations. Another option is to replace it on only one and
delete the others. There will still be metadata around stating that there is a
shard on the node you deleted from (I hope to make it easier to remove that
info as well in 0.19), but if elasticsearch decides to allocate the shard
there, it will fail because it expects an index there, and it will try another
node. Note, this logic is much better in 0.18, and you are using 0.17, so I
suggest copying things over to all shard locations.
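For what it's worth, running Lucene's CheckIndex against the primary shard's
directory can be sketched as below. This is a sketch only: ES_HOME and the jar
location are assumptions for a default 0.17 install (the shard path is the one
Michel mentions in this thread), so adjust to your own layout, and close the
index first so nothing holds the Lucene directory open.

```shell
# Assumptions: ES_HOME matches your install; the lucene-core jar ships
# in elasticsearch's lib/ directory; the index is already closed.
ES_HOME=/usr/local/elasticsearch        # hypothetical install location
SHARD_INDEX=/data/search/nodes/0/indices/searchindex/0/index

# Read-only check of the shard's Lucene index:
java -cp "$ES_HOME"/lib/lucene-core-*.jar \
  org.apache.lucene.index.CheckIndex "$SHARD_INDEX"

# -fix rewrites the index WITHOUT any broken segments, so documents in
# those segments are lost -- back the directory up first:
# java -cp "$ES_HOME"/lib/lucene-core-*.jar \
#   org.apache.lucene.index.CheckIndex "$SHARD_INDEX" -fix
```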

On Thu, Jan 12, 2012 at 11:51 AM, Michel Conrad <michel.conrad@trendiction.com> wrote:

Hi, opening and closing the index did not solve the issue. What I want
to try is to close the index, run Lucene CheckIndex on it, and then
reopen it. Will it be enough to run CheckIndex on the primary shard,
and will it sync to the other? Is it always the primary shard that is
copied to the replica in case there are differences?

Another question: how does elasticsearch handle it if I close the
index, copy a backup over the index files, and reopen the index? Will
it recover from the older files, or is there some metadata associated
with the shard that has to be consistent with the files in the data
directory?

The question is, if I replace the contents of the
/data/search/nodes/0/indices/searchindex/0/index directory, will es be
in a stable state, and what happens with the translog directory, do I
have to clear it?
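Assuming the close/open index API is available (it predates 0.17) and the
shard path above, the copy-a-backup-over cycle could look like the sketch
below. The node address and backup location are assumptions. Note that the
shard's translog directory sits next to index/, and clearing it discards any
operations that were not yet flushed to the Lucene index.

```shell
INDEX=searchindex
SHARD_DIR=/data/search/nodes/0/indices/$INDEX/0/index   # path from this thread
BACKUP_DIR=/backups/$INDEX/0/index                      # hypothetical backup

# 1. Close the index so no shard holds the Lucene directory open.
curl -XPOST "http://localhost:9200/$INDEX/_close"

# 2. Replace the shard's index files with the backup; repeat this on
#    every node that holds a copy of shard 0.
rm -rf "$SHARD_DIR"
cp -a "$BACKUP_DIR" "$SHARD_DIR"

# 3. Reopen the index; the shards recover from the copied files.
curl -XPOST "http://localhost:9200/$INDEX/_open"
```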

Best Regards,
Michel

On Wed, Jan 11, 2012 at 8:28 PM, Shay Banon kimchy@gmail.com wrote:

Yes, the replica is a replica, until it needs to move around, and then it
recovers its data from the primary on the new node it moves to. Or, if you
close the index and open it, they will resync.

On Wed, Jan 11, 2012 at 9:00 PM, Michel Conrad
michel.conrad@trendiction.com wrote:

Not that I can tell, the cluster has been running fine lately. As
there are 40 servers, it could be hardware related.
Shouldn't the replica be able to fix corrupted indices, or am I
getting something wrong?

Best,
Michel

On Wed, Jan 11, 2012 at 5:13 PM, Shay Banon kimchy@gmail.com wrote:

Hey, yeah, it seems like it's corrupted. Did something specific happen in the
cluster? What does your river do?

On Tue, Jan 10, 2012 at 11:44 AM, Michel Conrad <michel.conrad@trendiction.com> wrote: