EOFException

Hi,

I am getting this error:
[2012-12-11 05:20:26,143][WARN ][index.merge.scheduler ] [db-es1-sl]
[users][1] failed to merge
java.io.EOFException: read past EOF:
NIOFSIndexInput(path="/home/es/data/production/nodes/0/indices/users/1/index/_5nkis.fdt")
    at org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:155)
    at org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:111)
    at org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:132)
    at org.elasticsearch.index.store.Store$StoreIndexOutput.copyBytes(Store.java:665)
    at org.apache.lucene.index.FieldsWriter.addRawDocuments(FieldsWriter.java:228)
    at org.apache.lucene.index.SegmentMerger.copyFieldsWithDeletions(SegmentMerger.java:266)
    at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:223)
    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:107)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4263)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3908)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
    at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:91)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)

What does it mean? I suspect it is connected with an OutOfMemory error I got
yesterday; I restarted the whole cluster after that, and since the restart
I've been getting this error. Every occurrence concerns index [users][1] and
points at the same
path: /home/es/data/production/nodes/0/indices/users/1/index/_5nkis.fdt

What should I do to fix this?

Thank you
Best regards.
Marcin Dojwa.

--

I found that I should use CheckIndex to fix the index. I found
http://java.dzone.com/news/lucene-and-solrs-checkindex

It says:
"To run the utility, go to the directory where the Lucene library files are
located..."
Where are the Lucene library files located if I installed ES using the binary
package (tar.gz) from the elasticsearch.org website?

Thanks.
Best regards.
Marcin Dojwa


--

OK, I found it, in case anyone wants to know the answer. The Lucene library is
lib/lucene-core-3.6.1.jar inside the ES installation directory (for ES 0.20.1).
In my case, to fix the problem I ran:
java -cp lucene-core-3.6.1.jar -ea:org.apache.lucene...
org.apache.lucene.index.CheckIndex
/home/es/data/production/nodes/0/indices/users/1/index/ -fix

Problem solved :)
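For anyone hitting the same error, the invocation can be sketched as a small dry-run script. The install path ($ES_HOME) is an assumption; adjust the jar version to whatever ships in your ES lib/ directory. Note that CheckIndex -fix drops corrupted segments, losing the documents in them, so stop the node (or close the index) first and run it only as a last resort:

```shell
# Dry-run sketch: builds the CheckIndex command line without executing it.
# ES_HOME is a hypothetical install location; lucene-core-3.6.1.jar matches ES 0.20.1.
ES_HOME=/opt/elasticsearch-0.20.1
SHARD_INDEX=/home/es/data/production/nodes/0/indices/users/1/index
# -ea:org.apache.lucene... enables Java assertions for the Lucene packages,
# which CheckIndex recommends so corruption checks are not silently skipped.
CMD="java -cp $ES_HOME/lib/lucene-core-3.6.1.jar -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex $SHARD_INDEX -fix"
echo "$CMD"   # inspect first; when ready, execute with: eval "$CMD"
```

Running it without -fix first is a safer design: that mode only reports which segments are broken, so you can see what -fix would delete before committing to it.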
