On Wednesday, 26 March 2014 at 17:53:33 UTC+8, Ivan Ji wrote:
Hi all,
I am using ES 1.0.1. I am wondering how much free disk space Elasticsearch needs to keep running?
[qusion][1] failed to merge
java.io.IOException: No space left on device
at java.io.RandomAccessFile.writeBytes0(Native Method)
at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:520)
at java.io.RandomAccessFile.write(RandomAccessFile.java:550)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:458)
at org.apache.lucene.store.RateLimitedFSDirectory$RateLimitedIndexOutput.flushBuffer(RateLimitedFSDirectory.java:102)
at org.apache.lucene.store.BufferedChecksumIndexOutput.flushBuffer(BufferedChecksumIndexOutput.java:71)
at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:113)
at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:102)
at org.apache.lucene.store.BufferedChecksumIndexOutput.flush(BufferedChecksumIndexOutput.java:86)
at org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:126)
at org.apache.lucene.store.BufferedChecksumIndexOutput.close(BufferedChecksumIndexOutput.java:61)
at org.elasticsearch.index.store.Store$StoreIndexOutput.close(Store.java:602)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsIndexWriter.close(CompressingStoredFieldsIndexWriter.java:205)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:140)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.close(CompressingStoredFieldsWriter.java:138)
at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:318)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4071)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3668)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:107)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[2014-03-26 03:30:53,382][WARN ][index.engine.internal ] [Rick Jones] [qusion][1] failed engine
Obviously, some amount of free disk space is needed during a merge, and I think the larger the index, the more disk space the merge operation requires.
Does anyone have an idea of how much it can be?
On 26 March 2014 21:06, Ivan Ji <hxu...@gmail.com> wrote:
I just looked at Lucene's documentation; during a merge it can need at least double the index size in free space.
But does ES tune anything here, or does it simply follow Lucene's rule of roughly 2x the index size?
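Elasticsearch does not check this for you, but as a rough sanity check you can compare the free space on a shard's data path against roughly 2x the size of that shard before forcing a big merge. Below is a minimal Java sketch; the shard path, the class name, and the 2x factor are only assumptions for illustration, not anything ES itself uses.

import java.io.File;

public class MergeHeadroomCheck {

    // Sum the size of every file under a directory (e.g. one shard's index directory).
    static long directorySize(File dir) {
        long total = 0;
        File[] children = dir.listFiles();
        if (children == null) {
            return 0;
        }
        for (File child : children) {
            total += child.isDirectory() ? directorySize(child) : child.length();
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical shard data path -- adjust to your own node's layout.
        File indexDir = new File("/var/data/elasticsearch/nodes/0/indices/qusion/1/index");

        long indexBytes = directorySize(indexDir);
        long freeBytes = indexDir.getUsableSpace();

        // Assumption: a worst-case merge (e.g. an optimize down to one segment)
        // can transiently need on the order of 2x the index size in free space.
        long wantedFree = 2 * indexBytes;

        System.out.printf("index=%d MB, free=%d MB, wanted free >= %d MB%n",
                indexBytes >> 20, freeBytes >> 20, wantedFree >> 20);
        if (freeBytes < wantedFree) {
            System.out.println("Not enough headroom for a large merge on this filesystem.");
        }
    }
}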
Lucene segments are immutable, so while segments are merged, the originals
remain in place. You can increase the number of segments you have so that
less merging needs to occur. Mike McCandless has lots of good tips about
merges.
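To make that concrete: the knob being alluded to is Lucene's TieredMergePolicy. Letting more segments accumulate per tier, and capping the maximum merged segment size, means any single merge rewrites less data at once. Here is a minimal Lucene 4.x sketch; the numbers and the index path are illustrative only, not recommendations. If memory serves, Elasticsearch 1.x exposes the same parameters as index.merge.policy.* index settings (segments_per_tier, max_merged_segment, max_merge_at_once), so in practice you would set them there rather than in Java code.

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergePolicyTuningExample {
    public static void main(String[] args) throws Exception {
        TieredMergePolicy mergePolicy = new TieredMergePolicy();
        // Tolerate more segments per tier so merges fire less often.
        mergePolicy.setSegmentsPerTier(20.0);        // Lucene default: 10
        // Cap how large a merged segment may grow, so the biggest merges
        // (the ones that need the most temporary disk) are never attempted.
        mergePolicy.setMaxMergedSegmentMB(2048.0);   // Lucene default: 5120 (5 GB)
        mergePolicy.setMaxMergeAtOnce(10);

        IndexWriterConfig config =
                new IndexWriterConfig(Version.LUCENE_46, new StandardAnalyzer(Version.LUCENE_46));
        config.setMergePolicy(mergePolicy);

        try (IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("/tmp/example-index")), config)) {
            // Add documents as usual; background merges now follow the tuned policy.
        }
    }
}

The trade-off is that keeping more, smaller segments tends to make searches somewhat slower, so this is only worth doing if merge-time disk pressure is the actual problem.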