How much free disk space is needed for normal system operation?


(Ivan Ji) #1

Hi all,

I am using ES 1.0.1. I am wondering how much unused disk space is needed for ES to run normally.

Because I ran into the error:

[2014-03-26 03:30:52,713][WARN ][index.merge.scheduler ] [Rick Jones] [qusion][1] failed to merge
java.io.IOException: No space left on device
    at java.io.RandomAccessFile.writeBytes0(Native Method)
    at java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:520)
    at java.io.RandomAccessFile.write(RandomAccessFile.java:550)
    at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:458)
    at org.apache.lucene.store.RateLimitedFSDirectory$RateLimitedIndexOutput.flushBuffer(RateLimitedFSDirectory.java:102)
    at org.apache.lucene.store.BufferedChecksumIndexOutput.flushBuffer(BufferedChecksumIndexOutput.java:71)
    at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:113)
    at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:102)
    at org.apache.lucene.store.BufferedChecksumIndexOutput.flush(BufferedChecksumIndexOutput.java:86)
    at org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:126)
    at org.apache.lucene.store.BufferedChecksumIndexOutput.close(BufferedChecksumIndexOutput.java:61)
    at org.elasticsearch.index.store.Store$StoreIndexOutput.close(Store.java:602)
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsIndexWriter.close(CompressingStoredFieldsIndexWriter.java:205)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:140)
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.close(CompressingStoredFieldsWriter.java:138)
    at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:318)
    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4071)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3668)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
    at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:107)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[2014-03-26 03:30:53,382][WARN ][index.engine.internal ] [Rick Jones] [qusion][1] failed engine

Obviously, some amount of free disk is needed during a merge, and I assume that the larger the index, the more disk space the merge operation requires.

Does anyone have an idea how much it can be?

Ivan
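
As a quick sanity check, you can compare the size of the data directory against the free space on its filesystem (the `DATA_DIR` path here is a placeholder; point it at your actual Elasticsearch data directory):

```shell
#!/bin/sh
# DATA_DIR is a placeholder -- set it to your Elasticsearch data path.
DATA_DIR="${DATA_DIR:-.}"

# Free space (in 1K blocks) on the filesystem holding DATA_DIR.
free_kb=$(df -Pk "$DATA_DIR" | awk 'NR==2 {print $4}')

# Total on-disk size of the data directory itself.
used_kb=$(du -sk "$DATA_DIR" | awk '{print $1}')

echo "data dir uses ${used_kb} KB; filesystem has ${free_kb} KB free"

# Rule of thumb discussed later in this thread: keep at least as much
# free space as the index occupies, so a large merge can complete.
if [ "$free_kb" -lt "$used_kb" ]; then
    echo "WARNING: free space is below data size; merges may fail"
fi
```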

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/272d3fc6-5dd9-4377-b847-bacbbc800fb1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


(Ivan Ji) #2

I just looked at Lucene's documentation: during a merge, at least double the index size is needed.
But does ES tune this in any way, or does it simply follow Lucene's rule of roughly 2x the index size?



(Mark Walkom) #3

Depends on how much data you have.

How much disk space is on the machine? How much data is in ES?

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com



(Ivan Ji) #4

Hi Mark,

My index is about 300 GB, and the free disk space is about 5 GB.



(Mark Walkom) #5

There's not much you can do; you either need to delete some data or
increase your disk space.

Maybe someone can clarify exactly how much space is needed for a merge, but I
imagine it'd be twice the size of a shard.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com



(Ivan Brusic) #6

Lucene segments are immutable, so while segments are being merged, the originals
remain in place. You can allow a larger number of segments so that
less merging needs to occur. Mike McCandless has written lots of good tips about
merges.

--
Ivan
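
To make that concrete, here is a small sketch of the space behavior (the segment sizes and function are hypothetical, not the actual merge policy): because the input segments stay on disk until the merged output is fully written, both coexist at the peak.

```python
# Sketch of why a merge needs headroom: Lucene segments are immutable,
# so the inputs are only deleted after the merged segment is complete.

def peak_disk_during_merge(segment_sizes, merged_indices):
    """Peak disk usage while merging the chosen segments.

    segment_sizes: sizes (GB) of all live segments
    merged_indices: positions of the segments being merged together
    """
    merged_output = sum(segment_sizes[i] for i in merged_indices)
    # At the peak, all original segments AND the new output coexist.
    return sum(segment_sizes) + merged_output

segments = [100, 80, 60, 40, 20]  # GB, purely illustrative
# Merging the three largest: 300 GB of originals + 240 GB output.
print(peak_disk_during_merge(segments, [0, 1, 2]))  # -> 540
```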



(Jörg Prante) #7

You should have 300 GB free if your index is 300 GB, to allow data to be
copied over while new segments are created.

Jörg
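
That rule of thumb can be written as a tiny check (the function name is mine; the worst case assumed here is an optimize that rewrites the whole index into one segment):

```python
# Worst case: an optimize/force merge rewrites the whole index into a
# single segment, so the old index and its new copy briefly coexist.

def merge_headroom_ok(index_size_gb, free_gb):
    """True if free space can absorb a full rewrite of the index."""
    return free_gb >= index_size_gb

# The numbers from this thread: a 300 GB index with 5 GB free.
print(merge_headroom_ok(300, 5))    # -> False: far too little headroom
print(merge_headroom_ok(300, 300))  # -> True
```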



(system) #8