Out of memory at startup with large index and parent/child relation

Hi,

I'm running into a problem with a large index (38GB) that prevents ES 1.4.2
from starting.
The problem looks pretty similar to the one in
https://github.com/elasticsearch/elasticsearch/issues/8394
I tried some of the recommendations from that issue (and linked ones):

index.load_fixed_bitset_filters_eagerly: false
index.warmer.enabled: false
indices.breaker.total.limit: 30%
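For completeness, this is how those three settings sit in my config/elasticsearch.yml (a sketch of my node config; the values are just what I tested here, not recommendations):

```yaml
# config/elasticsearch.yml (ES 1.4.x) -- restart the node after changing these
index.load_fixed_bitset_filters_eagerly: false  # don't eagerly load parent/child bitset filters
index.warmer.enabled: false                     # skip index warmers
indices.breaker.total.limit: 30%                # lower the parent circuit breaker limit
```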

And even with that, my server does not start [1].

I uploaded the mapping for the index to a gist:
https://gist.github.com/tcataldo/c0b6b3dfec9823bf6523
I tried several combinations of OS memory and ES heap size, the biggest
being 48 GiB of RAM with a 32 GiB ES heap, and it still fails.

Any idea, or a link to an open issue I could follow?

Regards,
Thomas.

  1. debug output:

[2015-01-14 12:01:55,740][DEBUG][indices.cluster ] [Saint Elmo]
[mailspool][0] creating shard
[2015-01-14 12:01:55,741][DEBUG][index.service ] [Saint Elmo]
[mailspool] creating shard_id [0]
[2015-01-14 12:01:56,041][DEBUG][index.deletionpolicy ] [Saint Elmo]
[mailspool][0] Using [keep_only_last] deletion policy
[2015-01-14 12:01:56,041][DEBUG][index.merge.policy ] [Saint Elmo]
[mailspool][0] using [tiered] merge mergePolicy with
expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10],
max_merge_at_once_explicit[30], max_merged_segment[5gb],
segments_per_tier[10.0], reclaim_deletes_weight[2.0]
[2015-01-14 12:01:56,041][DEBUG][index.merge.scheduler ] [Saint Elmo]
[mailspool][0] using [concurrent] merge scheduler with max_thread_count[2],
max_merge_count[4]
[2015-01-14 12:01:56,042][DEBUG][index.shard.service ] [Saint Elmo]
[mailspool][0] state: [CREATED]
[2015-01-14 12:01:56,043][DEBUG][index.translog ] [Saint Elmo]
[mailspool][0] interval [5s], flush_threshold_ops [2147483647],
flush_threshold_size [200mb], flush_threshold_period [30m]
[2015-01-14 12:01:56,044][DEBUG][index.shard.service ] [Saint Elmo]
[mailspool][0] state: [CREATED]->[RECOVERING], reason [from gateway]
[2015-01-14 12:01:56,044][DEBUG][index.gateway ] [Saint Elmo]
[mailspool][0] starting recovery from local ...
[2015-01-14 12:01:56,048][DEBUG][river.cluster ] [Saint Elmo]
processing [reroute_rivers_node_changed]: execute
[2015-01-14 12:01:56,048][DEBUG][river.cluster ] [Saint Elmo]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2015-01-14 12:01:56,048][DEBUG][cluster.service ] [Saint Elmo]
processing [shard-failed ([mailspool][3], node[gOgAuHo4SXyfyuPpws0Usw],
[P], s[INITIALIZING]), reason [engine failure,
message [refresh failed][OutOfMemoryError[Java heap space]]]]: done
applying updated cluster_state (version: 4)
[2015-01-14 12:01:56,062][DEBUG][index.engine.internal ] [Saint Elmo]
[mailspool][0] starting engine
[2015-01-14 12:02:19,701][WARN ][index.engine.internal ] [Saint Elmo]
[mailspool][0] failed engine [refresh failed]
java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.util.FixedBitSet.<init>(FixedBitSet.java:187)
    at org.apache.lucene.search.MultiTermQueryWrapperFilter.getDocIdSet(MultiTermQueryWrapperFilter.java:104)
    at org.elasticsearch.index.cache.filter.weighted.WeightedFilterCache$FilterCacheFilterWrapper.getDocIdSet(WeightedFilterCache.java:177)
    at org.elasticsearch.common.lucene.search.OrFilter.getDocIdSet(OrFilter.java:55)
    at org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(ApplyAcceptedDocsFilter.java:46)
    at org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:130)
    at org.apache.lucene.search.FilteredQuery$RandomAccessFilterStrategy.filteredScorer(FilteredQuery.java:542)
    at org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:136)
    at org.apache.lucene.search.QueryWrapperFilter$1.iterator(QueryWrapperFilter.java:59)
    at org.apache.lucene.index.BufferedUpdatesStream.applyQueryDeletes(BufferedUpdatesStream.java:554)
    at org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:287)
    at org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3271)
    at org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3262)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:421)
    at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:292)
    at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:267)
    at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:257)
    at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:171)
    at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:118)
    at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
    at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176)
    at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:225)
    at org.elasticsearch.index.engine.internal.InternalEngine.refresh(InternalEngine.java:796)
    at org.elasticsearch.index.engine.internal.InternalEngine.delete(InternalEngine.java:692)
    at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:798)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:268)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2015-01-14 12:02:19,704][DEBUG][index.service ] [Saint Elmo]
[mailspool] [0] closing... (reason: [engine failure, message [refresh
failed][OutOfMemoryError[Java heap space]]])

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/7fcc12b7-9024-466c-8a78-7d5678b0d605%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Hi,

After removing all my translog files, ES starts without error.
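In case it helps others, the cleanup is roughly the following (a sketch only: the directory layout mirrors the ES 1.x local gateway, `<path.data>/<cluster_name>/nodes/<n>/indices/<index>/<shard>/translog/`, and the paths and file name below are illustrative mocks of my setup, not commands to run blindly; on a real node, stop Elasticsearch and back the files up first):

```shell
# Build a mock of the ES 1.x shard layout so the steps are reproducible here.
ROOT=$(mktemp -d)
SHARD="$ROOT/elasticsearch/nodes/0/indices/mailspool/0"
mkdir -p "$SHARD/translog"
: > "$SHARD/translog/translog-1421236916044"   # stand-in for the real translog file

# Back up the translog before touching it, then remove the files.
tar -C "$SHARD" -czf "$ROOT/translog-backup.tgz" translog
rm "$SHARD"/translog/translog-*

ls "$SHARD/translog"   # directory is now empty; the shard recovers without replaying it
```

Note the trade-off: operations that were only in the translog (not yet flushed to a Lucene segment) are lost, so this is a last resort to get the node up.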

On Wednesday, January 14, 2015 at 2:56:48 PM UTC+1, Thomas Cataldo wrote:

Hi,

I encounter a problem with a large index (38GB) that prevents ES 1.4.2
from starting.
The problem looks pretty similar to the one in
https://github.com/elasticsearch/elasticsearch/issues/8394
I uploaded to gist the mapping for the index :
https://gist.github.com/tcataldo/c0b6b3dfec9823bf6523
[...]
