Shard allocation failed with MMap warnings


(Shammi Kumar) #1
  • I am using ES version 2.3.2 on a 3-node cluster (12 cores / 16 GB RAM per node).
  • I am writing one index per day, about 100 GB per day (12 shards per index).
  • After one day (26 shards reached), I am getting the following errors in the ES logs.
  • One of the 26 shards is stuck in "state": "INITIALIZING".
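When a shard is stuck in INITIALIZING, the 2.x cat and cluster-health APIs show which shard it is and the overall cluster state. A quick way to check (assuming ES is listening on localhost:9200):

```shell
# List all shards that are not yet STARTED, with node and state columns:
curl 'http://localhost:9200/_cat/shards?v' | grep -v STARTED

# Overall cluster health, including counts of initializing/unassigned shards:
curl 'http://localhost:9200/_cluster/health?pretty'
```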

How can I overcome this issue? It is happening frequently. I have set both 'ulimit -m' and 'ulimit -v' to "unlimited", and I have increased vm.max_map_count to 262144 from the default value.
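For reference, the limits the error message points at can be checked and raised like this (262144 is the value mentioned above; note the limits must apply to the user that runs the Elasticsearch process, not just your login shell):

```shell
# Check the limits as the user running Elasticsearch:
ulimit -v                 # virtual memory; should print "unlimited"
ulimit -m                 # max resident set size; should print "unlimited"
sysctl vm.max_map_count   # default is 65530

# Raise the memory-map count now, and persist it across reboots:
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```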

[2016-10-01 14:12:55,269][WARN ][indices.cluster ] [hostname10] [[xxxx.yyyy.zzzz_20161001][7]] marking and sending shard failed due to [failed recovery]
[xxxx.yyyy.zzzz_20161001][[xxxx.yyyy.zzzz_20161001][7]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to open reader on writer]; nested: IOException[Map failed: MMapIndexInput(path="/hdfs1/PERF-ATLAS/nodes/0/indices/xxxx.yyyy.zzzz_20161001/7/index/_2p3_Lucene54_0.dvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 65422441 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: [xxxx.yyyy.zzzz_20161001][[xxxx.yyyy.zzzz_20161001][7]] EngineCreationFailureException[failed to open reader on writer]; nested: IOException[Map failed: MMapIndexInput(path="/hdfs1/PERF-ATLAS/nodes/0/indices/xxxx.yyyy.zzzz_20161001/7/index/_2p3_Lucene54_0.dvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 65422441 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]];
at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:295)
at org.elasticsearch.index.engine.InternalEngine.(InternalEngine.java:166)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1515)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1499)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:972)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:944)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
... 5 more
Caused by: java.io.IOException: Map failed: MMapIndexInput(path="/hdfs1/PERF-ATLAS/nodes/0/indices/xxxx.yyyy.zzzz_20161001/7/index/_2p3_Lucene54_0.dvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 65422441 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:273)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:247)
at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesProducer.(Lucene54DocValuesProducer.java:132)
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesFormat.fieldsProducer(Lucene54DocValuesFormat.java:113)
at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.(PerFieldDocValuesFormat.java:268)


(Shammi Kumar) #2

Caused by: java.io.IOException: Map failed: MMapIndexInput(path="/hdfs1/PERF-ATLAS/nodes/0/indices/security.events.normalized_20161001/7/index/_2p3_Lucene54_0.dvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 65422441 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:273)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:247)
at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesProducer.(Lucene54DocValuesProducer.java:132)
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesFormat.fieldsProducer(Lucene54DocValuesFormat.java:113)
at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.(PerFieldDocValuesFormat.java:268)
at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat.fieldsProducer(PerFieldDocValuesFormat.java:358)
at org.apache.lucene.index.SegmentDocValues.newDocValuesProducer(SegmentDocValues.java:51)
at org.apache.lucene.index.SegmentDocValues.getDocValuesProducer(SegmentDocValues.java:67)
at org.apache.lucene.index.SegmentReader.initDocValuesProducer(SegmentReader.java:147)
at org.apache.lucene.index.SegmentReader.(SegmentReader.java:81)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:435)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:100)
at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:283)


(Mark Walkom) #3

That's way too many shards. You need 3 shards per index with 1 replica.
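One way to apply that going forward is an index template, so every new daily index is created with the smaller shard count. A sketch against the 2.x template API; the template name and index pattern here are placeholders you would replace with your own naming scheme:

```shell
# Hypothetical template: any new index matching the pattern gets 3 shards / 1 replica.
curl -XPUT 'http://localhost:9200/_template/daily_logs' -d '{
  "template": "xxxx.yyyy.zzzz_*",
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'
```

Templates only affect indices created after the template exists; existing indices keep their shard count, so the change takes effect with the next day's index.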


(Nik Everett) #4

Is this being written to HDFS?


(Shammi Kumar) #5

Yes, I am writing to HDFS and to ES in parallel.


(Shammi Kumar) #6

Thank you, I will try that option.
