- I am using ES version 2.3.2 on a 3-node cluster [12 cores / 16 GB RAM per node].
- I am writing one index per day, about 100 GB per index (12 shards per index).
- After one day (having reached 26 shards), I am getting the following errors in the ES logs.
- One of the shards (out of 26) is stuck with "state": "INITIALIZING".
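For reference, this is roughly how I am checking the shard state (standard _cat APIs; localhost:9200 is simply the node I query, and the index name is the anonymized one from the logs below):

```
# List the shards of the daily index and show the ones that are not STARTED
curl -s 'localhost:9200/_cat/shards/xxxx.yyyy.zzzz_20161001?v' | grep -v STARTED

# Recovery progress / failure details for the same index
curl -s 'localhost:9200/_cat/recovery/xxxx.yyyy.zzzz_20161001?v'
```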
How can I overcome this issue? It is happening frequently. I have set ulimit -m and ulimit -v to "unlimited", and I have raised vm.max_map_count to 262144 from the default value.
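For reference, this is roughly how I verify the limits on each node; <es-pid> is a placeholder for the PID of the Elasticsearch JVM, and the last line is the sysctl change I applied (also persisted in /etc/sysctl.conf):

```
# Limits as seen by the running Elasticsearch process itself.
# Note: changing ulimit in another shell does not affect a JVM that is
# already running; it has to be restarted under the new limits.
cat /proc/<es-pid>/limits | grep -E 'address space|locked memory'

# Shell-level limits -- both are expected to report 'unlimited'
ulimit -v
ulimit -m

# mmap count limit, raised from the Linux default of 65530
sysctl vm.max_map_count
sysctl -w vm.max_map_count=262144
```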
[2016-10-01 14:12:55,269][WARN ][indices.cluster ] [hostname10] [[xxxx.yyyy.zzzz_20161001][7]] marking and sending shard failed due to [failed recovery]
[xxxx.yyyy.zzzz_20161001][[xxxx.yyyy.zzzz_20161001][7]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to open reader on writer]; nested: IOException[Map failed: MMapIndexInput(path="/hdfs1/PERF-ATLAS/nodes/0/indices/xxxx.yyyy.zzzz_20161001/7/index/_2p3_Lucene54_0.dvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 65422441 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: [xxxx.yyyy.zzzz_20161001][[xxxx.yyyy.zzzz_20161001][7]] EngineCreationFailureException[failed to open reader on writer]; nested: IOException[Map failed: MMapIndexInput(path="/hdfs1/PERF-ATLAS/nodes/0/indices/xxxx.yyyy.zzzz_20161001/7/index/_2p3_Lucene54_0.dvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 65422441 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]];
at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:295)
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:166)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1515)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1499)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:972)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:944)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
... 5 more
Caused by: java.io.IOException: Map failed: MMapIndexInput(path="/hdfs1/PERF-ATLAS/nodes/0/indices/xxxx.yyyy.zzzz_20161001/7/index/_2p3_Lucene54_0.dvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 65422441 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:273)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:247)
at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesProducer.<init>(Lucene54DocValuesProducer.java:132)
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesFormat.fieldsProducer(Lucene54DocValuesFormat.java:113)
at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.<init>(PerFieldDocValuesFormat.java:268)