Hello,
I am running Elasticsearch on CentOS and got the following error after it had been running for a couple of days. At the moment I have a single instance of Elasticsearch running. How can I avoid hitting the "too many open files" limit? Is something wrong with my setup?
failed recovery]; nested: EngineCreationFailureException[[4e47da12d50c1f1bceeeb795][1] Failed to open reader on writer]; nested: FileNotFoundException[/var/lib/elasticsearch/test/nodes/0/indices/4e47da12d50c1f1bceeeb795/1/index/segments_1 (Too many open files)]; ]]
[2011-08-15 06:02:01,740][WARN ][indices.cluster ] [Flygirl] [4e47da12d50c1f1bceeeb795][0] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [4e47da12d50c1f1bceeeb795][0] failed recovery
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:229)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: [4e47da12d50c1f1bceeeb795][0] Failed to create engine
    at org.elasticsearch.index.engine.robin.RobinEngine.start(RobinEngine.java:251)
    at org.elasticsearch.index.shard.service.InternalIndexShard.start(InternalIndexShard.java:254)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:146)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:179)
    ... 3 more
Caused by: java.io.FileNotFoundException: /var/lib/elasticsearch/test/nodes/0/indices/4e47da12d50c1f1bceeeb795/0/index/segments_1 (Too many open files)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
    at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:69)
    at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:90)
    at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:91)
    at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:78)
    at org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:345)
    at org.elasticsearch.index.store.support.AbstractStore$StoreDirectory.openInput(AbstractStore.java:356)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:262)
    at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:359)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:750)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:589)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:355)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1144)
    at org.elasticsearch.index.engine.robin.RobinEngine.createWriter(RobinEngine.java:1242)
    at org.elasticsearch.index.engine.robin.RobinEngine.start(RobinEngine.java:249)
    ... 6 more
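For what it's worth, this is how I have been counting the file descriptors the node holds open (a rough sketch; the pgrep pattern is just my guess at what matches the Java process on my box):

    # Find the Elasticsearch JVM and count its open file descriptors
    ES_PID=$(pgrep -f org.elasticsearch.bootstrap.ElasticSearch)
    ls /proc/$ES_PID/fd | wc -l
    # lsof shows what the handles actually are (index files, sockets, jars, ...)
    lsof -p $ES_PID | wc -l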
My configuration is as follows:
# Cluster Settings
cluster:
    name: search
# Server Address
#network:
#    host: 10.0.0.4
# Paths
#path:
#    logs: /var/log/elasticsearch
#    data: /var/data/elasticsearch
# Gateway Settings
#gateway:
#    recover_after_nodes: 1
#    recover_after_time: 5m
#    expected_nodes: 2
# Index Settings
index:
    number_of_shards: 2
    number_of_replicas: 1
    analysis:
        analyzer:
            filename:
                tokenizer: letter
                filter: [standard, lowercase, autocomplete]
            default:
                tokenizer: standard
                filter: [standard, lowercase, autocomplete]
        filter:
            autocomplete:
                type: edgeNGram
                min_gram: 3
                max_gram: 15
                side: front
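As I understand it, each Lucene shard keeps several files open per live segment, so my rough estimate of the handle count is (the ~10 files per segment is an assumption for non-compound segments):

    open files per node ≈ shards on node × segments per shard × files per segment (+ sockets, jars, logs)
    e.g. 50 shards × 30 segments × ~10 files ≈ 15,000 handles

With number_of_shards: 2 and number_of_replicas: 1 that count grows with every index and every segment, which is partly why I wonder whether the setup itself is wrong.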
I also start Elasticsearch from my init.d script with ulimit -n 20000, so the process should be allowed a maximum of 20,000 open files:
start() {
    echo -n $"Starting ${NAME}: "
    # Raise the open-file limit for this shell; the daemon inherits it
    ulimit -n 20000
    # ... the actual daemon launch follows here ...
}
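After a restart I want to verify that the limit actually reaches the JVM. My plan (assuming pgrep matches the process; /proc/<pid>/limits needs a 2.6.24+ kernel):

    ES_PID=$(pgrep -f org.elasticsearch.bootstrap.ElasticSearch)
    grep 'open files' /proc/$ES_PID/limits

If I read the docs right, starting the node with -Des.max-open-files=true should also make it log the maximum number of open files it sees at startup.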
Many thanks,
Michael