Hi guys,
I have a problem with Elasticsearch.
Even though I raised the open files limit on the system as described at
http://www.elasticsearch.org/tutorials/too-many-open-files/ , I still get
"too many open files" errors from Elasticsearch.
Here is the configuration:
cluster.name: xxxxxxx
index.cache.field.type: soft
index.cache.field.max_size: 10000
indices.store.throttle.type: merge
indices.store.throttle.max_bytes_per_sec: 20mb
indices.memory.index_buffer_size: 10%
index.refresh_interval: 30
index.translog.flush_threshold_ops: 20000
index.store.compress.stored: true
threadpool.search.type: fixed
threadpool.search.size: 4
threadpool.search.queue_size: 30
threadpool.bulk.type: fixed
threadpool.bulk.size: 4
threadpool.bulk.queue_size: 30
threadpool.index.type: fixed
threadpool.index.size: 4
threadpool.index.queue_size: 30
thrift.port: 9500
indices.recovery.concurrent_streams: 4
indices.recovery.max_bytes_per_sec: 40mb
cluster.routing.allocation.cluster_concurrent_rebalance: 20
indices.cache.filter.size: 100mb
Open files limit:
elasticsearch@xxxx:/$ ulimit -Sn
1000000
elasticsearch@xxxx:/$ ulimit -Hn
1000000
elasticsearch@xxxx:/$
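In case it helps, I also know the effective limit of a running process can be read from /proc/<pid>/limits, since ulimit in my shell may not match what the daemon itself was started with. A quick sketch (I read the current shell's own limit as a stand-in; the pgrep pattern for the real daemon is a guess at the process name):

```shell
# Read the "Max open files" limit actually in effect for a process.
# /proc/self/limits is the current shell; for the daemon, substitute its pid.
grep "Max open files" /proc/self/limits

# Hypothetical lookup for the elasticsearch JVM (the pattern is an assumption):
# ES_PID=$(pgrep -f org.elasticsearch | head -n 1)
# grep "Max open files" "/proc/$ES_PID/limits"
```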
I started Elasticsearch with the es.max-open-files=true option. Here is the log:
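The start command looks roughly like this (0.90-era syntax as I understood it from the tutorial; the install path is just a placeholder for this sketch):

```shell
# Run in the foreground (-f) and have Elasticsearch log its effective
# max open files at startup. The path below is a placeholder.
/path/to/elasticsearch/bin/elasticsearch -f -Des.max-open-files=true
```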
[2013-11-07 15:01:22,538][INFO ][bootstrap ]
max_open_files [65510]
[2013-11-07 15:01:23,131][INFO ][node ] [Wiz Kid]
version[0.90.5], pid[15414], build[c8714e8/2013-09-17T12:50:20Z]
[2013-11-07 15:01:23,131][INFO ][node ] [Wiz Kid]
initializing ...
[2013-11-07 15:01:23,518][INFO ][plugins ] [Wiz Kid]
loaded [transport-thrift], sites [HQ, head]
[2013-11-07 15:02:31,792][INFO ][node ] [Wiz Kid]
initialized
[2013-11-07 15:02:31,894][INFO ][node ] [Wiz Kid]
starting ...
[2013-11-07 15:02:31,911][INFO ][thrift ] [Wiz Kid]
bound on port [9500]
[2013-11-07 15:02:32,100][INFO ][transport ] [Wiz Kid]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address
{inet[/172.31.11.162:9300]}
[2013-11-07 15:02:35,129][INFO ][cluster.service ] [Wiz Kid]
new_master [Wiz Kid][ftUK_RPoSAqTfFEDYMFGuw][inet[/172.31.11.162:9300]],
reason: zen-disco-join (elected_as_master)
[2013-11-07 15:02:35,141][INFO ][discovery ] [Wiz Kid]
elasticsearch_logs/ftUK_RPoSAqTfFEDYMFGuw
[2013-11-07 15:02:35,228][INFO ][http ] [Wiz Kid]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address
{inet[/172.31.11.162:9200]}
[2013-11-07 15:02:35,228][INFO ][node ] [Wiz Kid]
started
[2013-11-07 15:02:55,523][INFO ][gateway ] [Wiz Kid]
recovered [208] indices into cluster_state
Exception:
[2013-11-07 15:00:23,343][DEBUG][action.bulk ] [Darkdevil]
[logs-2013-11-07][2] failed to execute bulk item (index) index
{[logs-2013-11-07][logs][G6KBeJFeRoiwKaLXZUKq-g],
source[{"message":"xxxxxxxxxx","host":null,"date":"2013-11-07T13:57:45.917Z"}]}
org.elasticsearch.index.engine.CreateFailedEngineException:
[logs-2013-11-07][2] Create failed for [logs#G6KBeJFeRoiwKaLXZUKq-g]
at
org.elasticsearch.index.engine.robin.RobinEngine.create(RobinEngine.java:369)
at
org.elasticsearch.index.shard.service.InternalIndexShard.create(InternalIndexShard.java:331)
at
org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:402)
at
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
at
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:533)
at
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:418)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException:
/var/lib/elasticsearch/elasticsearch_xxxxxxxxx/nodes/0/indices/logs-2013-11-07/2/index/_1kgc.fdt
(Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at
org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:466)
at
org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:288)
at
org.apache.lucene.store.RateLimitedFSDirectory.createOutput(RateLimitedFSDirectory.java:41)
at
org.elasticsearch.index.store.Store$StoreDirectory.createOutput(Store.java:419)
at
org.elasticsearch.index.store.Store$StoreDirectory.createOutput(Store.java:409)
at
org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:62)
at
org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:109)
at
org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:120)
at
org.apache.lucene.index.StoredFieldsProcessor.initFieldsWriter(StoredFieldsProcessor.java:88)
at
org.apache.lucene.index.StoredFieldsProcessor.finishDocument(StoredFieldsProcessor.java:120)
at
org.apache.lucene.index.TwoStoredFieldsConsumers.finishDocument(TwoStoredFieldsConsumers.java:65)
at
org.apache.lucene.index.DocFieldProcessor.finishDocument(DocFieldProcessor.java:264)
at
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:283)
at
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:432)
at
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1513)
at
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1188)
at
org.elasticsearch.index.engine.robin.RobinEngine.innerCreate(RobinEngine.java:470)
at
org.elasticsearch.index.engine.robin.RobinEngine.create(RobinEngine.java:364)
... 8 more
There are 1040 shards (5 shards per index).
Xms and Xmx for Elasticsearch are both set to 4g.
The machine has 8 GB of memory and an Intel Xeon 5560 CPU (4 cores, 3 GHz).
Could anyone advise me on how to configure Elasticsearch?
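For scale, my rough back-of-envelope on descriptors (the per-shard file count is purely an assumed number for illustration; the real figure varies with segment counts and codec):

```shell
# 208 recovered indices * 5 shards each = 1040 shards on this one node.
# FILES_PER_SHARD is an assumption, not a measured value.
INDICES=208
SHARDS_PER_INDEX=5
FILES_PER_SHARD=100
TOTAL_SHARDS=$((INDICES * SHARDS_PER_INDEX))
echo "shards: $TOTAL_SHARDS"
echo "estimated open files: $((TOTAL_SHARDS * FILES_PER_SHARD))"
```

Even with a modest per-shard file count, that many shards on a single node adds up quickly.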
--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.