I'm having trouble with the maximum number of indices per node. My server has 8 GB of RAM, and -Xms and -Xmx are both set to 7 GB, but I can only create 100 indices on a node; the node crashes after 100 indices. I wonder if there is some configuration that solves this problem.
I misread your post as "I can only index 100 documents", but the suggestion still stands.
Is there anything in the logs that indicates what happened when the node crashed? Each shard is a Lucene index, so many indices can result in too many open files. Which OS are you using? If *nix, what is your ulimit set to?
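For what it's worth, if it turns out to be Linux, something like this shows the limit in effect and how many descriptors the node actually holds open (the pgrep pattern is just a guess at the process name):

    # Max open file descriptors allowed for processes started from this shell
    ulimit -n

    # Count the file descriptors the running Elasticsearch JVM holds open
    ES_PID=$(pgrep -f elasticsearch | head -n 1)
    ls /proc/$ES_PID/fd | wc -l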
I am using Linux, and elasticsearch.yml is the default. When the index locks up, it generates this error:
org.elasticsearch.index.engine.RefreshFailedEngineException: [0.93333500][0] Refresh failed
        at org.elasticsearch.index.engine.robin.RobinEngine.refresh(RobinEngine.java:788)
        at org.elasticsearch.index.shard.service.InternalIndexShard.refresh(InternalIndexShard.java:403)
        at ...
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.FileNotFoundException: /usr/local/share/elasticsearch/data/elasticsearch/nodes/0/indices/0.93333500/0/index/_0.prx (Too many open files)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
        at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)
        at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)
        at org.apache.lucene.store.XNIOFSDirectory.createOutput(XNIOFSDirectory.java:48)
        at org.elasticsearch.index.store.Store$StoreDirectory.createOutput(Store.java:487)
        at org.elasticsearch.index.store.Store$StoreDirectory.createOutput(Store.java:459)
        at org.apache.lucene.index.FormatPostingsPositionsWriter.<init>(FormatPostingsPositionsWriter.java:43)
        at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:57)
        at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)
        at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)
        at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
        at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)
        at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)
        at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
        at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
        at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
        at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3552)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:450)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
        at org.apache.lucene.index.DirectoryReader.doOpenFromWriter(DirectoryReader.java:413)
        at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:432)
        at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:375)
        at org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:508)
        at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:109)
        at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:57)
        at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:137)
        at org.elasticsearch.index.engine.robin.RobinEngine.refresh(RobinEngine.java:769)
        ... 5 more
What is the crash message you encountered? How many shards do you create per index? How many nodes did you set up, and how many documents did you index? If you went with the default setting of 5 shards per index, you created 500 Lucene indexes on a single machine, which is IMHO quite an impressive number if they are filled with documents and being searched.
Of course, you can lower the shard count to 1, so that you can create 500 physical indexes. If you use index aliases and routing, you can have thousands upon thousands of indexes from a logical point of view via the API.
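For illustration, a rough sketch of that with the REST API; the index name, alias name, customer_id field, and routing value below are all made up for the example:

    # Create one physical index with a single shard (and one replica)
    curl -XPUT 'http://localhost:9200/logs' -d '{
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 1
      }
    }'

    # Expose a per-customer "logical index" as a filtered, routed alias
    curl -XPOST 'http://localhost:9200/_aliases' -d '{
      "actions": [
        { "add": {
            "index": "logs",
            "alias": "customer_42",
            "routing": "42",
            "filter": { "term": { "customer_id": "42" } }
        } }
      ]
    }'

Requests that go through the alias are routed to a single shard and filtered to that customer's documents, so one physical index can serve many logical ones.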
You would need to increase your ulimit, but better yet, perhaps you should
refactor your index design and reduce the number of indices. Follow all
of Jörg's suggestions.
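If you do raise the limit, on most Linux distributions (assuming PAM applies limits.conf, which is the usual setup) that looks roughly like this; the user name and 65535 are example values:

    # /etc/security/limits.conf -- adjust user and limit to taste
    elasticsearch soft nofile 65535
    elasticsearch hard nofile 65535

    # verify in a fresh login session before starting the node
    ulimit -n

A quick temporary alternative is running 'ulimit -n 65535' in the shell that starts Elasticsearch, which only affects that session.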