Maximum limit for index

Hi,

I'm having trouble with the maximum number of indices per node. My server has 8 GB of RAM and -Xms and -Xmx are both set to 7 GB, but I can only create 100 indices on a node; the node crashes after the 100th index. I wonder if there is some configuration that solves this problem.

thanks

Elasticsearch by default uses memory outside of the JVM heap to store the
index data; look at the cache.memory.direct setting.

You should lower your JVM heap to roughly 50% of your total memory, since
the JVM heap is only used for things like the field and filter caches.
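
On an 8 GB box that would mean a heap of roughly 4 GB. A minimal sketch,
assuming a standard tarball install whose startup script honors the usual
ES_HEAP_SIZE variable (otherwise pass -Xms/-Xmx to the JVM yourself):

    # Give the JVM half of the machine's 8 GB; the rest stays available
    # to the OS page cache, which Lucene leans on heavily.
    export ES_HEAP_SIZE=4g    # sets both -Xms and -Xmx
    bin/elasticsearch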

Cheers,

Ivan

On Tue, Dec 11, 2012 at 4:30 AM, Honjoya ti.honjoya@gmail.com wrote:

> I'm having trouble with the maximum number of indices per node. My server
> has 8 GB of RAM and -Xms and -Xmx are both set to 7 GB, but I can only
> create 100 indices on a node; the node crashes after the 100th index.


--

I misread your post as "I can only index 100 documents", but the suggestion
still stands.

Is there anything in the logs that indicates what happened when the node
crashed? Each shard is a Lucene index, so many indices can result in too
many open files. Which OS are you using? If *nix, what is your ulimit set
to?
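
For a quick check (assuming a Linux shell; run it as the same user that
starts Elasticsearch, since the limit is per user):

    # open-file limit for the current user/shell
    ulimit -n

    # file descriptors currently held by the Elasticsearch JVM
    # (the pgrep pattern is a guess; adjust to how you start the node)
    ls /proc/$(pgrep -f elasticsearch | head -n1)/fd | wc -l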

--
Ivan

On Tue, Dec 11, 2012 at 4:30 AM, Honjoya ti.honjoya@gmail.com wrote:

> with the maximum index per node, my serve

--

Hi Ivan,

I am using Linux, and elasticsearch.yml is at its defaults.
When the index locks up, it generates this error:

org.elasticsearch.index.engine.RefreshFailedEngineException: [0.93333500][0] Refresh failed
    at org.elasticsearch.index.engine.robin.RobinEngine.refresh(RobinEngine.java:788)
    at org.elasticsearch.index.shard.service.InternalIndexShard.refresh(InternalIndexShard.java:403)
    at ...
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.FileNotFoundException: /usr/local/share/elasticsearch/data/elasticsearch/nodes/0/indices/0.93333500/0/index/_0.prx (Too many open files)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
    at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)
    at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)
    at org.apache.lucene.store.XNIOFSDirectory.createOutput(XNIOFSDirectory.java:48)
    at org.elasticsearch.index.store.Store$StoreDirectory.createOutput(Store.java:487)
    at org.elasticsearch.index.store.Store$StoreDirectory.createOutput(Store.java:459)
    at org.apache.lucene.index.FormatPostingsPositionsWriter.<init>(FormatPostingsPositionsWriter.java:43)
    at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:57)
    at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)
    at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)
    at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
    at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)
    at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)
    at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)
    at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)
    at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
    at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3552)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:450)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:399)
    at org.apache.lucene.index.DirectoryReader.doOpenFromWriter(DirectoryReader.java:413)
    at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:432)
    at org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:375)
    at org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:508)
    at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:109)
    at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:57)
    at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:137)
    at org.elasticsearch.index.engine.robin.RobinEngine.refresh(RobinEngine.java:769)
    ... 5 more

What is the crash message you encounter? How many shards do you create per
index? How many nodes did you set up, and how many documents did you index?

If you went with the default setting of 5 shards per index, you created 500
Lucene indexes on a single machine, which IMHO is quite an impressive
number if they are filled with documents and being searched.

Of course you can lower the shard count to 1, so the same machine can hold
500 physical indexes. If you use aliases for indexes and routing, you can
have thousands and thousands of indexes from a logical point of view via
the API.
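
To make that concrete, here is a rough sketch of the one-shard-plus-aliases
approach (the index name "logs", alias "user_42", and field "user_id" are
invented for the example; filtered aliases with routing are available in
recent 0.19/0.20 releases):

    # one physical index with a single shard and no replicas
    curl -XPUT 'localhost:9200/logs' -d '{
      "settings": { "number_of_shards": 1, "number_of_replicas": 0 }
    }'

    # a logical per-user "index" on top of it: a filtered alias whose
    # routing keeps each user's documents on one shard
    curl -XPOST 'localhost:9200/_aliases' -d '{
      "actions": [
        { "add": { "index": "logs", "alias": "user_42",
                   "routing": "42",
                   "filter": { "term": { "user_id": "42" } } } }
      ]
    }'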

Jörg

--

Exactly as I expected: Too many open files.

You would need to increase your ulimit, but better yet, perhaps you should
refactor your index design and reduce the number of indices. Follow all of
Jörg's suggestions.
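
For reference, a persistent raise on Linux usually looks something like
this (a sketch; it assumes Elasticsearch runs as an "elasticsearch" user
and that PAM applies /etc/security/limits.conf, which varies by
distribution and init setup):

    # /etc/security/limits.conf
    elasticsearch  soft  nofile  64000
    elasticsearch  hard  nofile  64000

    # or just for the current shell, before starting the node
    ulimit -n 64000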

Cheers,

Ivan

On Wed, Dec 12, 2012 at 12:08 PM, Honjoya ti.honjoya@gmail.com wrote:

> Caused by: java.io.FileNotFoundException:
> /usr/local/share/elasticsearch/data/elasticsearch/nodes/0/indices/0.93333500/0/index/_0.prx
> (Too many open files)
>     at java.io.RandomAccessFile.open(Native Method)
>     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
>     at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>

--

The problem was indeed related to "Too many open files".

Thank you all for your help.