Index size is 30 GB

Hi,

How would you recommend architecting our cluster? (OR, What is the optimal size for an index?)

We have about 35 indices running on a one-node cluster, averaging 20 MB each. All of the indices have similar properties except one: the index dedicated to attachments, which uses the attachments plugin. It is currently approximately 30 GB, but that's only about 10% of our yet-to-be-indexed attachments. How big can it grow, assuming we're running a machine with 8 GB of memory, with 4 GB going to Elasticsearch (following this thread: http://elasticsearch-users.115913.n3.nabble.com/How-much-memory-to-allocate-to-heap-td4029248.html)? Up until now we've had 2 failures, and the last logs show one of the attachment shards causing a heap out-of-memory error.
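For what it's worth, a minimal sketch of how a 4 GB heap might be set on a 0.90.x install, assuming the standard startup script that reads the ES_HEAP_SIZE environment variable (the install path is a placeholder):

    # Cap the JVM heap at half of the machine's 8 GB, as in the thread above.
    export ES_HEAP_SIZE=4g
    /path/to/elasticsearch/bin/elasticsearch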

Also, what happens if each of the other 34 indices grows past 100 MB?

I've seen a similar question here (http://elasticsearch-users.115913.n3.nabble.com/Optimal-Index-size-td2665918.html), but felt it doesn't quite have the same properties.

Thanks,
Eyal.

Hello Eyal,

I think there's no magic number for the optimal size of an index. But I
know that more indices will typically consume more memory than fewer
indices, because you'd have more shards.

Many 20 MB indices seem too fragmented for my taste, but then it also
matters how your data gets accessed.
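If those small indices don't all need the default five shards, one way to keep the shard count down is to create them with a single shard each. A minimal sketch, where the index name and settings values are placeholders rather than a recommendation:

    # Create an index with one primary shard and no replicas
    curl -XPUT 'http://localhost:9200/small_index_example' -d '{
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0
      }
    }'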

Also, when you run out of memory, it's interesting to know why that
happened. For example, is it that your field data cache
(http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html)
is too large, because it's unbounded by default?
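If it does turn out to be field data, one way to bound it on 0.90+/1.x nodes is the indices.fielddata.cache.size setting in config/elasticsearch.yml; the 40% below is only an illustrative value, not a recommendation:

    # config/elasticsearch.yml
    # Evict field data once it reaches this share of the heap,
    # instead of letting it grow unbounded (the default).
    indices.fielddata.cache.size: 40%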

One way to know is by checking the node's info
(http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-info.html)
or to monitor your ES with something like our SPM for Elasticsearch
(http://sematext.com/spm/elasticsearch-performance-monitoring/).
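Even without extra tooling, the node stats API gives a quick view of heap and field data usage; a minimal sketch, assuming the node listens on localhost:9200 (on some versions you may need flags such as ?jvm=true to include the JVM section):

    # Per-node statistics, including indices and (depending on version/flags) JVM heap usage
    curl -XGET 'http://localhost:9200/_nodes/stats?pretty'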

Also, what's the version of your ES? For most situations, 0.90+ will use
less memory than earlier versions, especially when faceting.

--
http://sematext.com/ -- Elasticsearch -- Solr -- Lucene
