Optimal index size

Hi,

I have read some threads about splitting indexes up and giving them all
the same alias in order to improve refresh and other performance.

Does anyone have an indication of an ideal index size for best
performance? My indexes are currently about 500MB each; if I split
them up, how many indexes should I split them into?

Regards,

David.
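(A minimal sketch of the pattern referred to here, several physical indexes behind one shared alias, assuming Elasticsearch is reachable on localhost:9200. The index and alias names are hypothetical, the indexes are assumed to already exist, and API details vary by version:)

# Minimal sketch: several small indexes behind one shared alias
# (hypothetical names throughout). Standard library only; assumes
# Elasticsearch on localhost:9200 and that the indexes already exist.
import json
import urllib.request

def es(method, path, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request("http://localhost:9200" + path,
                                 data=data, method=method)
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Point one alias at several smaller indexes. Queries sent to the
# alias fan out across all of them, so clients only ever see "docs".
actions = [{"add": {"index": name, "alias": "docs"}}
           for name in ("docs-1", "docs-2", "docs-3")]
print(es("POST", "/_aliases", {"actions": actions}))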

Shay Banon replied:

500MB indices are quite small; you probably don't need to split them up. Of course (sadly), this depends on many factors: for example, how many documents you have, whether you do sorting or faceting, and how much memory is allocated to Elasticsearch.

davrob2 replied:

There are 300,000 to 900,000 documents in each, all small documents. The
big issue is sorting: we have lots of sorted fields, and doing updates
followed by sorted queries on Compass killed us, which is the main reason
for switching to Elasticsearch. So I'd like to get optimal performance on
updates in parallel with near-real-time sorted search, with, say, 300-500
updates a minute at peak time.
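(For concreteness, a sorted query of the kind described here might look roughly like the following; the index name and sort field are hypothetical:)

# A sorted search (hypothetical index and field names). Sorting on a
# field loads its values into memory, which is what the field cache
# discussion below is about.
import json
import urllib.request

body = {
    "query": {"match_all": {}},
    "sort": [{"updated_at": {"order": "desc"}}],
    "size": 20,
}
req = urllib.request.Request("http://localhost:9200/docs/_search",
                             data=json.dumps(body).encode(),
                             method="POST")
req.add_header("Content-Type", "application/json")
with urllib.request.urlopen(req) as resp:
    print(len(json.loads(resp.read())["hits"]["hits"]), "hits")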


Shay Banon replied:

Then there is no need to split them up; just make sure you have enough memory for sorting. You can check that using the node stats API, which lists the field cache size (the cache used for sorting). If you start to see evictions there, it's due to memory constraints.
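(A sketch of that check, assuming Elasticsearch on localhost:9200. The stats endpoint path and the JSON field names have changed across Elasticsearch versions, so adjust for yours:)

# Pull node stats and print each node's field cache section. Older
# releases expose it under indices.cache, newer ones under
# indices.fielddata; the lookup below tries both.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:9200/_nodes/stats") as resp:
    stats = json.loads(resp.read())

for node_id, node in stats["nodes"].items():
    indices = node.get("indices", {})
    cache = indices.get("fielddata") or indices.get("cache") or {}
    print(node.get("name", node_id), cache)
    # A steadily growing eviction count here means the sort cache is
    # being squeezed by memory pressure.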