There are 300,000 to 900,000 documents in each index, and the documents
are small. The big issue is sorting: we have lots of sorted fields, and
doing updates followed by sorted queries on Compass killed us, which is
the main reason for switching to ElasticSearch. So I'd like to get
optimal performance on updates in parallel with near-real-time sorted
search, with, say, 300-500 updates a minute at peak time.
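For what it's worth, the knob I've been experimenting with for the
update-heavy / near-real-time case is the index refresh interval, which
can be changed dynamically through the settings API. A rough sketch
(the index name `docs` and the 5s value are just placeholders, not a
recommendation from this thread):

```shell
# Relax the refresh interval so heavy updates don't force a refresh
# every second; sorted searches then see new docs within ~5s.
curl -XPUT 'http://localhost:9200/docs/_settings' -d '{
  "index": {
    "refresh_interval": "5s"
  }
}'
```

A longer interval batches more segment work per refresh, at the cost of
slightly staler search results.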
On Mar 12, 8:07 am, Shay Banon shay.ba...@elasticsearch.com wrote:
500MB indices are quite small; you probably don't need to split them up. Of course (sadly), this depends on many factors: for example, how many documents you have, whether you do sorting / faceting, and how much memory is allocated to elasticsearch.
On Friday, March 11, 2011 at 7:24 PM, davrob2 wrote:
I have read some threads about splitting indexes up and giving them all
the same alias in order to improve refresh and other performance.
Does anyone have an indication of an ideal index size for best
performance? My indexes are currently about 500MB each; if I
split these up, how many indexes should I split into?
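For anyone landing on this thread later: the split-indices-behind-one-alias setup mentioned above can be wired up with the `_aliases` API, roughly like this (the `docs_1`/`docs_2`/`docs` names are placeholders for illustration):

```shell
# Point a single alias at several smaller indices; searches against
# "docs" then fan out across both, while writes target each index
# directly.
curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions": [
    { "add": { "index": "docs_1", "alias": "docs" } },
    { "add": { "index": "docs_2", "alias": "docs" } }
  ]
}'
```

Queries go to `http://localhost:9200/docs/_search` exactly as if it were one index.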