[Scaling elastic server] How much load can Elasticsearch handle?

I would suggest installing something like BigDesk or Marvel to check
your usage, in particular heap, threads and file descriptors.
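If you just want a quick look without installing anything, the nodes stats
API exposes the same numbers over REST; a minimal sketch, assuming
Elasticsearch is listening on localhost:9200:

  # heap, file descriptors and thread pools for every node
  curl -XGET 'http://localhost:9200/_nodes/stats/jvm,thread_pool,process?pretty'

In the reply, keep an eye on jvm.mem.heap_used_percent,
process.open_file_descriptors and the rejected counters under thread_pool
(the search pool in particular).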

Every shard is a Lucene index, so the more shards you have, the more
searches you can run in parallel, but every shard also costs memory and
file descriptors.
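Shard and replica counts are set per index; replicas can be raised or
lowered later, but the shard count is fixed when the index is created.
A sketch, using a hypothetical index name my_index:

  # create the index with 3 primary shards and 1 replica of each
  curl -XPUT 'http://localhost:9200/my_index' -d '{
    "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
  }'

  # replicas can be changed at any time
  curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
    "number_of_replicas": 2
  }'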

I don't believe anyone can predict with any certainty how fast your
searches will be, as there are too many variables. Try running your queries
in one of the query browsers such as Sense and you will see how long each
search took; it is reported in the took field of the reply.
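For example, a search like this (the index name and field are just
placeholders) reports the server-side query time in milliseconds:

  # run the query and check the "took" field (milliseconds) in the reply
  curl -XGET 'http://localhost:9200/my_index/_search?pretty' -d '{
    "query": { "match": { "title": "elasticsearch" } }
  }'

The first lines of the response look something like "took": 12,
"timed_out": false, ... and that took value is what you want to watch
while you increase the load.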

On Tuesday, 29 April 2014 15:02:24 UTC+2, Abrar Sheikh wrote:

Hi,

I have a single AWS EC2 large instance with 7.5 GB RAM, a 100 GB hard
drive and a dual-core 2.6 GHz CPU. My Elasticsearch instance holds around
10,000,000 records on average, and I use somewhat complex queries. I call
the Elasticsearch APIs from my PHP code, which is exposed as a REST service
(needed to do some post-processing of the data). My question is: how much
load can my server handle, and at what point do I shift to a multi-node
architecture? What effect do the number of shards and replication have on
performance? With my current system configuration, how many queries per
second (QPS) can my Elasticsearch handle?

Thanks and Regards,
Abrar.
