How many indices can Elasticsearch handle?

Hi,
I am currently running my Elasticsearch cluster on two EC2 d2.2xlarge machines, so I have 8 cores, 32 GB of heap, 32 GB left for the OS, and an 11 TB HDD on each machine, plus one EC2 d2.xlarge as a master-only node. Initially I thought of maintaining an index per customer with 10 shards each, and I hit a "too many open files" exception. Then I reduced my indices to 200 and tried again; even with 2000 primary and 2000 replica shards altogether I still get "too many open files". My max open files limit is set to 90000 on each box. So, what is the right number of indices this cluster can maintain with continuous concurrent indexing into all of them?

Shard sizing should be based partly on how big the index is, so how large are your indices, and how many are there in total?

Actually, I couldn't measure the shard size, because the cluster fails during indexing itself. When I index into 200 indices, each with 10 shards and 1 replica, I get the "too many open files" exception. So I want to know how many indices I can maintain on this cluster without hitting these exceptions.

Why do you configure 10 shards if you only have two nodes in your cluster? Even a single shard would be enough to use all machines since one of them could host the primary and the other one the replica (assuming you're using the default of having one replica for each primary shard).
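
For example, an index laid out like that is just a settings body on index creation. A minimal sketch, assuming Python with the `requests` library and a cluster reachable on localhost:9200 (the index name is only illustrative):

```python
# Minimal sketch: create an index with one primary shard and one replica,
# so one data node can hold the primary and the other the replica.
# Assumes a cluster on localhost:9200; the index name is illustrative.
import requests

settings = {
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 1,
    }
}

resp = requests.put("http://localhost:9200/customer-acme", json=settings)
print(resp.json())
```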

Instead of creating all indices at once, why don't you start with a few indices with a reasonable number of shards and on-board customer data into them? As shard sizes start approaching the recommended limit (I guess ~50 GB), create new indices.

This way you will only create the required number of indices, and you will know when your nodes can no longer handle the data, so you can scale. Since you will have a couple of shards in each index, each index will start leveraging the new node(s) too.
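
The shard-size check itself is easy to script against the `_cat/shards` API. A rough sketch, assuming Python with `requests`, a cluster on localhost:9200, and a version recent enough to support `format=json` on the cat APIs; the ~50 GB threshold is just the rough figure mentioned above:

```python
# Rough sketch: list primary shards and flag any approaching ~50 GB, the point
# at which you would start a new index for fresh customer data.
# Assumes a cluster on localhost:9200 and cat-API support for format=json.
import requests

SHARD_LIMIT_GB = 50

resp = requests.get(
    "http://localhost:9200/_cat/shards",
    params={"format": "json", "bytes": "gb", "h": "index,shard,prirep,store"},
)

for shard in resp.json():
    if shard["prirep"] != "p" or shard["store"] is None:
        continue  # skip replicas and unassigned shards
    size_gb = float(shard["store"])
    if size_gb >= SHARD_LIMIT_GB:
        print(f"{shard['index']} shard {shard['shard']}: {size_gb:.0f} GB "
              "-- time to create a new index for new data")
```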

@jpountz
I am configuring 10 shards because, thanks to the extra concurrency of having more shards, I could see the indexing rate rise with 10 shards compared to a single-shard model.
@mosiddi
My requirement is to maintain a separate index for every customer, and all the customers generate data concurrently, so I have to index into their indices concurrently.

A separate index per customer is definitely going to be a costly option. Plan to have shared indices with the right partitioning and isolation. Ensure you put a reasonable number of customers in each shared index so that account migration and index maintenance stay manageable in the future.
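
For what it's worth, one common way to get that partitioning and isolation inside a shared index is to tag every document with its customer id and use it as the routing value, so each customer's data stays on one shard and searches can be routed and filtered the same way. A rough sketch, assuming Python with `requests`; the index, type, and field names are only illustrative:

```python
# Rough sketch of sharing one index across customers: every document carries a
# customer_id field and is routed by it, so a customer's documents land on a
# single shard and searches can be routed to that shard and filtered by id.
# Index, type, and field names are illustrative; assumes localhost:9200.
import requests

ES = "http://localhost:9200"
INDEX = "customers-shared-001"

def index_event(customer_id, doc_id, doc):
    doc = dict(doc, customer_id=customer_id)
    return requests.put(
        f"{ES}/{INDEX}/event/{doc_id}",
        params={"routing": customer_id},
        json=doc,
    )

def search_events(customer_id, text):
    body = {
        "query": {
            "bool": {
                "must": [
                    {"term": {"customer_id": customer_id}},
                    {"match": {"message": text}},
                ]
            }
        }
    }
    return requests.post(
        f"{ES}/{INDEX}/event/_search",
        params={"routing": customer_id},
        json=body,
    )

# e.g. index_event("acme", "1", {"message": "login failed"})
#      search_events("acme", "login")
```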

What would the cost be if I have to maintain 200 shared indices, each with 2 shards and 1 replica, with continuous concurrent indexing? Currently I am using two boxes (8 cores, 32 GB RAM, 11 TB HDD) as data nodes.

You need to test and figure it out; there is no set value for this, as it depends on many things.
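
As a rough back-of-envelope: 200 indices × 2 primaries × 2 copies (primary + replica) = 800 shards, or about 400 per data node, and every shard is a full Lucene index with its own file handles and memory overhead, which is a lot to ask of two machines.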

I recommend reading this page for insight:

https://www.elastic.co/guide/en/elasticsearch/guide/current/deploy.html

If you still get the error, I would check whether the max open files limit is actually being applied.

You will still run into problems with too many shards, but if max open files is configured correctly on your system they are usually different ones, such as memory shortage.

I think there is some REST stats call that shows the actual max open files, only I could not find it just now...
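
If it is the one I am thinking of, the open and max file descriptor counts show up under the process section of the nodes stats API (on some versions they are in the nodes info API instead). A minimal sketch, assuming Python with `requests` and a cluster on localhost:9200:

```python
# Minimal sketch: print the open and max file descriptor counts that
# Elasticsearch itself reports per node, from the nodes stats process section.
# Assumes a cluster on localhost:9200; field availability varies by version.
import requests

resp = requests.get("http://localhost:9200/_nodes/stats/process")
for node_id, node in resp.json()["nodes"].items():
    proc = node.get("process", {})
    print(node.get("name"),
          "open:", proc.get("open_file_descriptors"),
          "max:", proc.get("max_file_descriptors"))
```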