In my experience Elasticsearch scales poorly with a large number of small indices, which is exactly what you get with an index per user. This has improved in recent versions, but there is still a reason the default shard limit is there.
This has been asked a number of times over the years (most of the references I could find are a bit old), and I believe the recommendation is still to avoid trying to scale this way. If that has changed, I am sure someone will chime in and correct me.
Assuming the indices are small, I would start by setting the number of primary shards to 1, which will cut the shard count in half.
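If the per-user indices are created on demand, one way to apply that setting to every new index is an index template. Here is a minimal sketch using the Python client, assuming an 8.x cluster reachable on localhost and per-user indices named like `user-*`; the template name, pattern and replica count are placeholders for whatever your setup actually uses:

```python
from elasticsearch import Elasticsearch

# Hypothetical connection details; adjust for your cluster.
es = Elasticsearch("http://localhost:9200")

# Composable index template that gives every new index matching the
# per-user naming pattern a single primary shard. "user-*" is an
# assumption; replace it with your actual index naming scheme.
es.indices.put_index_template(
    name="per-user-single-shard",
    index_patterns=["user-*"],
    template={
        "settings": {
            "number_of_shards": 1,
            "number_of_replicas": 1,
        }
    },
)
```

Note that a template only affects indices created after it is in place; existing indices cannot have their primary shard count changed in place, so you would need to shrink or reindex those.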
The best way forward after that depends on how many users you need to support, the use case in terms of mappings and the queries/aggregations used, and whether users have direct access to Elasticsearch or not. If you can provide some additional details on this, we may be able to offer some suggestions.