One plausible reason is that you have one or more index templates configured with far too many shards. I'd look there (and at any Logstash/Beats output configuration) and update the shard count to something more sensible (certainly nothing larger than 12).
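For example, assuming a composable index template named `logstash-template` (the name is hypothetical; on older versions, legacy templates live under `_template` instead), you could inspect it and then re-PUT it with a lower shard count, something like:

```
GET _index_template/logstash-template

PUT _index_template/logstash-template
{
  "index_patterns": ["logstash-*"],
  "template": {
    "settings": {
      "index.number_of_shards": 1,
      "index.number_of_replicas": 1
    }
  }
}
```

Note that PUT replaces the whole template, so copy the existing body from the GET and change only the settings. Also keep in mind the new settings only apply to indices created after the change; existing indices keep their current shard counts.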
As for getting the shard count back down to a reasonable number, first check whether the excess shards are primaries or replicas. If they're primaries, you can use the shrink API to reduce the primary shard count, or simply delete high-shard indices that are no longer valuable. If instead you have an extremely high (misconfigured) number of replicas, you can just lower the replica count on each index.
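Here's a minimal sketch of both paths, with hypothetical index and node names. Shrinking requires the index to be read-only with a copy of every shard on one node, and the target's primary shard count must be a factor of the source's (1 always works):

```
# Prepare for shrink: drop replicas, relocate all shards to one node, block writes
PUT over-sharded-index/_settings
{
  "settings": {
    "index.number_of_replicas": 0,
    "index.routing.allocation.require._name": "node-1",
    "index.blocks.write": true
  }
}

# Shrink into a new index with a single primary shard,
# clearing the temporary settings on the target
POST over-sharded-index/_shrink/over-sharded-index-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}

# If the problem is replicas instead, just lower the replica count in place
PUT over-sharded-index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}

# And indices that are no longer valuable can simply be deleted
DELETE over-sharded-index
```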
There's also the cluster allocation explain API, which can give you insight into why a shard isn't assigned.
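Called with no body, it explains the first unassigned shard it finds; you can also ask about a specific shard (index name hypothetical):

```
GET _cluster/allocation/explain

GET _cluster/allocation/explain
{
  "index": "over-sharded-index",
  "shard": 0,
  "primary": true
}
```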