Extra ES nodes started up on some data machines - best approach to fix?

I restarted my cluster the other day, but something odd got stuck along the way: 15 of the 16 data machines ended up running an extra ES instance in the same cluster. This ended badly - two nodes with identical display names, the system locked up, and so on.
When I restarted again, to my horror, we were missing shards. I quickly figured out that the missing shards had been moved into the second instance's storage location.
What is the best way to resolve this? Should we spawn second ES instances on the culprit machines (with different instance names), or can a simple
mv escluster/nodes/1/indices/data1/* escluster/nodes/0/indices/data1/
do the job? (A rough sketch of what I have in mind for the latter is below.)
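
For concreteness, this is roughly the sequence I'm picturing for the move option - purely a sketch, assuming the single data path layout shown above and that ES is completely stopped on the affected machine first (the service name and the "elasticsearch" file owner are guesses for our setup):

# on each affected data machine, with every ES instance stopped
sudo service elasticsearch stop            # or however the instances are managed

# confirm the stray second instance (nodes/1) really holds the missing shards
ls escluster/nodes/1/indices/data1/

# copy rather than a blind mv, so nothing is lost if this turns out to be wrong
cp -a escluster/nodes/1/indices/data1/. escluster/nodes/0/indices/data1/
chown -R elasticsearch:elasticsearch escluster/nodes/0/indices/data1/

sudo service elasticsearch start
# only once the cluster is green again:
# rm -rf escluster/nodes/1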

Thanks!
