The easiest option, if you still have all of your source data available, is simply to reindex from scratch (into a brand-new Elasticsearch 2.1.1 cluster!).
If this is not an option: since your cluster is in an inconsistent state, it's difficult to say exactly what state your data is in, and whether any single node holds all of the data. It looks like there is a chance that only espav02 is out of sync; the other two nodes might have all the data, because they mutually hold copies of each shard, those copies match each other's document counts, and that count is the maximum seen for that shard.
It would be best to conduct the following operations during a long maintenance window, and it would not be a bad idea to first test the process on another cluster. You could even test it on your laptop with three local nodes (you don't need a full set of the documents, and you don't need to start from an inconsistent state to verify that the process ends in a consistent state). I can't overstate the importance of doing this during a long maintenance window, and of testing the operation first.
Turn off any sources of indexing activity. Then, verify that these copies are in fact "good" copies.
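One way to check this (a sketch, assuming the cluster is reachable on localhost:9200; adjust the host and port to your setup) is to compare the per-shard document counts across nodes:

```shell
# List every shard copy with its state, host node, and document count.
# Copies of the same shard number should all report identical doc counts;
# a copy with fewer docs than its siblings is a suspect copy.
curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,node,docs'

# The same numbers are available as JSON via the stats API, broken
# down per shard, if you want to inspect them programmatically.
curl -s 'localhost:9200/_stats?level=shards&pretty'
```

Record these counts somewhere; you will want them again at the end to confirm the cluster came back consistent.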
For the next step, note that you will be offline for reads.
Then, shut down Elasticsearch on all three nodes and make a backup of the cluster.
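With Elasticsearch stopped, a filesystem copy of each node's data directory is sufficient. A minimal sketch, run on each node (the path below is a placeholder; use whatever `path.data` is set to in your elasticsearch.yml):

```shell
# With Elasticsearch stopped on this node, archive its data directory.
# Run this on all three nodes before changing anything else.
tar -czf ~/es-backup-$(hostname)-$(date +%F).tar.gz /path/to/elasticsearch/data
```

Keep these archives until you have fully verified the cluster at the end; they are your only way back if the recovery goes wrong.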
After you have verified that they are "good" copies and have taken a backup, start Elasticsearch on the two nodes that hold the good copies (every node except espav02), and wait for the cluster to reach a yellow state (it should promote a replica copy of shard 0 on one of those nodes to be a primary).
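Rather than polling by hand, the cluster health API can block until the desired state is reached (again assuming localhost:9200):

```shell
# Wait until the cluster reports at least yellow (all primaries
# allocated), giving up after 60 seconds if it never gets there.
curl -s 'localhost:9200/_cluster/health?wait_for_status=yellow&timeout=60s&pretty'
```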
At this point, you should be back online for reads, but keep any sources of indexing activity turned off.
Now, set the number of replicas to one; this will take the cluster to a green state. Then, move (but do not delete) the data directory on espav02, and start Elasticsearch on espav02. After it has started, set the number of replicas back to two. This will start a recovery onto espav02 from the copies that you verified were good.
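The steps above can be sketched as follows (replica settings applied to all indices via localhost:9200; the data path is a placeholder for your `path.data`):

```shell
# Drop to a single replica so the two good nodes can go green on their own.
curl -s -XPUT 'localhost:9200/_settings' \
  -d '{"index": {"number_of_replicas": 1}}'

# On espav02, with Elasticsearch still stopped there, move the old data
# directory aside -- do NOT delete it until the final verification passes.
mv /path/to/elasticsearch/data /path/to/elasticsearch/data.quarantined

# After espav02 has started and rejoined the cluster, restore two
# replicas; this triggers recovery of fresh shard copies onto espav02
# from the verified-good copies on the other nodes.
curl -s -XPUT 'localhost:9200/_settings' \
  -d '{"index": {"number_of_replicas": 2}}'
```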
Do note that during the recovery process you will see a lot of network activity and your cluster might not be responsive.
After this, the document counts should match.
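To confirm, you can re-run the same per-shard check and compare against the counts you recorded earlier (host and port are assumptions, as before):

```shell
# Every copy of a shard should now report the same document count.
curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,node,docs'

# The cluster should also be green again with no unassigned shards.
curl -s 'localhost:9200/_cluster/health?pretty'
```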
After you have verified the cluster is in a consistent state and that all of the data that you expect to be there is in fact there, you can restart your indexing sources.