Unbalanced disk usage with ES 2.4.x

I managed to capture new logs after trying to add new nodes to the cluster. Once ES had moved a few shards, I got many logged errors like the one below.

[2017-11-29 16:19:36,609][DEBUG][cluster.service          ] [els10] processing [indices_store ([[v3-large-customer-data-60][14]] active fully on other nodes)]: execute
[2017-11-29 16:19:36,609][DEBUG][indices.store            ] [els10] [v3-large-customer-data-60][14] failed to delete unallocated shard, ignoring
org.apache.lucene.store.LockObtainFailedException: Can't lock shard [v3-large-customer-data-60][14], timed out after 0ms
	at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:609)
	at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:537)
	at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:506)
	at org.elasticsearch.env.NodeEnvironment.deleteShardDirectorySafe(NodeEnvironment.java:344)
	at org.elasticsearch.indices.IndicesService.deleteShardStore(IndicesService.java:578)
	at org.elasticsearch.indices.store.IndicesStore$ShardActiveResponseHandler$1.execute(IndicesStore.java:303)
	at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)
	at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:480)
	at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:784)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
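
To illustrate how I'm watching the disk imbalance while this happens, here are the _cat API calls I run (just a sketch; it assumes ES answers on localhost:9200, adjust the host/port to your setup):

curl -s 'http://localhost:9200/_cat/allocation?v'
# shows shard count and disk usage per node, which is where the imbalance is visible

curl -s 'http://localhost:9200/_cat/shards/v3-large-customer-data-60?v'
# shows where each shard of the affected index currently lives and its state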