It looks like your average shard size is just over 300MB, which is very, very small. Having and querying a large number of small shards hurts performance. I would recommend that you read this blog post and look to reduce the number of shards in the cluster by at least a factor of 10.
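To get a feel for what a 10x reduction means, you can work backwards from a target shard size. The numbers below are made up for illustration (a few tens of GB per shard is the commonly cited sweet spot; adjust `total_gb` to your actual data volume):

```shell
# Hypothetical sizing: total data in the cluster and a target shard size.
total_gb=3000     # example only - substitute your cluster's total data size
target_gb=30      # rough target per shard; tens of GB is the usual guidance

# Ceiling division: how many primary shards you would want at that target.
echo $(( (total_gb + target_gb - 1) / target_gb ))
```

With those example numbers this suggests around 100 shards, versus the thousands implied by a ~300MB average. The shrink or reindex APIs can then be used to consolidate existing indices toward that count.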
It would also help if you could let us know which version you are using and what the specification of your nodes is.
If you are using networked storage, I would recommend you look at disk I/O, utilization and iowait, e.g. using iostat. It is quite likely that this is what is limiting your performance, so it is worth investigating first.
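As a sketch, `iostat -x 1` reports extended per-device statistics once per second, with `%util` in the last column. A quick way to spot saturated devices is to filter on that column; the sample lines piped in below are fabricated stand-ins for real `iostat -x` device rows:

```shell
# Print the names of devices whose %util (last column of iostat -x output)
# exceeds 90 - a rough sign the disk is the bottleneck.
flag_busy() {
  awk '$NF + 0 > 90 { print $1 }'
}

# Fabricated sample rows in iostat -x shape: device ... %util
printf 'sda 12.0 3.1 45.2\nsdb 80.5 9.9 97.8\n' | flag_busy
```

In practice you would run `iostat -x 1 | flag_busy` on each node while the cluster is under query load and watch iowait alongside it.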
If you have a number of reasonably sized shards, that means the average size of the others is even smaller, which is not good. I stand by my recommendation to dramatically reduce the number of shards.
If node4 is the one you have highlighted, it seems to have an unusual number of very small shards. As I do not know how your indices are queried, I cannot tell whether this would lead to higher load than on the other nodes. This is the node I would start looking at disk I/O on, though.