Some of the data nodes not processing requests

Hi, I am new to Elasticsearch. Our Elasticsearch cluster deployment has 12 nodes in total: 3 master nodes, 7 data nodes, and 3 client nodes. During a load test we observed that only 4 of the 7 data nodes were participating, and their CPU utilization was reaching 100%. I am assuming the other 3 nodes were not participating because their CPU utilization was negligible.

Our goal is to improve the scalability of our application, which uses Elasticsearch. Currently the CPU utilization of the ES data nodes is hindering that scalability.

Please advise on how to diagnose why the 3 data nodes are not participating in processing.

Are the indices you are indexing into and querying distributed evenly across all data nodes?

How can I check?

You can get this information through the _cat/shards API.
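For example, from any node in the cluster (a minimal sketch; adjust the host and port, assumed here to be localhost:9200):

curl -s 'http://localhost:9200/_cat/shards?v'

The v parameter adds column headers, which makes it easier to see which node each shard of each index is allocated to.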

Thank you Christian. Here is the output:

index shard prirep state docs store ip node
core_index_production_2 4 p STARTED 1672776 38.3gb 127.0.0.1 node1
core_index_production_2 0 p STARTED 1455556 28.8gb 127.0.0.1 node2
core_index_production_2 3 p STARTED 1468302 28.8gb 127.0.0.1 node3
core_index_production_2 1 p STARTED 1477667 29.1gb 127.0.0.1 node4
core_index_production_2 2 p STARTED 1665801 41.3gb 127.0.0.1 node5

Are you using routing or parent-child relationships that could cause some shards to receive more load than others? Are these the only shards in the cluster?

Are you using routing or parent-child relationships that could cause some shards to receive more load than others?
No idea. Please share the command for checking this.

Are these the only shards in the cluster?
The complete output is as follows:

index shard prirep state docs store ip node
core_index_production_2 1 p STARTED 1477667 29.1gb 127.0.0.1 node1
core_index_confirmation_production 1 p STARTED 19 117.9kb 127.0.0.1 node1
core_index_production_2 4 p STARTED 1672776 38.3gb 127.0.0.1 node2
core_index_invitation_production 0 p STARTED 209 130.9kb 127.0.0.1 node2
core_index_notification_production 0 p STARTED 20 22.1kb 127.0.0.1 node2
core_index_invitation_production 3 p STARTED 192 132.1kb 127.0.0.1 node3
core_index_notification_production 2 p STARTED 11 14.8kb 127.0.0.1 node3
core_index_confirmation_production 0 p STARTED 23 20.5kb 127.0.0.1 node3
core_index_production_2 0 p STARTED 1455556 28.8gb 127.0.0.1 node4
core_index_invitation_production 2 p STARTED 235 167.9kb 127.0.0.1 node4
core_index_confirmation_production 3 p STARTED 23 116.5kb 127.0.0.1 node4
core_index_invitation_production 1 p STARTED 232 140.3kb 127.0.0.1 node5
core_index_notification_production 1 p STARTED 12 10.9kb 127.0.0.1 node5
core_index_confirmation_production 2 p STARTED 20 138.7kb 127.0.0.1 node5
core_index_production_2 3 p STARTED 1468302 28.8gb 127.0.0.1 node6
core_index_notification_production 4 p STARTED 12 18.3kb 127.0.0.1 node6
core_index_confirmation_production 4 p STARTED 15 111.8kb 127.0.0.1 node6
core_index_production_2 2 p STARTED 1665801 41.3gb 127.0.0.1 node7
core_index_invitation_production 4 p STARTED 219 139.5kb 127.0.0.1 node7
core_index_notification_production 3 p STARTED 14 19.6kb 127.0.0.1 node7
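To follow up on the earlier question about routing and parent-child relationships: custom routing is specified by the client application when indexing and querying, so that needs to be checked in your application code rather than through a single API call. For parent-child relationships you can inspect the index mapping and look for a _parent field (or, in newer versions, a join field). A minimal sketch, assuming the index name core_index_production_2 from the output above and a node reachable on localhost:9200:

curl -s 'http://localhost:9200/core_index_production_2/_mapping?pretty'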