Queue Capacity Sizing

I am receiving the error below. The node is running ES 2.3.3.

```
[9992]: index [bro-2016.06.17], type [dns], id [AVVfLgt-PuabKO2qHMAs], message
[RemoteTransportException[[WORKER_NODE_7][10.1.55.14:9300][indices:data/write/bulk[s]]];
 nested: RemoteTransportException[[WORKER_NODE_7][10.1.55.14:9300][indices:data/write/bulk[s][p]]];
 nested: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@ac859e6
 on EsThreadPoolExecutor[bulk, queue capacity = 50,
 org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@219eb920[Running, pool size = 16,
 active threads = 16, queued tasks = 114, completed tasks = 65446]]];]
```

Should I be increasing my threadpool size? I have a cluster of approximately 12 data nodes and 10 client nodes, and each of those client nodes is also running Logstash. The node referenced in the error (the only one throwing such errors) currently has 32 GB of RAM and a 6 TB HDD.
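If I did raise anything, the setting I would be touching is, I believe, the bulk queue size in elasticsearch.yml on the data nodes. The value here is just an example, not something I have applied:

```
# elasticsearch.yml (ES 2.x) -- example value only, not a recommendation
threadpool.bulk.queue_size: 200
```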

Any help would be appreciated

Ideally, no.
A full threadpool is a symptom of something else, so you need to find out what that is. Are you measuring ES metrics such as load, heap use, and indexing/querying rates?
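As a first pass, the cat APIs will show whether that node is the only one rejecting bulk tasks and whether it is carrying more data than its peers. Something along these lines (the column list is just a suggestion):

```
# per-node bulk thread pool activity and rejections
curl 'localhost:9200/_cat/thread_pool?v&h=host,bulk.active,bulk.queue,bulk.rejected'

# disk use and shard counts per node
curl 'localhost:9200/_cat/allocation?v'
```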

I am using Marvel for monitoring. The strange part is that it only affects
this one node.

I am wondering what I should be looking for to identify what is causing
this. I use client nodes for all applications to connect to (Logstash,
Kibana, etc.), so I'm not sure why this one node (which is a data node) would
be overwhelmed with requests.

Is there something I could provide that would help identify the source
of this error?
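For example, I could grab the shard layout and hot threads for the node named in the error, if that would help. Something like this (node name taken from the error above):

```
# which shards live on the node that is rejecting bulk requests
curl 'localhost:9200/_cat/shards?v' | grep WORKER_NODE_7

# what that node is busy doing while the rejections happen
curl 'localhost:9200/_nodes/WORKER_NODE_7/hot_threads'
```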

Are you load balancing / round-robining the bulk requests?

Yes. In my Logstash config the output lists 10 client nodes to use when
ingesting data. Is there a specific option I need to enable to ensure Logstash
moves from one to the next when it gets a busy response?
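For reference, the output section looks roughly like this (host names are placeholders, the index pattern is inferred from the error above, and the real config lists all 10 client nodes):

```
output {
  elasticsearch {
    hosts => ["client-node-1:9200", "client-node-2:9200", "client-node-3:9200"]
    index => "bro-%{+YYYY.MM.dd}"
  }
}
```

My understanding is that the plugin spreads bulk requests across the hosts listed there, but I'm not sure whether it backs off from a node that responds with rejections, which is why I'm asking.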