The last restart of our cluster caused a heavy load, as a couple of rivers were triggered which together load about 40 GB of data into the (single-node) cluster.
This led to massive numbers of exceptions like the one shown in the logs:
[6726]: index [my_index], type [my_doc_type], id [2390342], message
[EsRejectedExecutionException[rejected execution (queue capacity 50) on
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1@4a94944]]
We didn't change any of the thread pool queue size settings (like
threadpool.bulk.queue_size). AFAIK in that case the queue size is unlimited.
So how is it possible that the exception above can occur? Why does the log
message indicate a queue size of 50?
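(As a side note: in Elasticsearch 1.x the bulk thread pool queue is in fact bounded by default, at 50, not unlimited, which matches the capacity in the exception. A minimal sketch of raising it at runtime via the cluster settings API, assuming a default local cluster on port 9200; the value 500 is just an example, not a recommendation:)

```shell
# Assumption: Elasticsearch 1.x on localhost:9200, where the bulk
# thread pool queue defaults to a bounded size of 50.
# Raise the bulk queue size dynamically via the cluster settings API:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "threadpool.bulk.queue_size": 500
  }
}'
```

A larger queue only buys headroom; if the rivers keep outrunning the bulk thread pool, throttling the clients is the more durable fix.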
Is this with the JDBC river? If so, you can increase the number of bulk actions
per request to decrease the number of concurrent bulk actions, which remedies
situations where many rivers are active.
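(For illustration, a hypothetical river definition along those lines. The parameter names max_bulk_actions and max_concurrent_bulk_requests come from the elasticsearch-river-jdbc plugin; the JDBC URL, credentials, SQL, and values here are placeholders, not recommendations:)

```shell
# Hypothetical JDBC river config: batch more actions per bulk request
# and cap how many bulk requests may be in flight at once, so a single
# river cannot flood the bulk thread pool queue.
curl -XPUT 'http://localhost:9200/_river/my_jdbc_river/_meta' -d '{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:mysql://localhost:3306/mydb",
    "user": "dbuser",
    "password": "dbpass",
    "sql": "select * from my_table",
    "max_bulk_actions": 10000,
    "max_concurrent_bulk_requests": 1
  }
}'
```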
Yes, it happened with the JDBC river. Thanks for the clarification.
On Monday, February 23, 2015 at 17:16:09 UTC+1, Jörg Prante wrote:
Is this with the JDBC river? If so, you can increase the number of bulk
actions per request to decrease the number of concurrent bulk actions, which
remedies situations where many rivers are active.