What is bulk.rejected?

I found this URL but I don't understand bulk.rejected information:

URL: localhost:9200/_cat/thread_pool?v&h=name,host,bulk.active,bulk.rejected,bulk.completed,bulk.queue,bulk.queueSize

What is this and how can I resolve this possible problem?

If the queue fills up to its limit, new work units will begin to be rejected, and you will see that reflected in the rejected statistic.

Did I lose documents?

Potentially, if your application wasn't built to handle rejections coming back from the bulk indexer. See the "Bulk Rejections" section in the book: https://www.elastic.co/guide/en/elasticsearch/guide/current/_monitoring_individual_nodes.html

Basically, your app will receive bulk rejection exceptions when Elasticsearch is at capacity and the queue is full. This isn't technically an error... it just means "try again later". The way to handle these is to collect the rejected documents, wait 1-5 seconds, and retry indexing them.
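As a rough sketch, the retry loop described above might look like this. `send_bulk` here is a stand-in for whatever bulk client call your app uses (it's an assumption, not a real API); it's assumed to return one response item per document, in order, with rejected items carrying HTTP status 429:

```python
import time

def send_bulk_with_retry(send_bulk, docs, max_retries=3, backoff_s=2.0):
    """Index docs, collecting and retrying any items rejected with 429.

    `send_bulk` is a hypothetical callable: it takes a list of docs and
    returns a list of per-item response dicts (same order), where a
    rejected item has {"status": 429}.
    """
    pending = list(docs)
    for attempt in range(max_retries + 1):
        if not pending:
            break
        items = send_bulk(pending)
        # Keep only the rejected docs; everything else succeeded
        # (or failed with a real error you should handle separately).
        pending = [doc for doc, item in zip(pending, items)
                   if item.get("status") == 429]
        if pending and attempt < max_retries:
            time.sleep(backoff_s)  # wait a few seconds, per the advice above
    return pending  # docs still rejected after all retries -- don't drop these
```

If this function returns a non-empty list, those documents were never indexed, so the caller must persist them somewhere rather than dropping them on the floor.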

If your app wasn't doing that...your application may have dropped some documents on the floor :confused:


And is there a way to configure Logstash 1.4.2 to do this?

Unfortunately I use this old version of Logstash... because I changed some plugins to correct some bugs.

It looks like Logstash retries rejected docs up to three times, based on this PR: https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/2

I'm not sure how recent that is, or whether your version includes it. I'd ask in the Logstash section of the forum for more info (I'm afraid I don't know a ton about Logstash).

Does bulk.rejected represent the number of lost documents or the number of lost bulks?

bulk.rejected is the number of bulk requests that were rejected, not the number of documents. Five bulks may be rejected, each containing 1000 documents, and the counter would just read 5.
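If you want to watch that counter programmatically, the `_cat/thread_pool?v` endpoint from the original question returns whitespace-aligned columns with a header row, which is easy to parse. A minimal sketch (the sample text below mirrors the columns requested in the URL above; fetching it over HTTP is left to the caller):

```python
def parse_cat_thread_pool(text):
    """Parse verbose _cat/thread_pool output (header row + data rows)
    into a list of dicts keyed by the header column names."""
    lines = [line for line in text.splitlines() if line.strip()]
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:]]
```

You could poll this periodically and alert whenever `bulk.rejected` grows between samples, since it is a monotonically increasing counter per node.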
