Bulk index error during ingestion [Challenge]

Hello,

My team and I are running a custom plugin that we wrote this year. This Elasticsearch instance runs on two nodes. Our big problem is that the machines we are using are slow (2 * Intel Core i3, 1.3 GHz), so optimizing our solution is a challenge.

During ingestion, we sometimes get this message in the logs:

[2018-05-15T14:25:29,342][ERROR][o.e.a.b.TransportBulkAction] [grlaaa01] failed to execute pipeline for a bulk request
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of org.elasticsearch.ingest.PipelineExecutionService$2@3ea8b251 on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@a7ebdbd[Running, pool size = 4, active threads = 4, queued tasks = 200, completed tasks = 92298]]
at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:50) ~[elasticsearch-5.6.7.jar:5.6.7]
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) ~[?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) ~[?:1.8.0_131]
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.doExecute(EsThreadPoolExecutor.java:94) ~[elasticsearch-5.6.7.jar:5.6.7]
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:89) ~[elasticsearch-5.6.7.jar:5.6.7]
at org.elasticsearch.ingest.PipelineExecutionService.executeBulkRequest(PipelineExecutionService.java:74) ~[elasticsearch-5.6.7.jar:5.6.7]
at org.elasticsearch.action.bulk.TransportBulkAction.processBulkIndexIngestRequest(TransportBulkAction.java:508) ~[elasticsearch-5.6.7.jar:5.6.7]
at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:136) ~[elasticsearch-5.6.7.jar:5.6.7]
at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:85) ~[elasticsearch-5.6.7.jar:5.6.7]
...

This message appears after quite a lot of entries (I believe 200, judging from the queue capacity in the error message). But does it mean that the last 200 entries are lost, entirely or partly?
According to the blog post "Why am I seeing bulk rejections in my Elasticsearch cluster? | Elastic Blog", they are not, but can we be completely sure that no data is lost?
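
From what I understand, each rejected entry comes back in the bulk response as a per-item 429 error, so nothing is lost silently as long as the client checks the response and retries. This is roughly what we are considering on our side; a minimal sketch with the Python client (elasticsearch-py 5.x), where the hosts, the pipeline id my-pipeline and the generated documents are placeholders:

from elasticsearch import Elasticsearch, helpers

# Hosts are placeholders; grlaaa01 comes from the log above.
es = Elasticsearch(["grlaaa01:9200", "grlaaa02:9200"])

def actions(docs):
    for doc in docs:
        yield {
            "_index": "mdm",
            "_type": "doc",             # a mapping type is still required in 5.x
            "pipeline": "my-pipeline",  # placeholder pipeline id
            "_source": doc,
        }

docs = ({"value": i} for i in range(100000))  # stand-in data
for ok, item in helpers.streaming_bulk(
        es, actions(docs),
        chunk_size=500,      # smaller bulks put less pressure on the queue
        max_retries=5,       # re-send items rejected with a 429
        initial_backoff=2,   # seconds, doubled on each retry
        raise_on_error=False):
    if not ok:
        print("permanently failed item:", item)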

By the way, the ingestion is handled by one node only; is that normal?
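
I suspect this is because our client only points at one host: in 5.x every node is ingest-capable by default, and the pipeline runs on whichever node receives the bulk request, so sending everything to a single node concentrates the ingest work there. A sketch of what we could try (the second hostname is a guess), plus a check of the node roles:

from elasticsearch import Elasticsearch

# Giving the client both hosts round-robins requests, spreading the
# coordinating/ingest work. The second hostname is a guess.
es = Elasticsearch(["grlaaa01:9200", "grlaaa02:9200"])

# In the node.role column, "i" means the node has the ingest role
# (e.g. "mdi" = master-eligible + data + ingest).
print(es.cat.nodes(v=True, h="name,node.role"))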

Do you have any ideas on how to optimize the ingestion without touching the plugin?
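
On our side, we are thinking about watching the bulk thread pool while the ingestion runs, and shrinking the bulk size and number of concurrent clients until the rejected counter stops growing. Something like this sketch (same hypothetical client setup as above):

import time
from elasticsearch import Elasticsearch

es = Elasticsearch(["grlaaa01:9200"])

# Poll bulk thread-pool pressure: a queue pinned near 200 plus a growing
# "rejected" count means the clients send faster than the node can index,
# so throttle on the client side rather than raising the queue size.
while True:
    print(es.cat.thread_pool(
        thread_pool_patterns="bulk",
        v=True, h="node_name,name,active,queue,rejected"))
    time.sleep(5)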

Best regards,

How many shards are you actively indexing into?

We are using 2 shards and 0 replicas.

PUT mdm
{
  "settings" : {
    "index" : {
      "number_of_shards" : 2, 
      "number_of_replicas" : 0 
    }
  }
}
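
With two primaries and no replicas, each node should hold one shard so both machines share the indexing work. A quick way to confirm that (same hypothetical Python client as above):

from elasticsearch import Elasticsearch

es = Elasticsearch(["grlaaa01:9200"])

# "prirep" shows p for primary; we expect one primary per node.
print(es.cat.shards(index="mdm", v=True, h="index,shard,prirep,state,node"))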
