Unable to create new index because of Logstash and Elasticsearch error

Hi Team,
It's been a few days since I created my last index. Today, while trying to create a new index for an Apache access log, I am getting the error below when executing my logstash.conf file. I am working in development mode, with all components (Filebeat, Logstash, Elasticsearch, and Kibana) installed on the same server, on version 6.2.4.

Error Details:

[2018-08-01T02:22:25,781][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$7@
352190c1 on EsThreadPoolExecutor[name = mskWCFK/bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@403b1581[Running, pool size = 4, active threads = 4, queued tasks = 200, completed tasks = 595180]]
"})

From the Elasticsearch logs:

[2018-08-01T01:23:02,138][INFO ][o.e.x.m.MlDailyMaintenanceService] Successfully completed [ML] maintenance tasks
[2018-08-01T01:54:02,104][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [.monitoring-logstash-6-2018.08.01] creating index, cause [auto(bulk api)], templates [.monitoring-logstash], shards [1]/[0], mappings [doc]
[2018-08-01T02:22:22,640][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.06.11] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2018-08-01T02:22:22,707][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.05.20] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2018-08-01T02:22:22,725][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.06.10] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2018-08-01T02:22:22,759][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.05.21] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2018-08-01T02:22:22,768][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.06.13] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]

The error message is telling the client (Logstash in this case) that Elasticsearch is already at capacity: per the message above, all four bulk threads are busy and the 200-slot bulk queue is full, so it cannot keep up with new indexing requests.
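If you want to watch this while it happens, the `_cat` thread-pool API shows the live queue and rejection counts (assuming Elasticsearch is reachable on its default port; on 6.2 the relevant pool is still named `bulk`):

```
GET _cat/thread_pool/bulk?v&h=name,active,queue,queue_size,rejected
```

A steadily growing `rejected` count while Logstash is running confirms the back-pressure.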

Logstash is going to retry this later, but there is no guarantee that the Elasticsearch cluster won't still be overloaded then. If this is a constant issue, it might make sense to increase capacity by adding new or faster nodes.
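If adding hardware is not an option, you can also reduce the pressure Logstash applies with each bulk request. A sketch of `logstash.yml` settings (the values are illustrative starting points, not recommendations for your workload):

```yaml
# logstash.yml -- throttle how hard Logstash pushes into Elasticsearch
pipeline.workers: 2       # fewer workers => fewer concurrent bulk requests
pipeline.batch.size: 125  # events per batch (125 is the default; lower it to shrink each bulk request)
pipeline.batch.delay: 50  # ms to wait for a full batch before flushing
```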

This might be interesting for you: https://www.elastic.co/guide/en/logstash-versioned-plugins/versioned_plugin_docs/v9.2.0-plugins-outputs-elasticsearch.html#_retry_policy

Hi,
I can't believe this issue occurred because of resource scarcity. The day before, I tried creating a simple index for the Apache logs with some new changes in the filters of my Logstash conf file; that is when I got the error.

As you can see, it started creating the indices in bulk, and Kibana started showing new logstash indices matching my changes, listing almost twenty indices covering dates across the month. If this were a resource-scarcity issue, it should not have created those additional indices in the first place.

[2018-08-01T02:22:22,640][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.06.11] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2018-08-01T02:22:22,707][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.05.20] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2018-08-01T02:22:22,725][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.06.10] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2018-08-01T02:22:22,759][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.05.21] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2018-08-01T02:22:22,768][INFO ][o.e.c.m.MetaDataCreateIndexService] [mskWCFK] [logstash-2018.06.13] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]

Additional information:

I can see that indices are being created for each date timestamp present in the message field; around 35 indices are getting created while ingesting the data now. After spotting and deleting them, the cluster state is back to green. Could this problem be due to an incorrect date filter?
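That is very likely it. The "one index per date" pattern is exactly what happens when a date filter copies the timestamp parsed from each log line into @timestamp while the output keeps the default daily index name: logstash-%{+YYYY.MM.dd} is resolved per event, so every distinct day found in the ingested file creates its own index (each with 5 primary and 1 replica shards under the default template), and that burst of index creations plus parallel bulk writes can itself produce 429 rejections on a single node. A minimal sketch of the interplay, with illustrative grok patterns and field names rather than your actual config:

```
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # Apache access-log timestamp, e.g. 11/Jun/2018:02:22:22 -0500
    match  => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # resolved per event, e.g. logstash-2018.06.11, logstash-2018.05.20, ...
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

Check your date filter's match pattern against the actual timestamp format in your logs: if it cannot parse, events fall back to ingest time and get a _dateparsefailure tag, while a successful parse of old log lines will fan events out across historical daily indices, as seen in the logs above.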

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.