Elasticsearch can't process data

I have configured a persistent queue, and the queue is now full. Suddenly I started receiving this error in the log on my Logstash server:

[2018-05-27T08:46:23,536][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>413, :url=>"http://elasticsearch_address:9200/_bulk"}
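
As far as I understand, HTTP 413 means the _bulk request body was bigger than what Elasticsearch accepts (http.max_content_length, 100mb by default). If that is the cause, I assume the options are raising that limit or making Logstash send smaller batches, roughly like this (values are only examples):

    # elasticsearch.yml -- raise the cap on HTTP request bodies
    # (static setting, so the node needs a restart afterwards)
    http.max_content_length: 200mb

    # logstash.yml -- or send smaller bulk requests from Logstash
    # (default batch size is 125 events per worker)
    pipeline.batch.size: 50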

On my Elasticsearch master:

[2018-05-27T08:24:27,490][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [elasticsearch-master] failed to put mappings on indices [[[index_name]]], type [doc]
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [index_name] has been exceeded

And on my Elasticsearch data node:

[2018-05-27T08:53:43,105][DEBUG][o.e.a.b.TransportShardBulkAction] [index_name2][3] failed to execute bulk item (index) BulkShardRequest [[index_name2][3]] containing [2] requests
    org.elasticsearch.index.mapper.MapperParsingException: failed to parse [level]

There are more indices involved, so it looks like the full queue is caused by multiple problems; that's why I am showing both index_name and index_name2.
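
For the failed to parse [level] error, my guess is that the incoming value of level no longer matches the type already in the mapping. To check what the field is currently mapped as, I believe the field mapping API can be used, something like this (host and names taken from the logs above):

    curl -XGET 'http://elasticsearch_address:9200/index_name2/_mapping/doc/field/level?pretty'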

The first problem should be solvable by raising index.mapping.total_fields.limit to a bigger number, but when I change it to 5000, I am still receiving:

[2018-05-27T08:45:13,027][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-master] [index_name/wFRH8zgMRf-X77TYvL-kYA] update_mapping [doc]

It looks like the setting can't be changed now, because Elasticsearch keeps printing this log.
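
For reference, this is roughly how I applied the new limit (the host is a placeholder, as above):

    curl -XPUT 'http://elasticsearch_address:9200/index_name/_settings' \
      -H 'Content-Type: application/json' \
      -d '{"index.mapping.total_fields.limit": 5000}'

If I understand correctly, this only applies to the one index it is set on, so new (e.g. daily) indices would need the same setting in an index template as well.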

Why is my data node not able to process the bulk request?

I thought that events which don't fit the mapping would be moved to the DLQ, and that when the DLQ is full the oldest events would be dropped. But instead my persistent queue is full. How can I debug and solve this issue, and where exactly is the problem?
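
For context, enabling the DLQ in logstash.yml should look roughly like this, as far as I know (path and size are just examples):

    # logstash.yml -- dead letter queue settings
    dead_letter_queue.enable: true
    path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue"
    dead_letter_queue.max_bytes: 1024mb

From the docs, my understanding is that the elasticsearch output only sends events to the DLQ on responses it treats as non-retryable (HTTP 400 and 404), while the 413 above is retried with backoff, which might explain why the persistent queue fills up instead. Is that right?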
