Limit of total fields [num] in index [logstash-XX] has been exceeded

Little background story first: we had an ELK 2.6 stack, which we decided to upgrade to 6.5. After creating the new stack, we used the reindex API to copy our old Logstash indices from the old stack to the new one, skipping the documents that threw an error. After some time, we started getting errors in our Logstash logs pointing to a limit on the number of fields. We increased the limit in the Logstash index template and everything started working fine. But after a few weeks the same errors came back, this time exceeding the new limit.
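For reference, raising the limit looked roughly like this (a minimal sketch, not our exact template: the 3000 value and template name are illustrative, and it assumes Python with the requests library against a default localhost:9200):

```python
# Sketch: raise the total-fields limit for new logstash-* indices via a
# small extra index template, and for an existing index via its settings.
# Endpoint, template name and the 3000 limit are illustrative assumptions.
import requests

ES = "http://localhost:9200"

# Future daily indices: an extra template merged on top of the default
# logstash template (higher "order" wins on conflicting settings in ES 6.x).
requests.put(f"{ES}/_template/logstash-field-limit", json={
    "index_patterns": ["logstash-*"],
    "order": 1,
    "settings": {"index.mapping.total_fields.limit": 3000},
}).raise_for_status()

# An existing index: the setting is dynamic, so it can be changed in place.
requests.put(f"{ES}/logstash-2019.10.30/_settings", json={
    "index.mapping.total_fields.limit": 3000,
}).raise_for_status()
```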
Could this be related to changes to the pipeline.batch.size and pipeline.batch.delay parameters in logstash.yml? (We did some tuning of our Logstash processes.)
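For context, the tuning was along these lines in logstash.yml (illustrative values, not our exact ones):

```yaml
# logstash.yml pipeline tuning (illustrative values)
pipeline.batch.size: 250  # max events a worker batches before running filters/outputs (default 125)
pipeline.batch.delay: 50  # ms to wait for more events before flushing an undersized batch
```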

No, those would not result in additional fields being created.

I know, and maybe it was a coincidence, but the errors started right after I changed those values.
Here are some of the errors I'm getting:

[2019-10-30T14:13:19,567][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2019.10.30", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x51c46abb>], :response=>{"index"=>{"_index"=>"logstash-2019.10.30", "_type"=>"doc", "_id"=>"XnEDHW4BAwjYVExVfrLz", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Limit of total fields [3000] in index [logstash-2019.10.30] has been exceeded"}}}}

Do you really have spaces in your field names?

In Kibana you can see a list of the field names in the index. If I recall correctly, it is under Index Management. Does that list look reasonable to you?
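If you would rather use the API, the field names are also visible in the index mapping. A minimal sketch (assuming Python with the requests library and a default localhost:9200; the index name is taken from your error):

```python
# Sketch: print the top-level field names of an index mapping
# (ES 6.x layout, where each index has a single type such as "doc").
import requests

resp = requests.get("http://localhost:9200/logstash-2019.10.30/_mapping")
resp.raise_for_status()
for index, body in resp.json().items():
    for doc_type, mapping in body["mappings"].items():
        print(index, doc_type, sorted(mapping.get("properties", {})))
```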

Not really, that's related to a mapping error I'm also fixing (I removed it from the error I pasted earlier).
Is there a way to find the document that supposedly has more than X fields (3000 in this case)?

It is not saying that any single document has that many fields. It is saying that there are 3000 members in the set of fields that occur across all documents that have been indexed. It could be 3000 documents, each of which has a unique field name. That's why I am suggesting you review the list in Kibana.
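To see how that set grows, you can count the field names straight from the mapping. A rough sketch (again assuming Python with requests against localhost:9200; the recursive walk over properties and multi-fields approximates what the limit checks):

```python
# Sketch: approximate the total-fields count per logstash-* index by walking
# the mapping recursively (object subfields and multi-fields such as
# .keyword also count towards index.mapping.total_fields.limit).
import requests

def count_fields(properties):
    total = 0
    for field in properties.values():
        total += 1
        total += count_fields(field.get("properties", {}))  # object/nested subfields
        total += len(field.get("fields", {}))                # multi-fields
    return total

resp = requests.get("http://localhost:9200/logstash-*/_mapping")
resp.raise_for_status()
for index, body in sorted(resp.json().items()):
    for doc_type, mapping in body["mappings"].items():
        print(index, count_fields(mapping.get("properties", {})))
```

Comparing the per-index counts across your daily logstash-YYYY.MM.DD indices should show whether the field set is growing day by day.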

Since all our apps report to that ELK stack, I checked the number of fields in the Logstash indices and saw that it increased over time. I still need to find out why, but that seems to be the cause of this particular problem.
Thanks!
