Handling failures in ES output plugin

I am getting this message:
[2018-10-30T18:53:51,945][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"sunshine-txlog-2018.10.30", :_type=>"folder", :_routing=>nil}, #<LogStash::Event:0x4056f86f>], :response=>{"index"=>{"_index"=>"sunshine-txlog-2018.10.30", "_type"=>"folder", "_id"=>"AWbGUyyOzlZ3V1ugVeYw", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Limit of total fields [1000] in index [sunshine-txlog-2018.10.30] has been exceeded"}}}}.
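That 400 means the index has hit Elasticsearch's `index.mapping.total_fields.limit` setting, which defaults to 1000: every unique field path that dynamic mapping creates from your documents counts toward the limit (the limit can be raised per index, but a rising field count usually points at keys being generated dynamically). As a rough illustration of how fields are counted, here is a small Python sketch (not part of Logstash; the sample event and its keys are made up) that flattens an event into the dotted paths dynamic mapping would see:

```python
def field_paths(obj, prefix=""):
    """Recursively collect the dotted field paths of a JSON-like event.

    Approximates how many mapping fields a document contributes: every
    unique path Elasticsearch sees through dynamic mapping counts toward
    index.mapping.total_fields.limit (default 1000).
    """
    paths = set()
    if isinstance(obj, dict):
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            paths.add(path)
            paths |= field_paths(value, path)
    elif isinstance(obj, list):
        # List elements share their parent's path; they add no new fields.
        for item in obj:
            paths |= field_paths(item, prefix)
    return paths

# Hypothetical event shaped like a txlog document.
event = {
    "message": "tx ok",
    "folder": {"id": 7, "meta": {"owner": "svc", "tags": ["a", "b"]}},
}
print(sorted(field_paths(event)))
print(len(field_paths(event)))  # 6 unique field paths
```

If events carry keys derived from IDs or timestamps, each new value mints a brand-new path here, which is exactly how an index creeps toward the 1000-field ceiling.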

I am pretty new to Logstash and ES, so I was wondering if there is a way to create a metric showing how many unique fields are being sent to ES at a time. I also think fields with bad keys are getting through to ES. Is there a way to catch those events and put them in a separate index?
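Two directions that are commonly suggested for this: Logstash can write events the Elasticsearch output fails to index with a 400/404 response to its dead letter queue (enabled via `dead_letter_queue.enable: true` in `logstash.yml`), where they can be inspected or re-ingested separately; or the routing can happen before the output, by counting an event's fields and sending oversized events to a quarantine index. The sketch below shows the second idea in plain Python (in Logstash this would be a `ruby` filter plus a conditional output); the 900 threshold and the `-overflow` suffix are illustrative choices, not anything Logstash defines:

```python
def count_fields(obj):
    """Count unique field paths in a JSON-like event, the way
    Elasticsearch dynamic mapping would see them."""
    if not isinstance(obj, dict):
        return 0
    total = 0
    for value in obj.values():
        total += 1 + count_fields(value)  # the key itself, plus nested keys
    return total

def choose_index(event, base_index="sunshine-txlog-2018.10.30", max_fields=900):
    """Pick a destination index: the normal one, or an overflow index
    when the field count approaches index.mapping.total_fields.limit."""
    if count_fields(event) > max_fields:
        return base_index + "-overflow"
    return base_index

print(choose_index({"message": "ok"}))          # normal index
wide = {f"key_{i}": i for i in range(1200)}      # simulated mapping explosion
print(choose_index(wide))                        # routed to "-overflow"
```

Exporting `count_fields` per event as a metric (e.g. into a dedicated field) would also answer the first question, since it can then be aggregated in Kibana to watch the unique-field trend over time.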

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.