Handling Elasticsearch mapping errors when using the Logstash Elasticsearch output plugin


We have built and deployed an ELK solution for one of our partners using Elastic Cloud and a Logstash cluster. The logs come from many different apps developed by different teams, and there have been instances where the data types of a field do not match the index mapping. How can we deal with this problem?

I saw a few threads/issues in the Logstash GitHub repo where people had similar problems. There were also discussions about adding a dead letter queue, but it has not been released yet. Is it possible to configure a queue or another ES index where such 400 errors can be logged? Currently the events are partly read from SQS and the source logs get deleted, so rejected documents are lost.

Any suggestions/ideas for dealing with this problem, other than extending the Elasticsearch output plugin ourselves?
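
For reference, once the dead letter queue feature ships, the setup is expected to look roughly like the sketch below: the Elasticsearch output writes events rejected with a mapping (400/404) error to an on-disk DLQ, and a second pipeline reads them back and indexes them into a separate index. The paths and the `logstash-dlq-*` index name here are placeholders, not anything from the original thread.

```
# logstash.yml — enable the dead letter queue so the elasticsearch
# output persists rejected events instead of dropping them
dead_letter_queue.enable: true

# dlq-pipeline.conf — a second pipeline that consumes the DLQ
# and writes the failed events to a dedicated index for inspection
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"  # must match path.dead_letter_queue
    commit_offsets => true                         # remember what has been read
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-dlq-%{+YYYY.MM.dd}"         # placeholder index name
  }
}
```

Until that feature lands, an interim option is a separate pipeline per source with stricter type coercion (e.g. a `mutate { convert => ... }` filter) so conflicting fields are normalized before they reach Elasticsearch.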


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.