I am trying to understand whether Logstash has a way to handle failures that occur when writing to Elasticsearch.
I know that Logstash has a retry policy for certain types of exceptions, but the failures I want to handle are permanent ones, such as ES rejecting an event outright (for example, because the index field contains an uppercase character, or because the target index doesn't exist in ES).
In such cases the document will never be accepted by ES.
Is there any way to monitor for this condition, or potentially use a different output for events that were rejected by ES?
The failure handling I know of only helps to identify events that fail to parse in Grok, not events that are rejected by ES itself.
Consider an output configuration along the following lines (a minimal sketch; the hosts value is a placeholder, and the index and document ID are taken from event fields):
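output {
  elasticsearch {
    # placeholder host; index and document_id are filled in
    # from fields on the event itself
    hosts       => ["localhost:9200"]
    index       => "%{index}"
    document_id => "%{doc_id}"
  }
}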
Now the following event arrives:
{
  "doc_id": 12345,
  "index": "MYINDEX",
  "value": "some_string"
}
This is a perfectly valid event from Logstash's perspective; it parses just fine in Grok. But ES is going to reject it because the index name contains uppercase characters, and there doesn't seem to be any way to detect this type of failure.
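For this particular failure mode I could defensively lowercase the field before the output stage with a standard mutate filter, something like:

filter {
  # work around this one known rejection cause by forcing the
  # index field to lowercase before it reaches the output
  mutate {
    lowercase => [ "index" ]
  }
}

But that only papers over one known cause; it doesn't give me a general way to detect documents that ES permanently rejects.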