Hello, I am running into an issue where JSON logs I send to Logstash sometimes fail to parse. The parse failures are just me sending bad data, and Logstash does log that it failed:
"error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [app_data]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:37"}}}}}
What I am hoping to learn is how I can ship THIS log from Logstash off to Elasticsearch, so I can see it in Kibana and alert on my logs not making it over. I tried to Google this, but as you can imagine, "logstash logs to elasticsearch" just sends me to "how to Logstash" articles. Thanks!!
You might be able to use a dead letter queue (DLQ). Also, the Elasticsearch log probably has a more informative error message about what it didn't like about the document.
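For what it's worth, enabling it is a one-line change in logstash.yml, and the dead_letter_queue input plugin can read the queue back out and index it, so the failures show up in Kibana. A minimal, untested sketch; the paths and the index name are assumptions for a default install, and note the DLQ only captures events the Elasticsearch output rejects (mapping errors like yours), not every kind of pipeline failure:

```
# logstash.yml
dead_letter_queue.enable: true
# entries land under <path.data>/dead_letter_queue by default
```

```
# dlq-pipeline.conf -- reads rejected events back out and indexes them
input {
  dead_letter_queue {
    path => "/usr/share/logstash/data/dead_letter_queue"  # assumed default path.data
    commit_offsets => true   # remember position across restarts
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-dlq-%{+YYYY.MM.dd}"  # index name is my own choice
  }
}
```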
Thanks for the DLQ tip! I will definitely get that implemented. However, I am still interested in ingesting the fact that a parse failed. Am I thinking about this wrong? Should I not try to ingest Logstash's own errors? I can usually see why an event failed just by looking at the log it attempted to ingest, but I want to know that it failed in the first place.
I don't want to have to check anywhere besides Kibana to learn that I have a problem to fix. Presently, I am having to tell my dev team, "Sorry you don't see the logs you thought you would; I will check the raw log for you on the Logstash server, maybe there are more, let me just SSH in real quick." I would prefer instead to say, "Sorry to see we are getting notified of bad parses; I will get right on it."
Guess I was just hoping there was a Logstash config like "stash_own_logs: 1". I will see if I can redirect the rsyslog-captured Logstash errors back into Logstash. Thanks for your time and advice!!
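For anyone who lands here later, here is the rough shape of what I am going to try: a second pipeline that tails Logstash's own log file and keeps only warnings and errors. This is a minimal, untested sketch; the log path assumes the default plain-text appender, the sincedb path and index name are my own choices, and it should run as its own entry in pipelines.yml so a problem in the main pipeline can't take the monitoring down with it:

```
# self-logs.conf -- tail Logstash's own log and index WARN/ERROR lines
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/sincedb-selflog"  # hypothetical location
  }
}
filter {
  # drop INFO/DEBUG chatter so Logstash does not endlessly index itself
  if [message] !~ /\[(WARN|ERROR)/ {
    drop { }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-own-%{+YYYY.MM.dd}"
  }
}
```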