Filebeat pushes data to Kafka, and Logstash consumes from it. The problem is that Filebeat writes \u003c and \u003e instead of < and > in the message, which causes a _grokparsefailure in my Logstash. The encoding is set to UTF-8 (encoding: utf-8).
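For reference, the setup is roughly the following (the broker address, topic name, and grok pattern below are placeholders, not the real values):

    # filebeat.yml (Filebeat 5.x) -- placeholder hosts/topic
    output.kafka:
      hosts: ["kafka:9092"]
      topic: "logs"

    # Logstash pipeline consuming the same topic
    input {
      kafka {
        bootstrap_servers => "kafka:9092"
        topics => ["logs"]
        codec => json   # decodes the Filebeat JSON, which should turn \u003c back into <
      }
    }
    filter {
      grok {
        match => { "message" => "%{GREEDYDATA:payload}" }   # placeholder pattern, not the real one
      }
    }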
It seems unlikely that Logstash's JSON deserializer wouldn't translate \u003c to <. Please show your grok filter and what a failed event looks like. Use a stdout { codec => rubydebug } output and copy/paste its output.
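For example, a temporary debug output like this can be added alongside whatever outputs are already configured (a sketch, not your exact pipeline):

    output {
      stdout { codec => rubydebug }   # prints each event as a readable Ruby hash for inspection
    }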
The output message reads like text containing \u003c and \u003e instead of < and >. It was working fine with version 2, but I just switched to v5 and am testing on it.