Because what I get in Kibana is the entire unparsed message, with all the fields still inside it. Logstash says the config is fine and runs OK, but something goes wrong when mapping the data types, as I understand it. Not sure how else it should be done. Thanks for any advice or ideas. I'm using ELK 7.
You are saying that the grok filter parses [message] into those fields? I would not expect that unless you have set config.support_escapes. \t is not parsed as a tab in grok; use a literal tab character in the grok pattern (obviously, if you use an editor like vi, that means you cannot have the expandtab option enabled).
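As an illustrative sketch (the field names here are made up, since the original pattern is not shown), a grok match with a literal tab between the patterns might look like this, where the whitespace in the pattern is an actual tab character rather than the two characters `\t`:

```
filter {
  grok {
    # The separator between the two patterns below must be a real
    # tab character, not "\t" (grok does not interpret the escape
    # unless config.support_escapes is enabled).
    match => { "message" => "%{NUMBER:id}	%{WORD:name}" }
  }
}
```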
Not sure if this works or not (logstash can be surprisingly forgiving of using arrays where hashes are expected, but surprisingly unforgiving where duplicate options occur). I would write this as
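The config being referred to is not shown here, but as a hypothetical sketch of the usual pitfall, duplicate options in a single mutate are better combined into one hash so nothing is silently ignored (field names are illustrative):

```
filter {
  mutate {
    # One convert hash listing every field, rather than two
    # separate convert options in the same mutate block.
    convert => {
      "id" => "integer"
      "count" => "integer"
    }
  }
}
```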
That said, grok will produce a string by default for any pattern match, and you can adjust that by changing %{NUMBER:id} to %{NUMBER:id:int} and then remove the mutate filter.
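In other words, the type cast can be done inside the grok pattern itself, so the separate mutate becomes unnecessary (field names are again illustrative):

```
filter {
  grok {
    # ":int" makes grok emit id as an integer instead of a string,
    # so no mutate { convert => ... } is needed afterwards.
    match => { "message" => "%{NUMBER:id:int} %{WORD:name}" }
  }
}
```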
This pattern passes in Kibana grok debugger.
This time my question is: if I use a tab instead of a space in my simple data, how does the grok filter change to reflect the tabulation? Thanks.