Is there a way to write a grok pattern such that it will automatically detect fields and parse them into Elasticsearch? I am hoping to eliminate the need to constantly create grok patterns to recognize new log formats.
For example, all the fields take the form field_name = fieldvalue:
src_ip = 10.1.1.223 src_port = 8080
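For logs in this shape, the kv filter can pick up the key/value pairs without a dedicated grok pattern. A minimal sketch (assuming a recent Logstash where the kv filter supports `trim_key` and `trim_value`, which strip the spaces around the `=` in this format):

```
filter {
  kv {
    # field_split defaults to whitespace, value_split defaults to "="
    # trim the spaces surrounding "=" so keys/values come out clean
    trim_key => " "
    trim_value => " "
  }
}
```

With the sample line above, this would produce fields like `src_ip => "10.1.1.223"` and `src_port => "8080"` on the event.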
So why would Logstash contain a grok pattern for Juniper when the same can be achieved with the kv filter? I am trying to understand the rationale for why it is there.
Is there a way to modify the kv filter so that it can process key/value pairs in the form "mykey:myvalue" instead of the default "mykey=myvalue"?
So why would Logstash contain a grok pattern for Juniper when the same can be achieved with the kv filter?
I don't know.
Is there a way to modify the kv filter so that it can process key/value pairs in the form "mykey:myvalue" instead of the default "mykey=myvalue"?
Yes. Please explore the various configuration options listed in the kv filter documentation.
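For instance, the `value_split` option documented for the kv filter changes the character that separates a key from its value. A minimal sketch for the "mykey:myvalue" format:

```
filter {
  kv {
    # use ":" instead of the default "=" between key and value
    value_split => ":"
  }
}
```

An input such as `mykey:myvalue otherkey:othervalue` would then yield the fields `mykey` and `otherkey` on the event.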