I'm trying to set up a custom grok filter for my data input, but when I test it in Kibana's Grok Debugger, I only get the value for the first field (field1). I'm using grok instead of the csv filter because the last data field (after field7) is of varying length, and it should end up as a single field in Logstash (I'll do some post-processing on it afterwards).
I tried escaping the $ and | with a backslash, but if I run %{INT:field1}\$ \|\$%{INT:field2}\$
or %{INT:field1}\$ \$%{INT:field2}\$
on my input, I get a "Provided Grok patterns do not match data in the input" error.
I also get the same error if I try: %{INT:field1}\$\|\$ %{INT:field2}\$\|\$ %{WORD:field3}\$\|\$ %{DATA:field4}\$\|\$ %{DATA:field5}\$\|\$ %{TZ:field6}\$\|\$ %{GREEDYDATA:field7}\$\|\$ %{GREEDYDATA:theRestOfIt}
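For reference, this is roughly the filter block I'm aiming for in my pipeline config. It's only a minimal sketch: the match pattern is my last attempt above, and using "message" as the source field is an assumption on my part.

filter {
  grok {
    # Fields appear to be separated by "$|$ "; pattern copied from my last attempt above.
    # "message" as the source field is an assumption.
    match => {
      "message" => "%{INT:field1}\$\|\$ %{INT:field2}\$\|\$ %{WORD:field3}\$\|\$ %{DATA:field4}\$\|\$ %{DATA:field5}\$\|\$ %{TZ:field6}\$\|\$ %{GREEDYDATA:field7}\$\|\$ %{GREEDYDATA:theRestOfIt}"
    }
  }
}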