Filebeat grok processor with pipe delimiter not working


(Blutswende) #1

Hi all,

I'm writing a new module to parse our own log files.

Now I'm stuck on a problem with parsing the pipeline.json.

My pipeline.json looks like:

{
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{DATA:SAP_Order.BUKRS}\|%{DATA:SAP_Order.LIFNR}\|%{DATA:SAP_Order.LIFRE}\|%{DATA:SAP_Order.EBELN}\|%{DATA:SAP_Order.EBELP}\|%{DATA:SAP_Order.MATNR}\|%{DATA:SAP_Order.TXZ01}\|%{DATA:SAP_Order.IDNLF}\|%{DATA:SAP_Order.EAN11}\|%{DATA:SAP_Order.OREMG}\|%{DATA:SAP_Order.MEINS}\|%{DATA:SAP_Order.NETPR}\|%{DATA:SAP_Order.WAERS}\|%{DATA:SAP_Order.PEINH}\|%{DATA:SAP_Order.BPRME}\|%{DATA:SAP_Order.BPUMN}\|%{DATA:SAP_Order.MEINS2}\|%{DATA:SAP_Order.BPUMZ}\|%{DATA:SAP_Order.BPRME2}\|%{DATA:SAP_Order.WEBRE}"],
          "ignore_missing": true
        }
      }
    ]
  }

And the grok debugger is working with the sample data:

0001|CPD_A||4500001239|00020|000000000000000023|Abfluss-Frei|||1,000 |ST|10,00 |EUR|1 |ST|1 |ST|1 |ST||

But when I load my module in Filebeat, there is an error while parsing the pipeline.json: the way I escaped the pipe delimiter (\|) is not valid JSON.

It's valid JSON when I delete the backslash in front of the pipe, but then the Grok Debugger no longer works as expected.

Is there a way to use gsub with grok and Filebeat to replace the pipe with a semicolon?
How can I set up grok to use the pipe delimiter while keeping my pipeline.json valid?
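
For illustration, what I have in mind with gsub would look something like this (an untested sketch), placed in front of the grok processor so grok only ever sees semicolons:

```json
{
  "processors": [
    {
      "gsub": {
        "field": "message",
        "pattern": "\\|",
        "replacement": ";"
      }
    }
  ]
}
```

Here the JSON string `"\\|"` decodes to the regex `\|`, i.e. a literal pipe, so the file stays valid JSON.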

Best Regards
Florian


(Jaime Soriano) #2

Hi @blutswende,

Try to escape the pipe delimiter with a double backslash (\\|).
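
As a minimal sketch, the start of your pattern with the doubly escaped delimiter would read (shortened to the first two fields here):

```json
{
  "grok": {
    "field": "message",
    "patterns": ["%{DATA:SAP_Order.BUKRS}\\|%{DATA:SAP_Order.LIFNR}"]
  }
}
```

JSON decodes `\\|` to `\|`, which grok then treats as a literal pipe character.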


(Blutswende) #3

\\| doesn't work either.

With the double backslash it's a valid JSON string, but the grok processor does not work as expected.

This is the output of the Grok Debugger in Kibana:

{
  "SAP_Order": {
    "WEBRE": ""
  }
}

WEBRE is the last field in the row to parse.

Thanks


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.