Hi all,
I'm writing a new module to parse our own log files.
Now I'm stuck on a problem with parsing the pipeline.json.
My pipeline.json looks like this:
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{DATA:SAP_Order.BUKRS}\|%{DATA:SAP_Order.LIFNR}\|%{DATA:SAP_Order.LIFRE}\|%{DATA:SAP_Order.EBELN}\|%{DATA:SAP_Order.EBELP}\|%{DATA:SAP_Order.MATNR}\|%{DATA:SAP_Order.TXZ01}\|%{DATA:SAP_Order.IDNLF}\|%{DATA:SAP_Order.EAN11}\|%{DATA:SAP_Order.OREMG}\|%{DATA:SAP_Order.MEINS}\|%{DATA:SAP_Order.NETPR}\|%{DATA:SAP_Order.WAERS}\|%{DATA:SAP_Order.PEINH}\|%{DATA:SAP_Order.BPRME}\|%{DATA:SAP_Order.BPUMN}\|%{DATA:SAP_Order.MEINS2}\|%{DATA:SAP_Order.BPUMZ}\|%{DATA:SAP_Order.BPRME2}\|%{DATA:SAP_Order.WEBRE}"],
        "ignore_missing": true
      }
    }
  ]
}
The Grok Debugger works with this sample data:
0001|CPD_A||4500001239|00020|000000000000000023|Abfluss-Frei|||1,000 |ST|10,00 |EUR|1 |ST|1 |ST|1 |ST||
But when I load my module in Filebeat, there is an error while parsing the pipeline.json: the way I escaped the pipe delimiter (\|) is not valid JSON.
The file is valid JSON when I delete the backslash in front of the pipe, but then the Grok Debugger no longer works as expected.
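To illustrate the conflict (a minimal sketch using Python's standard `json` module, not Filebeat itself): `\|` is not a recognized escape sequence in a JSON string, so a strict JSON parser rejects it, while a doubled backslash (`\\|`) is valid JSON and decodes to the single `\|` that a regex engine expects.

```python
import json

# The raw file text contains \| inside a JSON string.
# \| is not a legal JSON escape, so parsing fails.
try:
    json.loads('{"pattern": "%{DATA:field}\\|"}')
    print("parsed")
except json.JSONDecodeError as e:
    print("invalid JSON:", e.msg)

# Doubling the backslash in the file (\\|) is valid JSON;
# the decoded string then contains the single \| that the
# regex/grok engine sees.
parsed = json.loads('{"pattern": "%{DATA:field}\\\\|"}')
print(parsed["pattern"])  # decoded value ends in backslash + pipe
```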
Is there a way to use gsub with grok in Filebeat to replace the pipe with a semicolon?
Or how can I set up grok to use the pipe delimiter while keeping my pipeline.json valid?
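For comparison, a sketch of the same pipeline with each backslash escaped once more (`\\|`), which both the JSON parser and grok accept. The pattern is abbreviated here to the first three fields; `%{GREEDYDATA:rest}` is only a placeholder for the remaining fields, not part of the original pattern.

```json
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{DATA:SAP_Order.BUKRS}\\|%{DATA:SAP_Order.LIFNR}\\|%{DATA:SAP_Order.LIFRE}\\|%{GREEDYDATA:rest}"],
        "ignore_missing": true
      }
    }
  ]
}
```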
Best Regards
Florian