Grok pattern works but not applied on Kibana

Hello, I have an issue I've been trying to fix for a couple of days, without success.

I need to grok the message field (the messages don't all share the same syntax) that I see in Kibana, but it fails.

I created my grok filter, which works in the Grok Debugger:
%{HOSTNAME:hote}_%{DATA:val} %{WORD:process}\[%{NUMBER:process_pid}\]: %{DATA:msg}: %{WORD:protocole} %{WORD:peer} %{IP:client} \(%{DATA:AS}%{NUMBER:AS_nb}\) %{DATA:etat} \(%{DATA:evenement}\) \(%{DATA:instance}\)
for this type of message:
ig1-edge-dc3-01_re0 rpd[1524]: RPD_BGP_NEIGHBOR_STATE_CHANGED: BGP peer 158.58.176.35 (External AS 200271) changed state from EstabSync to Established (event RsyncAck) (instance master)

I created a pipeline for the grok; it was accepted without any error, but it doesn't parse the message.

Do you have any idea? Thanks a lot.

Hi @Cuju,

Just for clarification: is this a problem with Kibana, or with a Logstash pipeline not working correctly? If Logstash is the problem here, please post this question in the Logstash forum.

Hi,

The issue is with Kibana; I don't have Logstash. I use Elasticsearch, Kibana and Filebeat.

I created a pipeline for the grok; it was accepted without any error, but it doesn't parse the message.

Where and how are you creating your pipeline?

At first, I tried to create a pipeline:

PUT /_ingest/pipeline/filetest
{
  "description": "Pipeline for parsing Syslog messages.",
  "processors": [
    {
      "grok": {
        "ignore_missing": true,
        "field": "message",
        "patterns": [
          "%{HOSTNAME:hote}_%{DATA:val} %{WORD:process}\\[%{NUMBER:process_pid}\\]: %{DATA:msg}: %{WORD:protocole} %{WORD:peer} %{IP:client} \\(%{DATA:AS}%{NUMBER:AS_nb}\\) %{DATA:etat} \\(%{DATA:evenement}\\) \\(%{DATA:instance}\\)"
        ]
      }
    }
  ]
}
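
A quick way to check the pipeline on its own is the simulate API, which runs it against a sample document without indexing anything. This sketch uses the filetest id from above and the sample message from the original post:

```
POST /_ingest/pipeline/filetest/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "ig1-edge-dc3-01_re0 rpd[1524]: RPD_BGP_NEIGHBOR_STATE_CHANGED: BGP peer 158.58.176.35 (External AS 200271) changed state from EstabSync to Established (event RsyncAck) (instance master)"
      }
    }
  ]
}
```

If the response contains the extracted fields (hote, process, client, ...), the pipeline itself is fine and the problem is that documents are not being routed through it; if the grok fails, the error shows up directly in the response.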

but it didn't work, so I tried to add my grok to one of the pre-existing pipelines (the version below doesn't include my grok yet):

PUT /_ingest/pipeline/filebeat-7.5.2-system-syslog-pipeline
{
  "description": "Pipeline for parsing Syslog messages.",
  "processors": [
    {
      "grok": {
        "pattern_definitions": {
          "GREEDYMULTILINE": "(.|\n)*"
        },
        "ignore_missing": true,
        "field": "message",
        "patterns": [
          "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\\[%{POSINT:process.pid:long}\\])?: %{GREEDYMULTILINE:system.syslog.message}",
          "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}",
          "%{TIMESTAMP_ISO8601:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\\[%{POSINT:process.pid:long}\\])?: %{GREEDYMULTILINE:system.syslog.message}"
        ]
      }
    },
    {
      "rename": {
        "field": "system.syslog.message",
        "target_field": "message",
        "ignore_missing": true
      }
    },
    {
      "date": {
        "if": "ctx.event.timezone == null",
        "field": "system.syslog.timestamp",
        "target_field": "@timestamp",
        "formats": [
          "MMM  d HH:mm:ss",
          "MMM dd HH:mm:ss",
          "MMM d HH:mm:ss",
          "ISO8601"
        ],
        "on_failure": [
          {
            "append": {
              "field": "error.message",
              "value": "{{ _ingest.on_failure_message }}"
            }
          }
        ]
      }
    },
    {
      "date": {
        "if": "ctx.event.timezone != null",
        "field": "system.syslog.timestamp",
        "target_field": "@timestamp",
        "formats": [
          "MMM  d HH:mm:ss",
          "MMM dd HH:mm:ss",
          "MMM d HH:mm:ss",
          "ISO8601"
        ],
        "timezone": "{{ event.timezone }}",
        "on_failure": [
          {
            "append": {
              "field": "error.message",
              "value": "{{ _ingest.on_failure_message }}"
            }
          }
        ]
      }
    },
    {
      "remove": {
        "field": "system.syslog.timestamp"
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error.message",
        "value": "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}

and it also fails pitifully.

I think I have multiple issues to fix, but honestly it's all quite confusing at the moment.
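
One common cause worth checking (an assumption on my part, since the Filebeat configuration isn't shown here): a custom ingest pipeline is only applied if incoming documents are actually routed through it. With Filebeat this is typically done in filebeat.yml, for example:

```
output.elasticsearch:
  hosts: ["localhost:9200"]  # adjust to your Elasticsearch host
  pipeline: filetest         # send all events through the custom pipeline
```

Without such a setting (or an equivalent default_pipeline setting on the index), the pipeline exists in Elasticsearch but never runs.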

I see, this makes sense. You are creating these pipelines in Elasticsearch, so this isn't a Kibana-related problem. Please post your question (along with the requests you used to create your pipelines) in the Elasticsearch forum.

Isn't the Dev Tools Console Kibana-related?

Yes, but the console is not causing your problem; it's just submitting the request. It's most likely something in the pipeline configuration or how your Elasticsearch indices are set up.

I can see how this is confusing because in the end Kibana is always present, but the Kibana category is meant for

All things about visualizing data in Elasticsearch & Logstash, including how to use Kibana and extending the platform.

Think about it this way: if you submitted the requests posted above via curl in a bash terminal on your Linux machine (which you totally can) and the data wasn't ingested correctly, you wouldn't post in a curl forum, a bash forum, or a Linux forum - those tools were involved in the process, but they are not related to the problem you are having.

I hope this makes sense in terms of determining which forum to choose. If the data is in your Elasticsearch but Kibana won't display it, or your visualizations are not showing up, or a request works via curl but doesn't work when done in exactly the same way in the Dev Tools console - then Kibana is causing the problem and this is the right forum to ask for help.

The reason for the different categories is making it easier to find answers to questions already posted. If someone else is having a problem with ingest pipelines, they would head to the Elasticsearch forum, not the Kibana forum - and in this case I would be super happy if your question would show up, because it's worth being read again by someone with similar issues.

I recreated the post on the right channel. Thanks for the time and the explanation.

Link to the other post : Grok pattern works but not applied on Kibana (2nd)

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.