Grok filter in Logstash not working

Hi,

I am monitoring the CPU metric threshold of containers in our infrastructure using Elasticsearch, with the alert connector set to "Server Log", which writes to kibana.log by default.

I have created the following Logstash configuration file to intercept these messages from kibana.log.

input {
    file {
        type => "json"
        codec => "json"
        path => "/var/log/kibana/kibana.log"
        start_position => "beginning"
    }
}

filter {
    grok {
        match => [
            "message", "Server log: CP1 containers CPU metric threshold - %{WORD:container_id} is in a state of %{DATA:sev_level} %{DATA:container_message} for %{GREEDYDATA:logMessage}"
        ]
    }
}


output {
    elasticsearch {
        hosts => ["xxxxx"]
        index => "docker-logs-%{+xxxx.ww}"
    }
    stdout { codec => rubydebug }
}
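
For reference, to sanity-check the pattern on its own, I figure the grok filter could be isolated in a minimal test pipeline like the sketch below (stdin input, same match expression, rubydebug output; this is just a test harness, not my actual config), pasting a single kibana.log line into it:

input {
    stdin { }
}

filter {
    grok {
        match => [
            "message", "Server log: CP1 containers CPU metric threshold - %{WORD:container_id} is in a state of %{DATA:sev_level} %{DATA:container_message} for %{GREEDYDATA:logMessage}"
        ]
    }
}

output {
    stdout { codec => rubydebug }
}

That keeps the file input and the Elasticsearch output out of the picture while checking whether the pattern actually captures container_id, sev_level, and container_message.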

When I debug the full Logstash .conf file above, I see the events getting ingested as expected:

{
           "host" => {
        "name" => "infra-elasticvm-02"
    },
    "transaction" => {
        "id" => "177d25f28df3bd5c"
    },
           "type" => "json",
     "logMessage" => "Server log: CP1 containers CPU metric threshold - 52e5c413c5eba2b161ade239aa124d2041d5bdca76deb5c5260b6023ac608464 is in a state of ALERT CPU usage is 17% in the last 1 min for 52e5c413c5eba2b161ade239aa124d2041d5bdca76deb5c5260b6023ac608464. Alert when > 10%.;",
          "trace" => {
        "id" => "cf9f5500d60596ec76700cf3f8a53b56"
    },
     "@timestamp" => 2023-05-24T06:11:28.414Z,
       "@version" => "1",
            "log" => {
        "logger" => "plugins.actions",
         "level" => "ERROR",
          "file" => {
            "path" => "/var/log/kibana/kibana.log"
        }
    },
            "ecs" => {
        "version" => "8.0.0"
    },
          "event" => {
        "original" => "{\"ecs\":{\"version\":\"8.0.0\"},\"@timestamp\":\"2023-05-24T02:11:28.414-04:00\",\"message\":\"Server log: CP1 containers CPU metric threshold - 52e5c413c5eba2b161ade239aa124d2041d5bdca76deb5c5260b6023ac608464 is in a state of ALERT CPU usage is 17% in the last 1 min for 52e5c413c5eba2b161ade239aa124d2041d5bdca76deb5c5260b6023ac608464. Alert when > 10%.;\",\"log\":{\"level\":\"ERROR\",\"logger\":\"plugins.actions\"},\"process\":{\"pid\":366220},\"trace\":{\"id\":\"cf9f5500d60596ec76700cf3f8a53b56\"},\"transaction\":{\"id\":\"177d25f28df3bd5c\"}}"
    },
        "process" => {
        "pid" => 366220
    },
        "message" => "Server log: CP1 containers CPU metric threshold - 52e5c413c5eba2b161ade239aa124d2041d5bdca76deb5c5260b6023ac608464 is in a state of ALERT CPU usage is 17% in the last 1 min for 52e5c413c5eba2b161ade239aa124d2041d5bdca76deb5c5260b6023ac608464. Alert when > 10%.;"
}

However, I don't see the "container_id" or "container_message" fields, either in stdout or in the Kibana dashboard. Surprisingly, the Discover tab does not show a _grokparsefailure tag either; it appears to parse the message and just displays the "logMessage" field, which replicates the complete text already available in the "message" field.
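
In case the grok is silently failing somewhere along the way, I was also thinking of making failures more visible with an explicit tag_on_failure and a conditional output, roughly like this (just a debugging sketch; "_my_grok_failed" is a made-up tag name, not something I have configured):

filter {
    grok {
        match => [
            "message", "Server log: CP1 containers CPU metric threshold - %{WORD:container_id} is in a state of %{DATA:sev_level} %{DATA:container_message} for %{GREEDYDATA:logMessage}"
        ]
        # override the default _grokparsefailure tag with a custom one (hypothetical name)
        tag_on_failure => ["_my_grok_failed"]
    }
}

output {
    # print only the events where the grok above did not match
    if "_my_grok_failed" in [tags] {
        stdout { codec => rubydebug }
    }
}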

I would like to know if I am doing something wrong here.

Thanks,
Nitish
