JSON format decode error

I am using:

input {
  file {
    id => "my_lt_log"
    path => "/logs/logtransformer.log"
    type => "log"
    start_position => "beginning"
  }
}
filter {
  if [type] == "log" {
    mutate {
      remove_field => [ "kubernetes" ]
    }
    mutate {
      gsub => [ "message", "(\W)at(\W)", '\1""\2' ]
    }
    if [message][metadata][proc_id] {
      mutate {
        add_field => { "[metadata][proc_id]" => "%{[message][metadata][proc_id]}" }
      }
    }
    if "_jsonparsefailure" in [tags] {
      mutate {
        add_field => {
          "logplane" => "adp-app-logs"
          "abc" => "%{[message]}"
        }
        remove_field => [ "message", "kubernetes" ]
      }
    }
    else {
      mutate {
        rename => {
          "path" => "filename"
        }
        add_field => {
          "def" => "%{[message]}"
          "logplane" => "adp-app-logs"
          "version" => "%{[message][version]}"
          "severity" => "%{[message][severity]}"
          "service_id" => "%{[message][service_id]}"
          "[metadata][container_name]" => "%{[message][metadata][container_name]}"
          "[metadata][node_name]" => "%{[message][metadata][node_name]}"
          "[metadata][namespace]" => "%{[message][metadata][namespace]}"
          "[metadata][pod_name]" => "%{[message][metadata][pod_name]}"
          "[metadata][pod_uid]" => "%{[message][metadata][pod_uid]}"
          "message" => "%{[message][message]}"
          "timestamp" => "%{[message][timestamp]}"
        }
      }
      mutate {
        remove_field => [ "type", "host", "message", "kubernetes" ]
      }
    }
  }
}
output {
...
}

Output in Elasticsearch:

{
        "_index" : "adp-app-logs-2023.02.03",
        "_type" : "_doc",
        "_id" : "PpiDF4YBCMtUNdxoMJFW",
        "_score" : 0.79323065,
        "_source" : {
          "filename" : "/logs/logtransformer.log",
          "metadata" : {
            "pod_name" : "%{[message][metadata][pod_name]}",
            "container_name" : "%{[message][metadata][container_name]}",
            "node_name" : "%{[message][metadata][node_name]}",
            "namespace" : "%{[message][metadata][namespace]}",
            "pod_uid" : "%{[message][metadata][pod_uid]}"
          },
          "timestamp" : "%{[message][timestamp]}",
          "version" : "%{[message][version]}",
          "@version" : "1",
          "service_id" : "%{[message][service_id]}",
          "def" : "{\"version\": \"1.1.0\", \"timestamp\": \"2023-02-03T13:41:43.034Z\", \"severity\": \"info\", \"service_id\": \"eric-log-transformer\", \"metadata\" : {\"namespace\": \"zyadros\", \"pod_name\": \"eric-log-transformer-7b64896976-s6h5r\", \"node_name\": \"node-10-63-142-135\", \"pod_uid\": \"336c9706-41a9-41c0-b459-2eb4e9f6e2b4\", \"container_name\": \"logtransformer\"}, \"message\": \"Starting pipeline {:pipeline_id=>'opensearch', 'pipeline.workers'=>2, 'pipeline.batch.size'=>2048, 'pipeline.batch.delay'=>50, 'pipeline.max_inflight'=>4096, 'pipeline.sources'=>['/opt/logstash/resource/searchengine.conf'], :thread=>'#<Thread:0x7649ae47 run>'}\"}",
          "@timestamp" : "2023-02-03T13:41:58.044976Z",
          "logplane" : "adp-app-logs",
          "severity" : "%{[message][severity]}"
        }
      }

How can I get the actual values, and how do I decode the JSON message stored in 'def'?

OpenSearch/OpenDistro are AWS-run products and differ from the original Elasticsearch and Kibana products that Elastic builds and maintains. You may need to contact them directly for further assistance.



You should use a json filter to parse the JSON. With your current configuration you will never have a _jsonparsefailure tag. Also, fields like [message][metadata][proc_id] are never going to exist if you do not add them.
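As a minimal sketch (assuming each line of /logs/logtransformer.log is a single JSON document), the json filter could go near the top of the filter block. The source/target options and the default _jsonparsefailure tag are standard options of the Logstash json filter plugin; the target is chosen here so that the existing %{[message][...]} references can resolve:

filter {
  if [type] == "log" {
    # Parse the raw JSON string in "message" into an object and write the
    # result back to [message], so references like %{[message][version]}
    # point at real fields instead of staying unresolved.
    json {
      source => "message"
      target => "message"
    }
    # If a line is not valid JSON, the filter tags the event with
    # "_jsonparsefailure" (its default tag_on_failure), which makes the
    # existing conditional on that tag meaningful.
  }
}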

I already used a json filter earlier but got multiple errors like 'Error Parsing JSON'. The configuration mentioned above is the correct one so far... Can you please give an example?
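One possible cause of 'Error Parsing JSON' errors is that not every line in the file is JSON (for example, continuation lines of a stack trace). The sketch below shows one way to guard the json filter in that case; the leading-brace check and the explicit tag are illustrative assumptions, not part of the original configuration:

filter {
  if [type] == "log" {
    if [message] =~ /^\s*\{/ {
      # The line looks like a JSON object, so try to decode it.
      json {
        source => "message"
        target => "message"
      }
    } else {
      # Not JSON: mark it so the existing "_jsonparsefailure" branch
      # handles it instead of producing a parse error.
      mutate { add_tag => [ "_jsonparsefailure" ] }
    }
  }
}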
