Mapper parsing exception - failed to parse field

Hello, could you please advise what this problem is related to and how to solve it?
Below is a snippet of the warning I am getting from logstash.
A little situational description: we have a large number of microservices on k8s. They log to stdout, where the logs are collected by Filebeat and forwarded to Logstash and then to ES. The problem described below affects only 1-2% of the services; the remaining ~98% deliver their logs to ES without problems. It is a mystery to me where to look for the cause and what it could be.

[2022-03-24T11:59:22,689][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"json-k8-2022.03.24", :_type=>"_doc", :_routing=>nil}, #<LogStash::Event:0x55ed7bfa>], :response=>{"index"=>{"_index"=>"json-k8-2022.03.24", "_type"=>"_doc", "_id"=>"9APLu38Bt4zdiU7zsGs7", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [level] of type [long] in document with id '9APLu38Bt4zdiU7zsGs7'. Preview of field's value: 'INFO'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"INFO\""}}}}}

Hi @Robert777 , welcome!

Have you checked the data? It looks like a mapping issue.

Do you mean checking the data (logs) that are generated in the service? Could you please elaborate further on your question?

Correct: check the data and verify that the field with the issue is consistently of the correct type, long.

I will try to check it out in the next few days.

In Elasticsearch you either created a mapping that tells it that [level] should be a number, or dynamic mapping has decided that the field should be long, due to the value in the first document indexed that included that field. It cannot parse "INFO" as a long.
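To confirm which type the index actually chose, you can ask Elasticsearch for the field's mapping (Kibana Dev Tools syntax, using the index name from the warning above):

```
GET /json-k8-2022.03.24/_mapping/field/level
```

If the response shows `"type": "long"`, the first document indexed into that day's index contained a numeric `level`, and every later document with `"level": "INFO"` will be rejected with the mapper_parsing_exception you posted.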

@Badger
Do you think the type of [level] should be changed to something other than long, and can that be done in the microservice? It looks like ES somehow decided to dynamically assign the long type. What would be the preferred solution so that the change does not affect the other services?
I would prefer the change to be possible at the service level, so that it does not affect other services that are logging correctly into the same index.
I am not an expert on the subject and my questions may be imprecise, but I am open to any advice.

@Robert777

Maybe change the field to a string?
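For example, a hypothetical index template that forces [level] to keyword on future daily indices could look like this (the template name is made up; this assumes an ES version with composable templates, 7.8+, and only affects indices created after the template exists):

```
PUT _index_template/json-k8-level-as-keyword
{
  "index_patterns": ["json-k8-*"],
  "template": {
    "mappings": {
      "properties": {
        "level": { "type": "keyword" }
      }
    }
  }
}
```

Note that this changes the field for every service writing to `json-k8-*` indices, so check first that nothing relies on [level] being numeric.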

Or do some mutations via Logstash?
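A minimal sketch of that approach, assuming the conflicting field is [level] (as in the warning) and that the affected services can be identified by a field such as [kubernetes][container][name] (that field name is an assumption; the metadata Filebeat adds may differ in your setup):

```
filter {
  # Hypothetical condition: scope the change to the affected service(s) only,
  # so services that already log a numeric [level] are untouched.
  if [kubernetes][container][name] == "problem-service" {
    mutate {
      # Move the string value into a differently named field so the
      # long-mapped [level] never receives values like "INFO".
      rename => { "level" => "level_name" }
    }
  }
}
```

This avoids touching the mapping of the shared index at all, which matches your preference for a fix that does not affect the services that log correctly.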

@zx8086
Yes, I'll investigate both suggestions.

@Robert777

When you have both numbers and strings, like I do, I normalise log.level this way, as an example:

    filter {
      if [log.level] =~ "1" {
        mutate {
          replace => { "log.level" => "Error" }
        }
      }

      if [log.level] =~ "2" {
        mutate {
          replace => { "log.level" => "Warning" }
        }
      }

      if [log.level] =~ "3" {
        mutate {
          replace => { "log.level" => "Info" }
        }
      }

      if [log.level] =~ "4" {
        mutate {
          replace => { "log.level" => "Trace" }
        }
      }

      if [log.level] =~ "5" {
        mutate {
          replace => { "log.level" => "File Event" }
        }
      }
    }

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.