Mapper_parsing_exception", "reason"=>"failed to parse [severity]"

full error:

[2018-04-20T00:33:26,274][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.04.20-ea1", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x5a7f3243>], :response=>{"index"=>{"_index"=>"logstash-2018.04.20-ea1", "_type"=>"doc", "_id"=>"KZJ44GIBORYB4ebWWD38", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [severity]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"INFO\""}}}}}

My logstash-plain.log is exploding with this error hundreds of times per minute. Could someone point me in the right direction on how to troubleshoot this issue?

I'm on ELK 6.2.

I have the same problem, but in another field:

[2018-04-20T03:36:57,778][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.04.20", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x47553b9>], :response=>{"index"=>{"_index"=>"logstash-2018.04.20", "_type"=>"doc", "_id"=>"g6Ig4WIBqfBa_dd2Xqus", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [source] tried to parse field [source] as object, but found a concrete value"}}}}

I'm on ELK 6.2 too

In my case, I just deleted the current index to solve the error.
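That was just a delete index call, something like this (adjust the host and index name to your own; be aware this throws away everything in that index, which then gets recreated by the next event Logstash sends, using whatever index template applies):

curl -XDELETE 'http://localhost:9200/logstash-2018.04.20'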

How has the severity field been mapped in the ES index? Use ES's get mapping API to find out.
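For example, something along these lines (adjust the host; the index name here is just taken from your log above):

curl -XGET 'http://localhost:9200/logstash-2018.04.20-ea1/_mapping?pretty'

If you only want a single field, the field mapping endpoint should work too:

curl -XGET 'http://localhost:9200/logstash-2018.04.20-ea1/_mapping/field/severity?pretty'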

Thanks for the replies, gentlemen:

@magnusbaeck:

"severity":{"type":"long"},"severity_label":{"type":"text","norms":false,"fields":{"keyword": 
{"type":"keyword","ignore_above":256}}}, 

A basic curl resulted in the above ^

Interestingly, curl ${ESEA1}/logstash-2018.04.19-ea1/_mapping/severity didn't quite work; I had to curl the whole mapping endpoint and grep for severity.
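It ended up being roughly this (the exact command is approximate; ${ESEA1} is the same cluster URL as above):

curl -s "${ESEA1}/logstash-2018.04.19-ea1/_mapping?pretty" | grep -A 2 '"severity"'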

Adding to this: in my /usr/share/logstash/patterns, the only reference to severity is here:

##############################    cassandra    ############################################
CASSANDRA %{SYSLOG5424PRI}%{CISCOTIMESTAMP} %{DATA:HOSTNAME} cassandra_chum-log %{DATA:severity} %{DATA} Compacted %{NUMBER:sstables:int} sstables to \[%{DATA:data_file}\].  %{NUMBER:start_bytes:int} bytes to %{NUMBER:end_bytes:int} \(~%{NUMBER:percentage:int}\% of original\) in %{NUMBER:duration_int}ms = %{NUMBER:mb_sec:int}MB/s.  %{NUMBER:start_partitions:int} total partitions merged to %{NUMBER:end_partitions:int}.  Partition merge counts were %{GREEDYDATA:merge_counts}

Okay, so for some reason severity had been mapped as an integer. You clearly can't store a string like "INFO" there. Decide which data type you want the field to be and reindex the data accordingly.


@magnusbaeck

I don't know how to do that precisely. Any hints or links? The ELK stack was dropped into my lap and I'm flying a bit blind.

Thanks!

What data type should severity be, an integer or a string?

Did you at some point store a document where severity was an integer? Was that intentional? Do you still have a need to index documents with an integer severity field?

Thanks for the reply. I cannot imagine it was ever an integer. That being said, we are decommissioning our Cassandra ring this week, so this is likely a non-issue going forward.

I appreciate the help from the community.

If you can delete the index and start from scratch, things should sort themselves out just fine, but otherwise you'll have to:

  • set up an index template (modify the one that ships with Logstash) to map the severity field as a string,
  • reindex the data, for example by copying the index contents to a temporary index (use ES's reindex API) and then copying it back; a rough sketch of both steps follows below.
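A rough sketch of both steps, assuming you want severity to be a string, the type name doc from your log above, and placeholder hosts; the index names are the ones from this thread and the -tmp suffix is just an arbitrary temporary name (adapt everything to your setup):

# 1. In your logstash index template, map severity as a keyword (or text) instead of long.
#    The relevant fragment of the template's mappings section would look something like:
"mappings": {
  "doc": {
    "properties": {
      "severity": { "type": "keyword" }
    }
  }
}

# 2. Copy the data out with the reindex API, recreate the original index (so the new
#    template/mapping applies), then copy it back:
curl -XPOST 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' -d '
{
  "source": { "index": "logstash-2018.04.19-ea1" },
  "dest":   { "index": "logstash-2018.04.19-ea1-tmp" }
}'

curl -XDELETE 'http://localhost:9200/logstash-2018.04.19-ea1'

curl -XPOST 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' -d '
{
  "source": { "index": "logstash-2018.04.19-ea1-tmp" },
  "dest":   { "index": "logstash-2018.04.19-ea1" }
}'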

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.