[2018-04-20T00:33:26,274][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.04.20-ea1", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x5a7f3243>], :response=>{"index"=>{"_index"=>"logstash-2018.04.20-ea1", "_type"=>"doc", "_id"=>"KZJ44GIBORYB4ebWWD38", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [severity]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"INFO\""}}}}}
My logstash-plain.log is exploding with this error hundreds of times per minute. Could someone point me in the right direction on how to troubleshoot it?
[2018-04-20T03:36:57,778][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.04.20", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x47553b9>], :response=>{"index"=>{"_index"=>"logstash-2018.04.20", "_type"=>"doc", "_id"=>"g6Ig4WIBqfBa_dd2Xqus", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [source] tried to parse field [source] as object, but found a concrete value"}}}}
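(Note that this second warning is a different mapping conflict: `[source]` was at some point mapped as an object, and a later event carried it as a plain string. An illustrative pair of events showing the shape mismatch — the paths here are made up:)

```
{"source": {"path": "/var/log/cassandra/system.log"}}   <- maps [source] as an object
{"source": "/var/log/cassandra/system.log"}             <- concrete value, rejected with 400
```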
Interestingly, ➜ curl ${ESEA1}/logstash-2018.04.19-ea1/_mapping/severity didn't quite work; I had to curl the whole mapping endpoint and grep for severity.
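As an aside, Elasticsearch does have a per-field mapping endpoint, `GET <index>/_mapping/field/<field>`, which avoids grepping the full mapping. A minimal sketch of pulling the field's type out of that response — the response body below is an assumed example shaped like the 6.x field-mapping API, with `"long"` matching the error above:

```python
# Illustrative response from something like:
#   curl "${ESEA1}/logstash-2018.04.19-ea1/_mapping/field/severity?pretty"
# The index name and the "long" type are assumptions for illustration.
RESPONSE = {
    "logstash-2018.04.19-ea1": {
        "mappings": {
            "doc": {
                "severity": {
                    "full_name": "severity",
                    "mapping": {"severity": {"type": "long"}},
                }
            }
        }
    }
}

def mapped_types(response, field):
    """Return {index_name: mapped_type} for `field` across all indices."""
    types = {}
    for index, body in response.items():
        for fields in body["mappings"].values():
            if field in fields:
                types[index] = fields[field]["mapping"][field]["type"]
    return types

print(mapped_types(RESPONSE, "severity"))
```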
Adding to this: in my /usr/share/logstash/patterns directory, the only reference to severity is here:
############################## cassandra ############################################
CASSANDRA %{SYSLOG5424PRI}%{CISCOTIMESTAMP} %{DATA:HOSTNAME} cassandra_chum-log %{DATA:severity} %{DATA} Compacted %{NUMBER:sstables:int} sstables to \[%{DATA:data_file}\]. %{NUMBER:start_bytes:int} bytes to %{NUMBER:end_bytes:int} \(~%{NUMBER:percentage:int}\% of original\) in %{NUMBER:duration_int}ms = %{NUMBER:mb_sec:int}MB/s. %{NUMBER:start_partitions:int} total partitions merged to %{NUMBER:end_partitions:int}. Partition merge counts were %{GREEDYDATA:merge_counts}
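Since %{DATA:severity} captures strings like "INFO", the field wants a string mapping. One way to pin that down for the daily logstash-* indices is an index template — a sketch only, with an assumed template name, not the poster's actual setup:

```
PUT _template/logstash-severity
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "doc": {
      "properties": {
        "severity": {"type": "keyword"}
      }
    }
  }
}
```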
Okay, so for some reason severity had been mapped as an integer. You clearly can't store a string like "INFO" there. Decide which data type you want the field to be and reindex the data accordingly.
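For an index that already has the wrong mapping, the usual route is to create a new index with the corrected mapping and copy the data over with the `_reindex` API — a sketch, with an illustrative destination name:

```
POST _reindex
{
  "source": {"index": "logstash-2018.04.20-ea1"},
  "dest": {"index": "logstash-2018.04.20-ea1-fixed"}
}
```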
What data type should severity be, an integer or a string?
Did you at some point store a document where severity was an integer? Was that intentional? Do you still have a need to index documents with an integer severity field?
Thanks for the reply. I can't imagine it was ever an integer. That said, we are decommissioning our Cassandra ring this week, so this is likely a non-issue going forward.