Logstash: field "id" always mapped as long, even when it's a string

Hi,

Based on the log below, I assume that my Logstash always treats any record with a field named "id" as a "long", even when it's a "string".

[WARN ][logstash.outputs.elasticsearch][main][f4110a8e1e28ebbac18f43191bfa4dea9a1b050d31aef7bd8b0e3e6aa490afbd] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"13.229.9.94-webmin-webmin-api-2020.08", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x4f3a5c0>], :response=>{"index"=>{"_index"=>"13.229.9.94-webmin-webmin-api-2020.08", "_type"=>"_doc", "_id"=>"Uze0DnQBbUX_aj8AhqWE", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [log_record.response.content_json.data.rows.id] of type [long] in document with id 'Uze0DnQBbUX_aj8AhqWE'. Preview of field's value: '5f0579cb793f846f26418f44'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"5f0579cb793f846f26418f44\""}}}}}

Is there any solution for this case? Please help me with this matter. Thanks in advance.

Hi there,

I guess this is caused by the fact that you originally ingested some documents into an ES index whose log_record.response.content_json.data.rows.id field held long-like values, without applying a specific mapping in ES to make that field a string. Dynamic mapping therefore locked the field in as long. Now you're ingesting another document with 5f0579cb793f846f26418f44 as the value of log_record.response.content_json.data.rows.id, which obviously is not a long, so ES returns an error saying it cannot cast 5f0579cb793f846f26418f44 to a long.
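You can verify this by asking ES how it actually mapped that field (index name taken from your log; run it in Kibana Dev Tools or via curl):

```
GET 13.229.9.94-webmin-webmin-api-2020.08/_mapping/field/log_record.response.content_json.data.rows.id
```

If it comes back as "type": "long", the first document ingested had a numeric-looking id and dynamic mapping fixed the type from then on.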

Since you cannot change the mapping of a field in an already-populated index, what you can do depends on whether you can delete that index or not.

If it's a test index and you can delete it: erase the index, apply a template to it (before ingesting any document) specifying that you want log_record.response.content_json.data.rows.id to be mapped as a string, and then start ingesting the docs again.
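A minimal sketch of such a template, assuming a 7.x cluster with the legacy _template API and keyword as the string type (the template name webmin-api and the dynamic template name ids_as_keyword are just placeholders; a dynamic template matching any field named id fits your case, since every "id" field seems affected):

```
PUT _template/webmin-api
{
  "index_patterns": ["13.229.9.94-webmin-webmin-api-*"],
  "mappings": {
    "dynamic_templates": [
      {
        "ids_as_keyword": {
          "match": "id",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}
```

keyword is usually the right choice for opaque IDs like 5f0579cb793f846f26418f44, since you'll filter on exact values rather than full-text search them.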

If you cannot delete the index because you don't want to lose any data, you can (see the sketch after this list):

  • create a temporary index specifying the correct mapping for that field
  • reindex the docs from the old index into this new temporary one, making sure the field is now set as a string
  • delete the old index with the wrong mapping
  • recreate the original index (the one you just deleted) with the right mapping
  • reindex the old docs back from the temporary index to the original index, verifying the field now has the right mapping
  • delete the temporary index
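A sketch of that round trip in Kibana Dev Tools syntax, assuming a 7.x cluster (webmin-api-tmp is a placeholder name, and I'm reusing the dynamic template from the example above):

```
# 1. temporary index with the corrected mapping
PUT webmin-api-tmp
{
  "mappings": {
    "dynamic_templates": [
      { "ids_as_keyword": { "match": "id", "mapping": { "type": "keyword" } } }
    ]
  }
}

# 2. copy the docs over
POST _reindex
{
  "source": { "index": "13.229.9.94-webmin-webmin-api-2020.08" },
  "dest":   { "index": "webmin-api-tmp" }
}

# 3. drop the wrongly mapped index
DELETE 13.229.9.94-webmin-webmin-api-2020.08

# 4. recreate it with the right mapping
PUT 13.229.9.94-webmin-webmin-api-2020.08
{
  "mappings": {
    "dynamic_templates": [
      { "ids_as_keyword": { "match": "id", "mapping": { "type": "keyword" } } }
    ]
  }
}

# 5. reindex back
POST _reindex
{
  "source": { "index": "webmin-api-tmp" },
  "dest":   { "index": "13.229.9.94-webmin-webmin-api-2020.08" }
}

# 6. drop the temporary index
DELETE webmin-api-tmp
```

The existing docs with numeric-looking ids will reindex into the keyword field without trouble, since JSON numbers are coerced to strings for keyword fields.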

Hi, thanks for the detailed answer. I will try the reindex approach.
