I'm importing data from SQL Server into Elasticsearch 6.6 using Logstash with the JDBC input plugin. Some of my records are skipped, and Logstash prints warnings like these:
[2019-02-15T16:48:52,630][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"2111348540", :_index=>"order_tracking1", :_type=>"orderstracking_report", :_routing=>nil}, #LogStash::Event:0x14b8bcd8], :response=>{"index"=>{"_index"=>"order_tracking1", "_type"=>"orderstracking_report", "_id"=>"2111348540", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [forecast_voice_rgu] cannot be changed from type [long] to [float]"}}}}
[2019-02-15T16:48:52,867][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"13625215", :_index=>"order_tracking1", :_type=>"orderstracking_report", :_routing=>nil}, #LogStash::Event:0x70e6f34c], :response=>{"index"=>{"_index"=>"order_tracking1", "_type"=>"orderstracking_report", "_id"=>"13625215", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [forecast_total_rgu] cannot be changed from type [long] to [float]"}}}}
[2019-02-15T16:48:52,872][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"190215307182245", :_index=>"order_tracking1", :_type=>"orderstracking_report", :_routing=>nil}, #LogStash::Event:0x729a53dd], :response=>{"index"=>{"_index"=>"order_tracking1", "_type"=>"orderstracking_report", "_id"=>"190215307182245", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [forecast_total_rgu] cannot be changed from type [long] to [float]"}}}}
Three of my records are missing because of these errors. However, the fields [forecast_total_rgu] and [forecast_voice_rgu] are not long columns in my database; they are floats holding values with up to 2 decimal places. I don't understand why Logstash skips these records with these warnings when the source columns aren't long at all.
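My guess so far: if dynamic mapping fixes each field's type from the first document indexed, and that document happened to carry a whole-number value, the field would be mapped as long and later decimal values would then conflict. One thing I'm considering is coercing these two fields in the pipeline before they reach Elasticsearch. This is only a sketch using the standard mutate filter; the field names are taken from the warnings above:

```
filter {
  mutate {
    # force both forecast fields to float so that a whole-number value
    # in the first event cannot cause the mapping to be created as long
    convert => {
      "forecast_total_rgu" => "float"
      "forecast_voice_rgu" => "float"
    }
  }
}
```

Would this be the right approach, or does the conversion need to happen on the Elasticsearch side instead?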
I then created another test index and re-imported the same data. This time it skipped 4 different records and imported the previously failing ones without any problem. I don't understand this seemingly random behaviour of Logstash.
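If the first document to arrive really is what decides the guessed type, that would explain the randomness, since each new index guesses again from whichever record happens to land first. In that case, would creating the index with an explicit mapping before importing avoid the problem entirely? A sketch of what I have in mind, assuming Elasticsearch 6.6 syntax and the same two field names:

```
PUT order_tracking1
{
  "mappings": {
    "orderstracking_report": {
      "properties": {
        "forecast_total_rgu": { "type": "float" },
        "forecast_voice_rgu": { "type": "float" }
      }
    }
  }
}
```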
Can anybody help?