Logstash to Elasticsearch: 10-20 minute delay, and not all logs are indexed

I am sending logs from Logstash to Elasticsearch. Elasticsearch itself seems to be working fine, but there is a 10-20 minute delay before logs appear in the index, and I am also getting the error below in Logstash. Any help would be greatly appreciated.

[2023-06-16T14:36:50,078][WARN ][logstash.outputs.elasticsearch][main][044debbbdfef92ace7e72d6a3ec877042a9516199b68b4ae9f8ddd3c0a5bc642] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2023.06.16", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x42680064>], :response=>{"index"=>{"_index"=>"logstash-2023.06.16", "_type"=>"_doc", "_id"=>"VKujxIgBjcUFivZxMzvM", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [timestamp] of type [date] in document with id 'VKujxIgBjcUFivZxMzvM'. Preview of field's value: '1686925083.1633325'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [1686925083.1633325] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"date_time_parse_exception: Failed to parse with all enclosed parsers"}}}}}}

I cannot speak to the delay without seeing more of the configuration, but the failure to parse '1686925083.1633325' as [strict_date_optional_time||epoch_millis] is because that value is in epoch seconds (with a fractional part), not epoch milliseconds, so the default date mapping rejects it.
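One way to fix that is to convert the epoch-seconds value in Logstash before it reaches Elasticsearch. A minimal sketch using the date filter's `UNIX` format (which accepts epoch seconds, including fractional values); the field name `timestamp` is taken from the error message, and whether you want the result in `@timestamp` or a separate field depends on your mapping:

```
filter {
  date {
    # "UNIX" parses epoch seconds such as 1686925083.1633325;
    # "epoch_millis" on the Elasticsearch side would not.
    match  => ["timestamp", "UNIX"]
    target => "@timestamp"
    # Optionally drop the raw field so it no longer hits the date mapping:
    # remove_field => ["timestamp"]
  }
}
```

Alternatively, you could change the index mapping for that field to `epoch_second`, but rewriting the event in Logstash avoids a reindex.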


Just to add, I assume the issues are: grok (especially if there are too many GREEDYDATA patterns, which are expensive to match) and the mapper_parsing_exception above.
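Multiple GREEDYDATA captures force the grok regex engine to backtrack heavily, which can contribute to pipeline lag. A hedged sketch (the log format here is hypothetical; substitute your own fields) of replacing loose patterns with anchored, specific ones:

```
filter {
  grok {
    # Anchors (^ and $) plus specific patterns like TIMESTAMP_ISO8601 and
    # LOGLEVEL match far faster than several GREEDYDATA captures in a row.
    match => { "message" => "^%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}$" }
  }
}
```

Keeping at most one GREEDYDATA, at the end of the pattern, is a common rule of thumb.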

Please also check the refresh_interval setting on your index; a long refresh interval would delay documents becoming visible in search, which could explain the lag.
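You can inspect and adjust that setting with the index settings API. A sketch in Kibana Dev Tools syntax, using the index name from the error above (adjust to yours):

```
GET logstash-2023.06.16/_settings?include_defaults=true&filter_path=*.settings.index.refresh_interval,*.defaults.index.refresh_interval

PUT logstash-2023.06.16/_settings
{
  "index": { "refresh_interval": "1s" }
}
```

The default is 1s, so a 10-20 minute delay is more likely caused by backpressure (e.g. slow grok filters or the 400 errors above filling the dead-letter path) than by refresh, but it is quick to rule out.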

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.