Logstash: "Failed to parse with all enclosed parsers"

I want to use Winlogbeat to monitor Windows 10.
Logs are shipped via Logstash to Elasticsearch.

   winlogbeat(7.9.3)->logstash(7.9.3)->elasticsearch(7.8.0)

After I started Winlogbeat and Logstash, I got no errors from Winlogbeat but the following WARN from Logstash:

   [WARN ] 2020-11-03 01:05:06.082 [[main]>worker1] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"winlogbeat-7.9.3-2020.11.01", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x708d7860>], :response=>{"index"=>{"_index"=>"winlogbeat-7.9.3-2020.11.01", "_type"=>"_doc", "_id"=>"7amziXUBbCOmbqcJwvHU", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [winlog.event_data.param1] of type [date] in document with id '7amziXUBbCOmbqcJwvHU'. Preview of field's value: 'svchost'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [svchost] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}

① How can I fix this WARN? (Winlogbeat uses the default settings in winlogbeat.yml except for outputting to Logstash; Logstash uses a beats input and an elasticsearch output, with no filter.)
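
For reference, a minimal pipeline of that shape looks like this (the beats port and index pattern are the documented defaults, and the host/credentials mirror the URLs in the errors below, so this is a sketch rather than my exact file):

    input {
      beats {
        port => 5044    # default port for the beats input
      }
    }
    output {
      elasticsearch {
        hosts => ["https://es:443"]    # host as it appears in the errors below
        user => "elastic"
        password => "xxxx"
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    }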
② Will this WARN cause the following 403 errors? (Assume my network has enough bandwidth.)

    [WARN ] 2020-11-03 01:05:06.093 [[main]>worker1] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"winlogbeat-7.9.3-2020.11.02", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0xb61b0aa>], :response=>{"index"=>{"_index"=>"winlogbeat-7.9.3-2020.11.02", "_type"=>"_doc", "_id"=>"UKmziXUBbCOmbqcJwvLU", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [winlog.event_data.param1] of type [date] in document with id 'UKmziXUBbCOmbqcJwvLU'. Preview of field's value: 'svchost'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [svchost] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
    [ERROR] 2020-11-03 01:05:06.198 [[main]>worker1] elasticsearch - Encountered a retryable error. Will Retry with exponential backoff  {:code=>403, :url=>"https://es:443/_bulk"}
    [WARN ] 2020-11-03 01:05:08.290 [[main]>worker1] elasticsearch - Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [https://elastic:xxxx@es:443/][Manticore::ClientProtocolException] es:443 failed to respond {:url=>https://elastic:xxxx@es:443/, :error_message=>"Elasticsearch Unreachable: [https://elastic:xxxx@es:443/][Manticore::ClientProtocolException] es:443 failed to respond", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
    [ERROR] 2020-11-03 01:05:08.291 [[main]>worker1] elasticsearch - Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [https://elastic:xxxx@es:443/][Manticore::ClientProtocolException] es:443 failed to respond", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>4}
    [ERROR] 2020-11-03 01:05:12.307 [[main]>worker1] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8}

③ If I use persistent queues in Logstash, can I prevent data loss from these errors? Settings in logstash.yml:

    queue.type: persisted
    queue.drain: true

Thank you in advance.

OK. Assuming you are using dynamic mapping and do not have a template, what happened is this: the first event you indexed that contained winlog.event_data.param1 had a value that got a positive result for date detection, so Elasticsearch set the field's type to date. Every later event whose param1 is not date-like (here, "svchost") is then rejected with the mapper_parsing_exception you are seeing.
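
To illustrate (with a hypothetical first value, since we cannot see your original event), this is the sequence in Kibana console terms:

    # First event: param1 happens to look like a date, so dynamic mapping
    # types the field as date (hypothetical value for illustration):
    PUT winlogbeat-7.9.3-2020.11.01/_doc/1
    { "winlog": { "event_data": { "param1": "2020-11-01T00:00:00" } } }

    # The generated mapping now contains:
    GET winlogbeat-7.9.3-2020.11.01/_mapping
    # ... "param1": { "type": "date" } ...

    # Any later event whose param1 is not a parseable date is rejected,
    # which is exactly the 400 in your log:
    PUT winlogbeat-7.9.3-2020.11.01/_doc/2
    { "winlog": { "event_data": { "param1": "svchost" } } }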

You could disable date detection (set "date_detection": false in the index mappings), or add a template that forces the paramX fields to be keyword. Either way you will need to start over with a new index, because the mapping of an existing field cannot be changed in place.
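
Roughly, a legacy template along these lines would do it (the template name and order are my own choices; adjust the pattern to your indices):

    PUT _template/winlogbeat-params-as-keyword
    {
      "index_patterns": ["winlogbeat-*"],
      "order": 100,
      "mappings": {
        "dynamic_templates": [
          {
            "event_data_params": {
              "path_match": "winlog.event_data.param*",
              "mapping": { "type": "keyword" }
            }
          }
        ]
      }
    }

Templates are only applied when an index is created, which is why a new index is needed. If you prefer the other route, put "date_detection": false at the top level of the "mappings" object instead of (or in addition to) the dynamic template.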

I do not think those 400 statuses will cause a 403.

Persistent queues will not help: they protect against Logstash restarts and backpressure, not against events that Elasticsearch rejects with a 400. A dead letter queue would allow you to save the rejected events to disk, which may or may not be useful.
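
If you do want to keep the rejected events: the DLQ is switched on in logstash.yml with dead_letter_queue.enable: true (it only captures events the elasticsearch output fails with a 400 or 404), and a second pipeline can read them back, roughly like this (the path shown is the default data directory and is an assumption):

    input {
      dead_letter_queue {
        path => "/usr/share/logstash/data/dead_letter_queue"   # assumed default location
        commit_offsets => true
      }
    }
    output {
      stdout { codec => rubydebug }   # inspect or fix the offending field, then re-route
    }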
