Ingest error with Logstash

Hi.
I've created a filebeat -> logstash -> elastic flow for iptables logs.
Filebeat uses the default iptables module.
Logstash has minimal config (beat input, elastic output, no filter).
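Roughly, a minimal pipeline of this shape looks like the beats-input / elasticsearch-output example from the Logstash docs (the port and hosts below are placeholders, not the actual values):

```conf
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    # Route each event to the ingest pipeline the Filebeat module selected
    pipeline => "%{[@metadata][pipeline]}"
  }
}
```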
The Logstash pipeline comes from the documentation.
The default Elastic ingest pipelines are installed.

When a message is sent through the above flow, I get a DLQ error message:

Could not index event to Elasticsearch. status: 400, action: ["create", {:_id=>nil, :_index=>"filebeat-8.10.3", :routing=>nil, :pipeline=>"filebeat-8.10.3-iptables-log-pipeline"}, {"fileset"=>{"name"=>"log"}, "log"=>{"file"=>{"path"=>"/var/log/firewall"
...
 response: {"create"=>{"_index"=>".ds-filebeat-8.10.3-2023.12.12-000025", "_id"=>"hYy5X4wBj3IF6fYTGKKe", "status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:65] failed to parse field [iptables.ether_type] of type [long] in document with id 'hYy5X4wBj3IF6fYTGKKe'. Preview of field's value: '08:00'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"08:00\""}}}}

The 08:00 is the hexadecimal value of ether_type (it comes from the iptables log), and the Painless script in filebeat-8.10.3-iptables-log-pipeline should convert it to a long value (2048).
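The conversion the pipeline's Painless script is expected to perform is just hex-to-integer parsing; a minimal Python sketch of the same logic (the function name is mine, the sample value is from the log above):

```python
def ether_type_to_long(value: str) -> int:
    """Convert an iptables ether_type hex string like "08:00"
    to its numeric value (0x0800 == 2048)."""
    # Strip the colon separator, then parse the remaining hex digits.
    return int(value.replace(":", ""), 16)

print(ether_type_to_long("08:00"))  # IPv4 EtherType -> 2048
```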

However, the conversion works as expected if I test the message with Kibana's ingest pipeline test feature, or change the flow to send directly (filebeat -> elastic).

I suspect there is some difference in ingest processing behavior depending on whether the source is Beats or Logstash, but I couldn't find any clue. Any ideas?
Thanks.

I think I found the problem.
Logstash adds an event.original field to the data before sending it to the ingest pipeline, and the third step of the iptables ingest pipeline is a Rename processor (renaming "message" to "event.original"), which throws an "already exists" error and stops processing.

When I set the "Ignore failures for this processor" option, processing worked as expected (though a condition might be more elegant).
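For reference, "Ignore failures for this processor" corresponds to the `ignore_failure` flag on the processor; the relevant step would look something like this (a sketch of that one processor, not the full pipeline definition):

```json
{
  "rename": {
    "field": "message",
    "target_field": "event.original",
    "ignore_failure": true
  }
}
```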

Still, neither "ignore failures" nor the condition (ctx?.event?.original == null) is ideal, because both fields remain.
A failure handler (remove field) could be a solution, but I think it's ugly...
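The failure-handler variant would attach a per-processor `on_failure` block that removes the leftover field, something like this sketch:

```json
{
  "rename": {
    "field": "message",
    "target_field": "event.original",
    "on_failure": [
      {
        "remove": {
          "field": "message"
        }
      }
    ]
  }
}
```

This keeps event.original (the copy Logstash already added) and drops the duplicate message field when the rename fails.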
