Hi.
I've set up a filebeat -> logstash -> elastic flow for iptables logs. Filebeat uses the default iptables module, and Logstash has a minimal config taken from the documentation (beats input, elasticsearch output, no filters). The default Filebeat ingest pipelines are installed in Elasticsearch.
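For reference, the Logstash pipeline is essentially the beats example from the Filebeat docs, roughly this (the hosts value is a placeholder for my real endpoint, and I've omitted auth settings):

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]  # placeholder
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    action => "create"
    pipeline => "%{[@metadata][pipeline]}"
  }
}
```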
When a message is sent through this flow, I get the following error in the DLQ:

```
Could not index event to Elasticsearch. status: 400, action: ["create", {:_id=>nil, :_index=>"filebeat-8.10.3", :routing=>nil, :pipeline=>"filebeat-8.10.3-iptables-log-pipeline"}, {"fileset"=>{"name"=>"log"}, "log"=>{"file"=>{"path"=>"/var/log/firewall"
...
response: {"create"=>{"_index"=>".ds-filebeat-8.10.3-2023.12.12-000025", "_id"=>"hYy5X4wBj3IF6fYTGKKe", "status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:65] failed to parse field [iptables.ether_type] of type [long] in document with id 'hYy5X4wBj3IF6fYTGKKe'. Preview of field's value: '08:00'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"08:00\""}}}}
```
The 08:00 is the hex value of ether_type (it comes from the iptables log), and the painless script in filebeat-8.10.3-iptables-log-pipeline should convert it to the corresponding long value (2048).
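For context, the conversion that script performs amounts to something like this (my paraphrase, not the exact module source):

```painless
// Sketch of the hex-to-long conversion; the field access is paraphrased,
// not copied from the module script.
// "08:00" -> strip the colon -> "0800" -> parse as base 16 -> 2048
ctx.iptables.ether_type = Long.parseLong(ctx.iptables.ether_type.replace(":", ""), 16);
```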
Anyway, the conversion works as expected if I test the message with Kibana's ingest pipeline test tool, or if I modify the flow to send directly (filebeat -> elastic).
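In other words, simulating the pipeline against the same document succeeds, roughly like this (the actual log line is omitted here):

```
POST _ingest/pipeline/filebeat-8.10.3-iptables-log-pipeline/_simulate
{
  "docs": [
    { "_source": { "message": "<the original iptables log line>" } }
  ]
}
```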
I guess there is some difference in ingest processing behavior depending on whether the event comes from Beats or from Logstash, but I couldn't find any clue. Any ideas?
Thanks.