Logstash 7.3.1
I am reading from a CSV:
columns => ["field1","field2","field3","field4","field5","field6","field7","field8","field9","field10","field11","field12","CustomID","field13","field14"]
Then I want to enrich my documents by using the translate filter on the field CustomID:
translate {
  field            => "CustomID"
  dictionary_path  => "C:/path/path/path/translate.csv"
  refresh_interval => 500
  destination      => "MAPPING"
  fallback         => "nan"
}
dissect {
  mapping => {
    "MAPPING" => "%{field1},%{field2},%{field3},%{field4},%{field5},%{field6},%{field7}"
  }
}
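For illustration, this is what I expect the dissect above to do on a successful lookup versus what happens on the fallback value (the values here are made up):

```
MAPPING = "v1,v2,v3,v4,v5,v6,v7"  ->  field1="v1", field2="v2", ..., field7="v7"
MAPPING = "nan"                   ->  no "," delimiters, "pattern not found", tag _dissectfailure
```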
My translate.csv file looks like this:
"CustomID","field1,field2,field3,field4,field5,field6,field7"
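As far as I understand the translate plugin docs, a CSV dictionary is read as plain two-column key/value pairs, one entry per line, with no header row, so I assume an entry for a concrete CustomID should look like this (the ID and values are made up for illustration):

```csv
"123456","valueA,valueB,valueC,valueD,valueE,valueF,valueG"
```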
Now in the Logstash log I found the following entries:
[2020-02-10T06:39:50,719][WARN ][org.logstash.dissect.Dissector] Dissector mapping, pattern not found
{"field"=>"MAPPING", "pattern"=>"%{field1},%{field2},%{field3},%{field4},%{field5},%{field6},%{field7}",
"event"=>{"afield1"=>"value", "host"=>"mymachine", "afield2"=>"value", "MAPPING"=>"nan", "afield3"=>"value", and few more fields...
"@version"=>"1", "message"=>"value1;value2;value3;value4;value5;value6;value7;value8;;value10,value11;value12;value13;value14;value15;value16\r", "@timestamp"=>2020-02-10T05:39:47.312Z, "CustomID"=>"123456", and so on ==> "tags"=>["_dissectfailure"]
Why does the mapping fail with "MAPPING"=>"nan" just because one field of my message has no value:
"message" => value8;;value10 (value9 is missing)?
The "CustomID"=>"123456" is present, and the lookup should enrich my data by looking up the CustomID, not field9.
Why does this happen? It makes no sense to me.
Does the translate filter expect all fields and values to exist in the message, even ones that are outside of the key field and have nothing to do with the translate process itself?
Thanks for any help, because this looks like a major issue to me!
Best regards