Hi all,
I'm sending data from a CSV file to Elasticsearch via Logstash, and everything works except for an IP address field that contains a few values that don't conform to the IPv4 format. To handle those, I'm using a "gsub" processor to replace them with a default IP address, followed by a "geoip" processor so I can locate the addresses in a Kibana visualization.

I would like to use these two processors together in an ingest node pipeline rather than with Logstash filter plugins (I tried the "gsub" filter in my Logstash config file and everything was fine, but I want to do it in an ingest node pipeline because I will later be sending data from a Hive table instead of a CSV file). So I think the problem is that the "geoip" processor is applied before the "gsub" processor.
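For reference, the Logstash filter that worked for me looked roughly like this (just a sketch of it; I use the mutate filter's gsub option and my field is called ip_address):

filter {
  mutate {
    # replace the non-IP values with a default address
    gsub => ["ip_address", "na|-1", "0.0.0.0"]
  }
}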
How can I apply the "gsub" processor before the "geoip" processor?
Here is my pipeline:
{
  "mypipeline": {
    "processors": [
      {
        "gsub": {
          "field": "ip_address",
          "pattern": "\\(na|-1\\)",
          "replacement": "0.0.0.0"
        }
      },
      {
        "geoip": {
          "field": "ip_address"
        }
      }
    ]
  }
}
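For testing, I can run the pipeline through the simulate API with a made-up document like this (the sample value below is just for the test):

POST _ingest/pipeline/mypipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "ip_address": "na"
      }
    }
  ]
}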
And this is the error I get while sending the data:
[INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 500 ({"type"=>"exception", "reason"=>"java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: 'na' is not an IP string literal.", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"java.lang.IllegalArgumentException: 'na' is not an IP string literal.", "caused_by" =>{"type"=>"illegal_argument_exception", "reason"=>"'na' is not an IP string literal."}}, "header"=>{"processor_type"=>"geoip"}})
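In case it's relevant, the elasticsearch output in my Logstash config references the pipeline with the pipeline option; it's roughly shaped like this (hosts and index are placeholders):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my_index"
    # run the documents through the ingest node pipeline
    pipeline => "mypipeline"
  }
}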