Failed to start Filebeat: drop_fields processor

Good day,

I have a problem: when I start Filebeat, I get the following error:

And this is my configuration in filebeat.yml:

# ================================= Processors =================================
processors:
  # - add_host_metadata:
  #     when.not.contains.tags: forwarded
  # - add_cloud_metadata: ~
  # - add_docker_metadata: ~
  # - add_kubernetes_metadata: ~
  - drop_fields:
      when:
        network:
          observer.ip: '10.252.132.138'
      fields: ["agent.name", "agent.hostname", "agent.type", "destination.locality"]
      ignore_missing: true

I have a processor to drop certain NetFlow fields, because far too much information is being sent to Elasticsearch and I only need specific fields. The condition is applied on an observer IP, that is, the processor should apply only to data from that IP.

For that reason I kindly ask for your help to solve this case, because it is important, and it can be useful to other people who have the same problem: dropping fields, sending data from a specific IP only, and sending only the fields I need.

Thank you, I hope for your prompt response on this. :frowning:

Hi @Juan_David_Jaramillo!
Are you sure that the problem is actually with the drop_fields processor? Does Filebeat start when this processor is commented out? Could you please provide the Filebeat logs?

That's right. When I comment out the drop_fields block, Filebeat works correctly, but when I run it with drop_fields active, Filebeat does not start and I get the error above.

Hi @Juan_David_Jaramillo!

The error:

ERROR	instance/beat.go:1015	Exiting: Failed to start crawler: starting input failed: Error while initializing input: invalid CIDR address: 10.252.132.138
failed to parse CIDR, values must be an IP address and prefix length, like '192.0.2.0/24' or '2001:db8::/32', as defined in RFC 4632 and RFC 4291.

You should use:

when:
  network:
    observer.ip: '10.252.132.138/32'
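The condition expects a CIDR range, so a single address needs a /32 suffix. Applied to your configuration, the full processor block becomes:

processors:
  - drop_fields:
      when:
        network:
          observer.ip: '10.252.132.138/32'
      fields: ["agent.name", "agent.hostname", "agent.type", "destination.locality"]
      ignore_missing: true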

Thank you very much for your answer. I have a question: is there a way to keep only the fields I need? Dropping fields is tedious because of the number of fields NetFlow emits, while the fields I need to send are few and I already have them mapped. Is there any filter to specify which fields to send to Elasticsearch? I tried the prune filter with a whitelist in Logstash, but it did not work for my case :frowning:

Did you try the include_fields processor?
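A minimal sketch, reusing the condition from your config; the field list here is just an example, replace it with the fields you have mapped (include_fields always keeps @timestamp and type):

processors:
  - include_fields:
      when:
        network:
          observer.ip: '10.252.132.138/32'
      # example field list; replace with the fields you actually need
      fields: ["source.ip", "observer.ip", "network.bytes"]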

Thank you very much, that was just what I needed!

input {
  beats {
    port => 5044
    add_field => { "Tipo" => "netflow-test" }
  }
}

# filter {
#   if [observer][ip] == "10.20.248.34" {
#     mutate {
#       remove_field => [ "[source][locality]", "[source][packet]" ]
#     }
#   }
# }

# filter {
#   prune {
#     interpolate => true
#     whitelist_names => [ "[source][ip]", "[observer][ip]$" ]
#   }
# }

# filter {
#   mutate {
#     remove_field => [ "%{@version}", "%{@type}", "%{agent}", "%{netflow}", "%{network}", "%{fields}" ]
#   }
# }

# filter {
#   mutate {
#     remove_field => ["[agent][id]", "[agent][hostname]", "[agent][name]", "[agent][type]", "[agent][version]"]
#     remove_field => ["[cloud][account][id]", "[cloud][availability_zone]", "[cloud][instance][id]", "[cloud][instance][name]", "[cloud][machine][type]", "[cloud][project][id]", "[cloud][provider]", "[cloud][servi>
#     remove_field => ["[destination][locality]", "[destination][port]", "[ecs][version]", "[event][action]", "[event][category]", "[event][created]", "[event][dataset]", "[event][duration]", "[event][end]", "[even>
#     remove_field => ["[fileset][name]", "[input][type]", "[netflow][exporter][address]", "[netflow][exporter][source_id]", "[netflow][exporter][timestamp]", "[netflow][exporter][uptime_millis]", "[netflow][export>
#     remove_field => ["[agent][ephemeral_id]", "[event][created]", "[event][end]", "[event][start]", "[flow][id]", "[flow][locality]", "[related][ip]", "[service][type]", "[source][bytes]", "[source][locality]", ">
#     remove_field => [ "[netflow][bgp_destination_as_number]", "[netflow][bgp_next_hop_ipv4_address]", "[netflow][bgp_source_as_number]", "[netflow][destination_ipv4_address]", "[netflow][destination_ipv4_prefix_l>
#   }
# }

# filter {
#   mutate { add_field => { "[@metadata][source]" => "%{[source][ip]}" "[@metadata][observer]" => "%{[observer][ip]}" } }
#   prune {
#     whitelist_names => [ "@timestamp", "host" ]
#     add_field => { "[source][ip]" => "%{[@metadata][source]}" "[observer][ip]" => "%{[@metadata][observer]}" }
#   }
# }

Before this, I had made filters in Logstash to delete the fields, because drop_fields wasn't working, but NetFlow has too many fields that I don't need, and it also generates new fields, so it was very hard to manage.
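In case it helps someone who wants to do this on the Logstash side instead, here is the first filter above uncommented as a minimal working version (the IP and field names are just the ones from my attempt; note that the field references inside remove_field must be quoted strings):

filter {
  # drop a couple of unneeded fields, but only for events from one exporter
  if [observer][ip] == "10.20.248.34" {
    mutate {
      remove_field => [ "[source][locality]", "[source][packet]" ]
    }
  }
}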

Thank you very much for your help!

See you soon!