Filebeat netflow module rejects documents with status 400

Hello,

I've updated Filebeat, Elasticsearch, and Kibana from version 8.12.2 to 8.13.0 on Ubuntu. After that, I get an error when the stack tries to ingest NetFlow.

filebeat[2101]: {"log.level":"debug","@timestamp":"2024-03-30T15:34:11.368Z","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client).bulkCollectPublishFails","file.name":"elasticsearch/client.go","file.line":455},"message":"Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2024, time.March, 30, 15, 34, 1, 0, time.UTC), Meta:{\"pipeline\":\"filebeat-8.13.0-netflow-log-pipeline\"}, Fields:{\"agent\":{\"ephemeral_id\":\"03a72809-448e-480e-b803-107a716d60b5\",\"id\":\"75009873-0b59-468c-af43-a32121fbc9f4\",\"name\":\"tv\",\"type\":\"filebeat\",\"version\":\"8.13.0\"},\"destination\":{\"ip\":\"13.109.185.170\",\"locality\":\"external\",\"port\":443},\"ecs\":{\"version\":\"1.12.0\"},\"event\":{\"action\":\"netflow_flow\",\"category\":[\"network\"],\"created\":\"2024-03-30T15:34:02.010420583Z\",\"dataset\":\"netflow.log\",\"duration\":0,\"end\":\"2024-03-30T14:33:30.341Z\",\"kind\":\"event\",\"module\":\"netflow\",\"start\":\"2024-03-30T14:33:30.341Z\",\"type\":[\"connection\"]},\"fileset\":{\"name\":\"log\"},\"flow\":{\"id\":\"fs64I72dWmc\",\"locality\":\"external\"},\"input\":{\"type\":\"netflow\"},\"netflow\":{\"destination_ipv4_address\":\"13.109.185.170\",\"destination_transport_port\":443,\"egress_interface\":0,\"exporter\":{\"address\":\"192.168.1.1:55242\",\"source_id\":0,\"timestamp\":\"2024-03-30T15:34:01Z\",\"uptime_millis\":12494239,\"version\":9},\"flow_end_sys_up_time\":8863580,\"flow_start_sys_up_time\":8863580,\"ingress_interface\":0,\"ip_class_of_service\":0,\"ip_version\":4,\"octet_delta_count\":83,\"packet_delta_count\":1,\"protocol_identifier\":6,\"source_ipv4_address\":\"192.168.1.226\",\"source_transport_port\":57936,\"tcp_control_bits\":24,\"type\":\"netflow_flow\"},\"network\":{\"bytes\":83,\"community_id\":\"1:21jePJZ+BagWDCl5Gcgjfvs8UME=\",\"direction\":\"unknown\",\"iana_number\":6,\"packets\":1,\"transport\":\"tcp\"},\"observer\":{\"ip\":\"192.168.1.1\"},\"related\":{\"ip\":[\"13.109.185.170\",\"192.168.1.226\"]},\"service\":{\"type\":\"netflow\"},\"source\":{\"bytes\":83,\"ip\":\"192.168.1.226\",\"locality\":\"internal\",\"packets\":1,\"port\":57936},\"tags\":[\"forwarded\"]}, Private:interface {}(nil), TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:mapstr.M(nil)}} (status=400): {\"type\":\"document_parsing_exception\",\"reason\":\"[1:191] failed to parse field [destination.ip] of type [ip] in document with id 'IcL_j44BMPHi30hENqqB'. Preview of field's value: '13'\",\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"'13' is not an IP string literal.\"}}, dropping event!","service.name":"filebeat","ecs.version":"1.6.0"}

I've tried deleting and recreating the data stream, the related indices, and the index templates. Then I recreated the index, data stream, and index template using

filebeat setup --index-management

I still get the error.
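
For reference, the cleanup in the Dev Console looks roughly like this (the names are the 8.13.0 defaults created by filebeat setup; adjust if yours differ):

DELETE _data_stream/filebeat-8.13.0
DELETE _index_template/filebeat-8.13.0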

I've also tried to post the document through the Dev Console using

POST filebeat-8.13.0/_doc
{"@timestamp":"2024-03-30T14:33:30.341Z", "agent":{"ephemeral_id":"03a72809-448e-480e-b803-107a716d60b5","id":"75009873-0b59-468c-af43-a32121fbc9f4","name":"tv","type":"filebeat","version":"8.13.0"},"destination":{"ip":"13.109.185.170","locality":"external","port":443},"ecs":{"version":"1.12.0"},"event":{"action":"netflow_flow","category":["network"],"created":"2024-03-30T15:34:02.010420583Z","dataset":"netflow.log","duration":0,"end":"2024-03-30T14:33:30.341Z","kind":"event","module":"netflow","start":"2024-03-30T14:33:30.341Z","type":["connection"]},"fileset":{"name":"log"},"flow":{"id":"fs64I72dWmc","locality":"external"},"input":{"type":"netflow"},"netflow":{"destination_ipv4_address":"13.109.185.170","destination_transport_port":443,"egress_interface":0,"exporter":{"address":"192.168.1.1:55242","source_id":0,"timestamp":"2024-03-30T15:34:01Z","uptime_millis":12494239,"version":9},"flow_end_sys_up_time":8863580,"flow_start_sys_up_time":8863580,"ingress_interface":0,"ip_class_of_service":0,"ip_version":4,"octet_delta_count":83,"packet_delta_count":1,"protocol_identifier":6,"source_ipv4_address":"192.168.1.226","source_transport_port":57936,"tcp_control_bits":24,"type":"netflow_flow"},"network":{"bytes":83,"community_id":"1:21jePJZ+BagWDCl5Gcgjfvs8UME=","direction":"unknown","iana_number":6,"packets":1,"transport":"tcp"},"observer":{"ip":"192.168.1.1"},"related":{"ip":["13.109.185.170","192.168.1.226"]},"service":{"type":"netflow"},"source":{"bytes":83,"ip":"192.168.1.226","locality":"internal","packets":1,"port":57936},"tags":["forwarded"]}

And this one works.
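
(Note: this manual POST doesn't go through the module's ingest pipeline. To push the same document through it, the pipeline parameter can presumably be added, using the pipeline name from the Meta field in the debug log above:)

POST filebeat-8.13.0/_doc?pipeline=filebeat-8.13.0-netflow-log-pipeline
{ ... same document as above ... }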

Other modules are working fine.

I'm not sure where to look to debug the issue from there.

Thanks.

Hello,

I tried to apt purge elasticsearch kibana filebeat, then proceeded with a reinstall; I was still getting the error.

So I tried the Elastic Agent. The messages I get aren't parsed correctly either. They get through this time, but they are in the wrong format.

{
  "_index": ".ds-logs-netflow.log-default-2024.04.01-000001",
  "_id": "bENym44BZjnCou9CEw95",
  "_version": 1,
  "_score": 0,
  "_ignored": [
    "netflow.source_ipv4_address",
    "related.ip",
    "netflow.destination_ipv4_address",
    "source.ip",
    "destination.ip",
    "netflow.ip_next_hop_ipv4_address"
  ],
  "_source": {
    "agent": {
      "name": "tv",
      "id": "ce025a88-8bf6-4fbc-b5a2-0805f98eea72",
      "ephemeral_id": "d287c2bd-8b5d-4a86-aed7-3c36e5a1ecf0",
      "type": "filebeat",
      "version": "8.13.0"
    },
    "destination": {
      "port": 43764,
      "ip": [
        188,
        73,
        230,
        70
      ],
      "locality": "external"
    },
    "elastic_agent": {
      "id": "ce025a88-8bf6-4fbc-b5a2-0805f98eea72",
      "version": "8.13.0",
      "snapshot": false
    },
    "source": {
      "port": 46346,
      "bytes": 652,
      "ip": [
        102,
        151,
        218,
        125
      ],
      "locality": "external",
      "packets": 905
    },
    "error": {
      "message": [
        "array in field [source.ip] should only contain strings"
      ]
    },
    "network": {
      "community_id": "1:+AgPmWFSN+E6ixhi6sawfm/jlI8=",
      "bytes": 652,
      "transport": "tcp",
      "type": "ipv4",
      "packets": 905,
      "iana_number": "6",
      "direction": "external"
    },
    "tags": [
      "netflow",
      "forwarded"
    ],
    "input": {
      "type": "netflow"
    },
    "observer": {
      "ip": [
        "192.168.1.120"
      ]
    },
    "netflow": {
      "source_ipv4_prefix_length": 7,
      "destination_ipv4_prefix_length": 23,
      "packet_delta_count": 905,
      "protocol_identifier": 6,
      "bgp_destination_as_number": 4817,
      "flow_start_sys_up_time": 3270,
      "octet_delta_count": 652,
      "egress_interface": 0,
      "bgp_source_as_number": 61997,
      "type": "netflow_flow",
      "ip_next_hop_ipv4_address": [
        200,
        112,
        44,
        25
      ],
      "destination_ipv4_address": [
        188,
        73,
        230,
        70
      ],
      "source_ipv4_address": [
        102,
        151,
        218,
        125
      ],
      "exporter": {
        "engine_type": 1,
        "uptime_millis": 3478,
        "address": "192.168.1.120:41459",
        "engine_id": 0,
        "version": 5,
        "sampling_interval": 0,
        "timestamp": "2024-04-01T20:55:20.505Z"
      },
      "tcp_control_bits": 0,
      "ip_class_of_service": 0,
      "ingress_interface": 0,
      "flow_end_sys_up_time": 3291,
      "source_transport_port": 46346,
      "destination_transport_port": 43764
    },
    "@timestamp": "2024-04-01T20:55:20.505Z",
    "related": {
      "ip": [
        [
          102,
          151,
          218,
          125
        ],
        [
          188,
          73,
          230,
          70
        ]
      ]
    },
    "ecs": {
      "version": "8.11.0"
    },
    "data_stream": {
      "namespace": "default",
      "type": "logs",
      "dataset": "netflow.log"
    },
    "_tmp_": {
      "observer": {
        "ip": "192.168.1.120"
      }
    },
    "event": {
      "duration": 21000000,
      "agent_id_status": "auth_metadata_missing",
      "ingested": "2024-04-01T20:55:28Z",
      "created": "2024-04-01T20:55:20.506Z",
      "kind": "pipeline_error",
      "start": "2024-04-01T20:55:20.297Z",
      "action": "netflow_flow",
      "end": "2024-04-01T20:55:20.318Z",
      "category": [
        "network"
      ],
      "type": [
        "connection"
      ],
      "dataset": "netflow.log"
    },
    "flow": {
      "locality": "external",
      "id": "YJKcFoUD7Mk"
    }
  },
  "fields": {
    "flow.id": [
      "YJKcFoUD7Mk"
    ],
    "elastic_agent.version": [
      "8.13.0"
    ],
    "event.category": [
      "network"
    ],
    "netflow.exporter.sampling_interval": [
      0
    ],
    "netflow.ip_class_of_service": [
      0
    ],
    "netflow.source_transport_port": [
      46346
    ],
    "netflow.tcp_control_bits": [
      0
    ],
    "netflow.exporter.version": [
      5
    ],
    "netflow.exporter.address": [
      "192.168.1.120:41459"
    ],
    "netflow.bgp_source_as_number": [
      61997
    ],
    "netflow.destination_ipv4_prefix_length": [
      23
    ],
    "agent.name": [
      "tv"
    ],
    "network.community_id": [
      "1:+AgPmWFSN+E6ixhi6sawfm/jlI8="
    ],
    "event.agent_id_status": [
      "auth_metadata_missing"
    ],
    "event.kind": [
      "pipeline_error"
    ],
    "source.packets": [
      905
    ],
    "network.packets": [
      905
    ],
    "netflow.flow_start_sys_up_time": [
      3270
    ],
    "flow.locality": [
      "external"
    ],
    "netflow.source_ipv4_prefix_length": [
      7
    ],
    "input.type": [
      "netflow"
    ],
    "data_stream.type": [
      "logs"
    ],
    "tags": [
      "netflow",
      "forwarded"
    ],
    "agent.id": [
      "ce025a88-8bf6-4fbc-b5a2-0805f98eea72"
    ],
    "source.port": [
      46346
    ],
    "ecs.version": [
      "8.11.0"
    ],
    "event.created": [
      "2024-04-01T20:55:20.506Z"
    ],
    "network.iana_number": [
      "6"
    ],
    "agent.version": [
      "8.13.0"
    ],
    "event.start": [
      "2024-04-01T20:55:20.297Z"
    ],
    "observer.ip": [
      "192.168.1.120"
    ],
    "netflow.type": [
      "netflow_flow"
    ],
    "netflow.exporter.engine_id": [
      0
    ],
    "destination.port": [
      43764
    ],
    "netflow.bgp_destination_as_number": [
      4817
    ],
    "netflow.flow_end_sys_up_time": [
      3291
    ],
    "event.end": [
      "2024-04-01T20:55:20.318Z"
    ],
    "netflow.octet_delta_count": [
      652
    ],
    "agent.type": [
      "filebeat"
    ],
    "event.module": [
      "netflow"
    ],
    "netflow.ingress_interface": [
      0
    ],
    "netflow.packet_delta_count": [
      905
    ],
    "network.bytes": [
      652
    ],
    "elastic_agent.snapshot": [
      false
    ],
    "netflow.exporter.engine_type": [
      1
    ],
    "network.direction": [
      "external"
    ],
    "network.type": [
      "ipv4"
    ],
    "netflow.exporter.uptime_millis": [
      3478
    ],
    "source.bytes": [
      652
    ],
    "destination.locality": [
      "external"
    ],
    "elastic_agent.id": [
      "ce025a88-8bf6-4fbc-b5a2-0805f98eea72"
    ],
    "data_stream.namespace": [
      "default"
    ],
    "netflow.destination_transport_port": [
      43764
    ],
    "netflow.exporter.timestamp": [
      "2024-04-01T20:55:20.505Z"
    ],
    "source.locality": [
      "external"
    ],
    "network.transport": [
      "tcp"
    ],
    "event.duration": [
      21000000
    ],
    "netflow.protocol_identifier": [
      6
    ],
    "event.action": [
      "netflow_flow"
    ],
    "event.ingested": [
      "2024-04-01T20:55:28.000Z"
    ],
    "@timestamp": [
      "2024-04-01T20:55:20.505Z"
    ],
    "error.message": [
      "array in field [source.ip] should only contain strings"
    ],
    "data_stream.dataset": [
      "netflow.log"
    ],
    "event.type": [
      "connection"
    ],
    "agent.ephemeral_id": [
      "d287c2bd-8b5d-4a86-aed7-3c36e5a1ecf0"
    ],
    "_tmp_.observer.ip": [
      "192.168.1.120"
    ],
    "event.dataset": [
      "netflow.log"
    ],
    "netflow.egress_interface": [
      0
    ]
  },
  "ignored_field_values": {
    "source.ip": [
      102,
      151,
      218,
      125
    ],
    "netflow.destination_ipv4_address": [
      188,
      73,
      230,
      70
    ],
    "netflow.source_ipv4_address": [
      102,
      151,
      218,
      125
    ],
    "related.ip": [
      102,
      151,
      218,
      125,
      188,
      73,
      230,
      70
    ],
    "netflow.ip_next_hop_ipv4_address": [
      200,
      112,
      44,
      25
    ],
    "destination.ip": [
      188,
      73,
      230,
      70
    ]
  }
}

As we can see, each IP gets split into an array of numbers, octet by octet.

Is there a way to know why this happens? Is there any file or misconfiguration that might have hung around in there?

@metie Welcome to the community.

Yeah, you've got something weird there... perhaps that is a bug.

Did you update the pipeline?

You should run the following to load everything:

filebeat setup -e

@metie can you provide a few raw netflow events?

Can you share your modules.d/netflow.yml?

I think we need to see them... provide 2 or 3 so we can look...

Is your netflow a supported version?

Netflow Integration

This integration is for receiving NetFlow and IPFIX flow records over UDP. It supports NetFlow versions 1, 5, 6, 7, 8 and 9, as well as IPFIX. For NetFlow versions older than 9, fields are mapped automatically to NetFlow v9.


Hello,

I tried the

filebeat setup

command multiple times, but it's not working.

Here is my very basic netflow.yml:

# Module: netflow
# Docs: https://www.elastic.co/guide/en/beats/filebeat/main/filebeat-module-netflow.html

- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 0.0.0.0
      netflow_port: 2055
      # internal_networks specifies which networks are considered internal or private
      # you can specify either a CIDR block or any of the special named ranges listed
      # at: https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html#condition-network
      internal_networks:
        - private

For debugging the NetFlow input, I'm using nflow-generator from this repo: https://github.com/nerdalert/nflow-generator/tree/master, which I built from source. In my actual setup, I'm using softflowd on an OpenWrt router, with the export version set to 9.
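
If I recall the generator's README correctly, it is pointed at the collector like this (the target host is a placeholder for my collector):

./nflow-generator -t <collector-ip> -p 2055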

If that is not enough, I'll provide a tcpdump of a few NetFlow packets later.

I'm really thinking about reformatting this whole thing, as my guess is that I didn't update the stack correctly.

Hello,

I've proceeded with a full reformat of the disk, installed Debian this time, and installed and configured the Elastic Stack.

I'm still getting error 400 when NetFlow comes in...

At this point, I think something is wrong with the deb package, since I seem to be the only one with this problem.

Thanks.

What does "it's not working" mean? It takes a long time; are you waiting for it to finish?

filebeat setup -e will show when it is done.

Try downloading the Filebeat tar.gz version and giving it a try.
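
Something along these lines should work (standard download URL pattern from elastic.co; adjust the version and architecture to match your setup):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.13.0-linux-x86_64.tar.gz
tar xzvf filebeat-8.13.0-linux-x86_64.tar.gz
cd filebeat-8.13.0-linux-x86_64
./filebeat -e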

If I get a chance, I will try the generator.

Hello,

Sorry for the lack of details; I'm running filebeat setup until it completes. I then validate the creation of the index template + data stream + index + ingest pipeline through Kibana Stack Management.

I can then receive syslog data on a different UDP port, but when NetFlow comes in through UDP 2055, the Filebeat logs show that it can't index the event. Here is the line:

Apr 02 13:57:11 tv filebeat[597]: {"log.level":"warn","@timestamp":"2024-04-02T13:57:11.590Z","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client).bulkCollectPublishFails","file.name":"elasticsearch/client.go","file.line":454},"message":"Cannot index event (status=400): dropping event! Enable debug logs to view the event and cause.","service.name":"filebeat","ecs.version":"1.6.0"}

Following that, I can enable debug logs to see the error shown in the first post.
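
For reference, I enable it with something like this in filebeat.yml (the selector matches the log.logger field in the messages above):

logging.level: debug
logging.selectors: ["elasticsearch"]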

I'm currently trying the tar version of Filebeat. I will report whether this works or not.

Thank you!

So with your / the generator, I am seeing the same thing with all my local installs of 8.13:

{"log.level":"warn","@timestamp":"2024-04-02T07:38:09.246-0700","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client).bulkCollectPublishFails","file.name":"elasticsearch/client.go","file.line":454},"message":"Cannot index event (status=400): dropping event! Enable debug logs to view the event and cause.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2024-04-02T07:38:09.246-0700","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client).bulkCollectPublishFails","file.name":"elasticsearch/client.go","file.line":454},"message":"Cannot index event (status=400): dropping event! Enable debug logs to view the event and cause.","service.name":"filebeat","ecs.version":"1.6.0"}

When I look at the raw message, I see this:

"destination": { "ip": [ 62, 49, 32, 135 ],

Coming from the generator... that is no good... I think you have bad source data.

I will look a bit closer...

Hello,

Just to clarify, the generator is not my code; I meant to say that I built it from source using the instructions.

I've tried the Filebeat from the tar, which I executed using ./filebeat -c /etc/filebeat/filebeat.yml -e.

Since the data from the generator might be wrong, I did a tcpdump of the incoming NetFlow. It is located at http://gros.chat/partage/netflow.pcap. I then exported it using softflowd -r netflow.pcap -n tv:2055 from my router.
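
For reference, a capture along these lines should reproduce mine (the interface is an assumption; adjust it to your setup):

tcpdump -i any -w netflow.pcap udp port 2055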

The warning with status=400 then appears.

Is there a way to downgrade the installation to 8.12.2?

Thank you

You can downgrade just Filebeat; Elasticsearch and Kibana cannot be downgraded.

How to downgrade depends on how you installed it. If you used rpm or deb, you need to look at how to downgrade using your system's package management.
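
On a deb-based system, something like this should do it (assuming the Elastic apt repository is configured; the hold prevents an accidental re-upgrade):

sudo apt-get install filebeat=8.12.2
sudo apt-mark hold filebeat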

I would recommend that you downgrade Filebeat to 8.12.2; looking at the release notes for 8.13, there is a breaking change related to the netflow input.

  • Convert netflow input to API v2 and disable event normalisation. 37901

Not sure exactly what was changed, but it is a breaking change.

Also, 8.13.1 was released today, one week after 8.13.0, but there are no release notes for it yet.


Hello,

I've successfully downgraded Filebeat to 8.12.2. I confirm this resolved my problem.

Thanks a lot for your time.

I would also suggest that you open an issue because this is a breaking change and it is not clear how the user should proceed.

Also, the log you shared from when using the Elastic Agent shows wrong parsing; maybe this change broke that too? Not sure.


I opened this issue: https://github.com/elastic/beats/issues/38703

Hope everything is right.


@metie Thanks for filing that.
@leandrojmp Nice Catch!

And for validation, I just ran Filebeat 8.12.1 + Elasticsearch 8.13.0 + the generator, and it works.

I am experiencing the same issue reported here with the formatting of IP addresses in the Netflow module of the Elastic Agent, even after upgrading to version 8.13.2. The IPs are arriving fragmented into arrays, such as ["0.0.0.10", "0.0.0.190", "0.0.0.181", "0.0.0.90"], instead of being presented in a proper unified format.

As a temporary workaround, I implemented a script processor in the Elastic Agent configuration via Fleet. The script joins the parts of the IP to restore the appropriate format. Here is the script I am using:

- script:
    lang: javascript
    source: >
      function process(event) {
        // ECS and NetFlow fields that may arrive as an array of octets instead of an IP string.
        var fields = ["netflow.source_ipv4_address", "netflow.destination_ipv4_address", "netflow.ip_next_hop_ipv4_address", "related.ip", "source.ip", "destination.ip"];

        for (var i = 0; i < fields.length; i++) {
          var ipParts = event.Get(fields[i]);
          // Rejoin a 4-element octet array into a dotted-quad string,
          // e.g. [188, 73, 230, 70] -> "188.73.230.70".
          if (ipParts instanceof Array && ipParts.length === 4) {
            var completeIp = ipParts.join(".");
            event.Put(fields[i], completeIp);
          }
        }
      }

I hope this can help others facing the same issue. It would be great to hear if anyone has found a definitive solution or if there is any fix planned for an upcoming update.

Hello,

There is a pull request that resolves this issue. I don't know about the release cycle, but it is getting fixed for sure.

Thanks.

