Dropped Netflow packets in filebeat

Hi,

I'm ingesting NetFlow traffic with Filebeat's netflow module for the first time, and I think packets are being dropped. I'm wondering whether there is anything I can do to reduce or eliminate the drops.
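
For reference, the module is configured through modules.d/netflow.yml, roughly along these lines (the listen address and port below are placeholders, not necessarily my exact values):

- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 0.0.0.0   # address the UDP listener binds to
      netflow_port: 2055      # port the exporters send flows to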

I started Filebeat with filebeat -e so I could watch the stats on my screen. The final stats are as follows.

"filebeat": {
   "events": {
      "added": 430866,
      "done": 430866
   },
   "harvester": {
      "open_files": 0,
      "running": 0
   },
   "input": {
      "netflow": {
         "flows": 430866,
         "packets": {
            "dropped": 854349,
            "received": 710962
         }
      }
   }
}

Since dropped is non-zero, I assume I'm losing some of my NetFlow traffic. I also noticed libbeat.pipeline.queue.max_events=4096 in the stats. I set var.queue_size to 8192 in netflow.yml, but after restarting Filebeat the stats still show libbeat.pipeline.queue.max_events=4096.
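
If I'm reading the docs correctly, var.queue_size only sizes the netflow input's internal packet buffer, while libbeat.pipeline.queue.max_events reflects the memory queue configured in filebeat.yml. I assume raising that would look roughly like this (the numbers are placeholders I haven't validated):

# filebeat.yml (assumed sizes, not tested recommendations)
queue.mem:
  events: 65536           # should show up as libbeat.pipeline.queue.max_events
  flush.min_events: 2048  # events per batch handed to the output
  flush.timeout: 1s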

My server has 48 cores and 125 GB of memory. I've set the Elasticsearch heap to 64 GB (not sure whether that helps). I'm also using HDDs instead of SSDs, which I know limits my I/O performance.

What other things can I do to minimize the dropped packets?

Thank you!


Using HDDs is most likely a limiter on the Elasticsearch side. We use ElastiFlow (the new one), and its creator has a video showing how badly HDDs perform for Elasticsearch.

We also tried Filebeat, but ElastiFlow had much better throughput and more features.
