What would be the best way to bulk ingest Netflow and Zeek logs into Elasticsearch so that I can access them in the SIEM app in Kibana? I'm looking to ingest several TBs of logs, and I also plan to bulk ingest other pcap, auth log, and DNS traffic. Can I bypass Logstash and ingest directly into Elasticsearch?
Logstash is not necessary; you can do everything with Filebeat.
Zeek is easy, as you can point Filebeat's Zeek module at the log file(s).
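For reference, a minimal sketch of what the module configuration might look like (the paths are assumptions; point them at your actual Zeek log directory, and enable whichever filesets you need):

```yaml
# modules.d/zeek.yml -- enable with: filebeat modules enable zeek
# Paths below are examples; adjust to where your Zeek logs actually live.
- module: zeek
  connection:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/conn.log"]
  dns:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dns.log"]
```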
Netflow is a bit trickier if it's stored in pcap files: you need to replay the pcaps with an external program so that Filebeat's netflow module receives the packets via UDP, and you need to make sure you aren't replaying them so fast that you cause packet loss.
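To make the idea concrete, here is a rough sketch of such a replayer in Python: it reads a classic pcap (Ethernet/IPv4/UDP only), strips the headers, and sends each UDP payload to the netflow listener at a throttled rate. The host/port and pps values are assumptions (2055 matches the netflow module's default port); a real tool like udpreplay or tcpreplay is more robust.

```python
import socket
import struct
import time
from typing import BinaryIO, Iterator

def iter_udp_payloads(f: BinaryIO) -> Iterator[bytes]:
    """Yield UDP payloads from a classic pcap stream (Ethernet/IPv4/UDP only)."""
    magic = f.read(4)
    if magic == b"\xd4\xc3\xb2\xa1":
        endian = "<"            # little-endian pcap
    elif magic == b"\xa1\xb2\xc3\xd4":
        endian = ">"            # big-endian pcap
    else:
        raise ValueError("not a classic pcap file")
    f.read(20)                  # skip the rest of the 24-byte global header
    while True:
        hdr = f.read(16)        # per-packet record header
        if len(hdr) < 16:
            return
        _ts_sec, _ts_usec, incl_len, _orig_len = struct.unpack(endian + "IIII", hdr)
        pkt = f.read(incl_len)
        if len(pkt) < 14 + 20 + 8:
            continue            # too short for Ethernet + IPv4 + UDP
        if pkt[12:14] != b"\x08\x00":
            continue            # EtherType is not IPv4
        ip = pkt[14:]
        ihl = (ip[0] & 0x0F) * 4
        if ip[9] != 17:
            continue            # IP protocol is not UDP
        udp_len = struct.unpack("!H", ip[ihl + 4:ihl + 6])[0]
        yield ip[ihl + 8:ihl + udp_len]  # skip the 8-byte UDP header

def replay(path: str, host: str = "127.0.0.1", port: int = 2055,
           pps: float = 1000.0) -> int:
    """Send each UDP payload to the netflow listener at roughly pps packets/s."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    with open(path, "rb") as f:
        for payload in iter_udp_payloads(f):
            sock.sendto(payload, (host, port))
            sent += 1
            time.sleep(1.0 / pps)  # crude pacing; lower pps if drops appear
    return sent
```

The pacing here is deliberately simple; the point is only that the replay rate is under your control.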
Finding the right rate takes a bit of trial and error. It's really useful to enable netflow debugging (-d netflow) and watch the stats it prints every second:
Stats total:[ packets=nnn dropped=nnn flows=nnn queue_len=nnnn ] delta:[ packets/s=nnn dropped/s=nnnn flows/s=nnn queue_len/s=nnn ]
You need to find a replay rate at which dropped stays at zero and the queue doesn't grow too much (the default queue size is 8192 packets; beyond that it starts dropping).
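If you're tuning the rate in a loop, a small helper that parses those debug lines can save squinting at the output. This is a sketch based only on the stats format shown above; the key names and the 8192 default are taken from it, while the "half-full queue" threshold is just an illustrative heuristic:

```python
import re

# Matches key=value pairs such as packets=1000 or queue_len=12.
STATS_RE = re.compile(r"(\w+)=(\d+)")

def parse_stats(line: str) -> dict:
    """Extract the total counters from a Filebeat netflow debug stats line."""
    total, _, _delta = line.partition("delta:")  # ignore the per-second deltas
    return {key: int(value) for key, value in STATS_RE.findall(total)}

def replay_too_fast(line: str, queue_size: int = 8192) -> bool:
    """True if packets were dropped or the queue is more than half full."""
    stats = parse_stats(line)
    return stats.get("dropped", 0) > 0 or stats.get("queue_len", 0) > queue_size // 2
```

Feed it each stats line as it appears; if it ever returns True, back off the replay rate.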
We should consider adding a way to ingest PCAP files for Netflow.
Also make sure to tune Filebeat for maximum output performance; see the Elastic blog post "How to Tune Elastic Beats Performance: A Practical Example with Batch Size, Worker Count, and More".
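The knobs that post walks through live in filebeat.yml; the values below are only illustrative starting points, not recommendations, so benchmark against your own cluster:

```yaml
# filebeat.yml -- illustrative starting values only; tune per the blog post.
queue.mem:
  events: 65536            # in-memory event queue size
  flush.min_events: 2048   # batch size handed to the output
  flush.timeout: 1s
output.elasticsearch:
  hosts: ["https://localhost:9200"]
  worker: 4                # parallel bulk indexing workers
  bulk_max_size: 2048      # events per bulk request
```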
For replaying Netflow UDP I'm using rigtorp/udpreplay on GitHub, which replays UDP packets from a pcap file in loopback mode.