Filebeat doesn't catch up with Suricata eve.json

Hello ELK community,

I am fairly new to the ELK stack. I am trying to set up an IDS with Suricata and ELK, and the initial setup went pretty well, but I realized that the Suricata events from eve.json are not reaching Elasticsearch in time. For the first 20-30 minutes everything looks fine, but then the data falls further and further behind: the Last 15 minutes filter in Kibana shows nothing, so I switch to the Last 30 minutes filter, until that shows nothing any more either, and so on.

I deleted the contents of the Suricata eve.json and restarted Suricata and Filebeat, but unfortunately the same behavior occurs 20-30 minutes later, whereas Packetbeat seems to do its job perfectly.
Maybe someone can point me in the right direction.

Thanks

Here is my config:

/etc/filebeat/filebeat.yml:

filebeat.inputs:

- type: log
  enabled: false
  paths:
    - /var/log/*.log

- type: filestream
  enabled: false
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:
  host: "192.168.1.41:5601"

output.elasticsearch:
  hosts: ["192.168.1.41:9200"]
  pipeline: geoip-info

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

logging.level: info

/etc/filebeat/modules.d/suricata.yml:

# Module: suricata
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-suricata.html

- module: suricata
  # All logs
  eve:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/suricata/eve.json"]

/etc/elasticsearch/elasticsearch.yml:

cluster.name: suricata
node.name: suricata01
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node

/etc/kibana/kibana.yml:

server.port: 5601
server.host: "192.168.1.41"
elasticsearch.hosts: ["http://192.168.1.41:9200"]

While the issue here could be Filebeat itself, it is more likely that Elasticsearch can't keep up with the volume of data being sent and back pressure is preventing Filebeat from sending data faster.

What are the specs of your Elasticsearch cluster?
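
In the meantime, one way to check whether Filebeat itself is the bottleneck is to expose its stats endpoint and watch the output counters. This is only a sketch, assuming a default Filebeat 7.x install; the lines below are temporary diagnostic additions to filebeat.yml, not required settings:

# temporary diagnostics in /etc/filebeat/filebeat.yml
http.enabled: true
http.host: localhost
http.port: 5066

Then query http://localhost:5066/stats with curl and compare how fast the libbeat output event counters grow against the rate at which Suricata writes to eve.json.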

Hi Rob,

I am running ELK and Suricata on the same hardware, a Dell PowerEdge 620:

  • CPU: 40 CPUs, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
  • RAM: 152 GB
  • Disk: 2 TB SSDs

What is the JVM heap size allocated to Elasticsearch?
With the amount of RAM you have, you should set the heap size to 31g.
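
For example, a minimal sketch assuming a package install with the config under /etc/elasticsearch (on newer 7.x releases you can also drop a file into jvm.options.d/ instead of editing jvm.options directly):

# heap settings in /etc/elasticsearch/jvm.options
-Xms31g
-Xmx31g

Keep -Xms and -Xmx identical so the heap is never resized at runtime, and restart Elasticsearch for the change to take effect.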

What is the ingest rate? (you can see this in Stack Management)
Your CPUs are from 2014, so while you have 20 real cores, they aren't the most powerful cores. Since you also need headroom to query data, this server will likely not be suitable for event rates exceeding 6-8K/sec. Even less if Suricata is on the same server and processing that much network traffic.

Related to event rates, you may need to increase the worker setting for Filebeat's elasticsearch output. This defaults to 1, which will limit you to only a couple thousand events per second. Try setting it to 4 and see if that helps keep up with the eve log file.

output.elasticsearch:
  hosts: ["192.168.1.41:9200"]
  pipeline: geoip-info
  worker: 4

BTW, the E5-2660 v2 is a 10-core processor, so with 2 sockets you have 20 cores, not 40. Hyperthreading does help, but only in the area of 10%. For sizing purposes only real cores matter. Related to this, cloud vCPUs include hyperthreads, so when a workload needs 8 real cores, you would need to give it 16 vCPUs in the cloud.

Indeed, I never touched the heap size, so it was probably still the default. I have just increased it to 31g in the jvm.options file and increased the Filebeat workers to 4. Thanks for the hint. Where exactly can I find the ingest rate? Sorry, but I couldn't find it!

Well, yes, the hardware is pretty old, but it should be enough for our scenario. At the moment it is just acting as an IDS for a few specific protocols; for anything more we would need different hardware.

You need to have monitoring enabled and then go to Stack Monitoring in Kibana. Then find the index where the data is being written and you will find a chart for Ingest Rate.
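
If monitoring is not enabled yet, a minimal sketch for a single-node 7.x setup, assuming the legacy self-monitoring collection rather than Metricbeat-based monitoring:

# /etc/elasticsearch/elasticsearch.yml
xpack.monitoring.collection.enabled: true

This is also a dynamic cluster setting, so it can be toggled through the cluster settings API without a restart; once it is on, the per-index charts mentioned above should appear in Stack Monitoring.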

The index rate is 1,052.97/s.
