Strange behavior of Filebeat with a small number of filtered events

Hi,
I am using Filebeat to ship only the Java exception stack traces contained in the log files.
To test the feasibility, I ran Filebeat on the host machine and the ELK stack on another server.
I ran the simulation with 13 files under two scenarios: one where the number of exception events is low, and one where it is very high.

The machine on which I ran the simulation has the following configuration:
4 CPUs (@ 3.0 GHz) with 16 GB RAM.

In the first run I used the following Filebeat configuration:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /work/filebeatTesting/dummyGenerator/LogFileGene/*/LOGS/*.log
  include_lines: ['\n']
  multiline.pattern: ^[0-9]
  multiline.negate: true
fields:
  env: finaldemo
  log_type: application
output.logstash:
  hosts: ["14.28.54.161:5044"]

The results show the following behaviour:

Id   Scenario                Exception Events   Time Taken     Total Events   Memory Usage %
1    Low exception count     6465               ~13 minutes    18213997       5
2    High exception count    709001             ~4.5 minutes   15479214       5

As I am only interested in the exception events, I have filtered out all the other events using the filtering options in the Filebeat configuration file.

I then tweaked the internal queue settings in the Filebeat configuration file:

queue.mem:
  events: 4096
  flush.min_events: 200
  flush.timeout: 45s

With these settings I got the following results:

Id   Scenario                Exception Events   Time Taken     Total Events   Memory Usage %
1    Low exception count     6465               ~5 minutes     18213997       10
2    High exception count    709001             ~4.5 minutes   15479214       5

While going through the documentation, I found the following:

If the Beat sends single events, the events are collected into batches. If the Beat publishes a large batch of events (larger than the value specified by bulk_max_size), the batch is split.

So I am still wondering: if the Beat is sending single events, what will the size of the batch be? Will it be 1, or some bigger number?

  1. How does the handling of the output and the internal queue take place?
  2. Why is memory usage growing when I have a small number of events to dispatch?

Thanks and Regards

This should explain it a bit:

#queue:
  # Queue type by name (default 'mem')
  # The memory queue will present all available events (up to the outputs
  # bulk_max_size) to the output, the moment the output is ready to serve
  # another batch of events.
  #mem:
    # Max number of events the queue can buffer.
    #events: 4096

    # Hints the minimum number of events stored in the queue,
    # before providing a batch of events to the outputs.
    # A value of 0 (the default) ensures events are immediately available
    # to be sent to the outputs.
    #flush.min_events: 2048

    # Maximum duration after which events are available to the outputs,
    # if the number of events stored in the queue is < min_flush_events.
    #flush.timeout: 1s

As the flush timeout is 1s by default, the number of events sent together will be >=1 depending on how much data is read.
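
To make that concrete, here is the queue block again with the batching behaviour spelled out in comments (the values are simply the ones from the reference snippet above, not recommendations):

queue.mem:
  events: 4096            # total capacity of the in-memory queue
  flush.min_events: 2048  # a batch is handed to the output once this many events are buffered...
  flush.timeout: 1s       # ...or once this timeout expires, whichever happens first

With your settings (flush.min_events: 200, flush.timeout: 45s) the Logstash output therefore gets a batch after 200 events or after 45 seconds, whichever comes first.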

You mention bulk_max_size, but it seems that you are using Logstash as the output? Be aware that not all outputs have the same configuration options.
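
For the Logstash output, the batch size is controlled by its own bulk_max_size setting. A minimal sketch (the value here is only an illustration; please check the documentation of your Filebeat version for the actual default):

output.logstash:
  hosts: ["14.28.54.161:5044"]
  bulk_max_size: 2048   # maximum number of events per batch sent to Logstash;
                        # larger batches are split (example value, not a verified default)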

If small batches are being sent out, I assume the memory is growing because Filebeat is lagging behind and the queue is filling up. But for more details we would need to have a look at the debug log.
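
If you want to dig into that yourself, a logging setup along these lines (the path is just an example) writes the debug output to a file; the "publish" selector limits the debug noise to the publishing pipeline:

logging.level: debug
logging.selectors: ["publish"]
logging.to_files: true
logging.files:
  path: /var/log/filebeat   # example path
  name: filebeat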
