Filebeat: logstash.pipelining: how does it work?

Hello,

The documentation is pretty succinct on the logstash.pipelining parameter (and Google searches mostly return articles about LS and ES pipelines), so I'd like to know how these three parameters interact:

  • logstash.pipelining
  • filebeat.spool_size
  • logstash.bulk_max_size
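
For context, here is roughly where these sit in filebeat.yml (a minimal sketch for 5.x; the paths and values are illustrative, not recommendations):

```yaml
filebeat.spool_size: 2048        # max events flushed to the output at once
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/*.log           # illustrative input path

output.logstash:
  hosts: ["localhost:5044"]      # illustrative host
  pipelining: 3                  # batches in flight before waiting for ACKs
  bulk_max_size: 2048            # max events per request to Logstash
```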

Should one assume that, if pipelining is enabled, filebeat.spool_size must equal logstash.pipelining * logstash.bulk_max_size? In other words, what is the size of one pipelined request "chunk"?

Also, any recommendations for the value of logstash.pipelining?

Thanks,
MG

See the Wikipedia article on pipelining to get an idea of what it is for.

The Logstash output internally uses some windowing, starting with 10 events and growing exponentially up to bulk_max_size. If pipelining is enabled, these 'windowed' batches are pipelined.
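
To illustrate the slow-start behaviour, here is a sketch in Go (my own illustration, not the actual Beats code; all names are made up):

```go
package main

import "fmt"

// growWindow doubles the window until it hits bulkMaxSize, mimicking the
// exponential growth described above. Illustration only.
func growWindow(window, bulkMaxSize int) int {
	window *= 2
	if window > bulkMaxSize {
		window = bulkMaxSize
	}
	return window
}

func main() {
	const bulkMaxSize = 2048 // logstash.bulk_max_size
	window := 10             // initial window of 10 events
	for window < bulkMaxSize {
		fmt.Println("send windowed batch of", window, "events")
		window = growWindow(window, bulkMaxSize)
	}
	fmt.Println("steady state: windowed batches of", window, "events")
}
```

With pipelining enabled, up to `pipelining` of these windowed batches are in flight at once, instead of each one waiting for the previous ACK.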

The spool_size setting (removed in 6.0 beta1) is the maximum number of events pushed to the output on flush. That batch is split into sub-batches of up to bulk_max_size events, which are in turn split according to the current window size. In 6.0 we will remove the spooler in favour of fully asynchronous sends.
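
To make the two-level split concrete, another small sketch (again my own illustration, with hypothetical numbers):

```go
package main

import "fmt"

// splitBatch cuts a batch of n events into chunks of at most size events.
// Illustration only; the real spooler/output code is more involved.
func splitBatch(n, size int) []int {
	var chunks []int
	for n > 0 {
		c := size
		if n < size {
			c = n
		}
		chunks = append(chunks, c)
		n -= c
	}
	return chunks
}

func main() {
	const (
		spoolSize   = 4096 // filebeat.spool_size
		bulkMaxSize = 2048 // logstash.bulk_max_size
		window      = 512  // current window size (grows over time)
	)
	// First split: one spool flush into bulk_max_size sub-batches.
	for _, bulk := range splitBatch(spoolSize, bulkMaxSize) {
		// Second split: each sub-batch into window-sized sends; these
		// window-sized sends are what get pipelined.
		fmt.Println("sub-batch of", bulk, "events ->", splitBatch(bulk, window))
	}
}
```

So, as I read it, there is no hard requirement that spool_size equal pipelining * bulk_max_size; the splitting happens regardless of how the three values relate.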

I found both 3 and 5 for pipelining to improve throughput at times. Bigger values don't gain you much.

