Regarding Pipeline Batch Sizing

We use Beats to ship most of our logs to Logstash with the following output settings:

  • bulk_size: 2048
  • load_balanced: yes
  • workers: one per Logstash instance
  • pipelining: 2
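
For reference, the settings above roughly correspond to this fragment of the Filebeat `output.logstash` section (a sketch only — the hosts are placeholders, and note that in Filebeat's own config the option names are `bulk_max_size` and `loadbalance` rather than `bulk_size` and `load_balanced`):

```yaml
output.logstash:
  hosts: ["logstash-1:5044", "logstash-2:5044"]  # placeholder hosts
  loadbalance: true        # distribute batches across all listed hosts
  worker: 2                # workers per host; total workers = hosts x worker
  bulk_max_size: 2048      # max events per batch sent to Logstash
  pipelining: 2            # async batches in flight per connection
```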

In Logstash we run 8 pipeline workers on 2 cores with a 4 GB heap. My question: does Logstash's internal batch size need to correlate with the `bulk_size` a Beat is sending? That is, does it need to be at least as large?
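
For context, the Logstash side of this is set in `logstash.yml` (batch/worker settings) and `jvm.options` (heap). A sketch matching the numbers above — the batch-size value shown is Logstash's default of 125, not a recommendation:

```yaml
# logstash.yml
pipeline.workers: 8        # batch-processing threads (here, 8 on 2 cores)
pipeline.batch.size: 125   # events each worker collects per batch (default)
pipeline.batch.delay: 50   # ms to wait for a batch to fill before flushing
```

```
# jvm.options
-Xms4g
-Xmx4g
```

Roughly, worst-case in-flight events are `pipeline.workers * pipeline.batch.size`, which is the figure to weigh against heap size when deciding whether to raise the batch size toward the Beats `bulk_size`.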