How to configure the S3 input to scale?

We have three Filebeat instances running with an S3/SQS input. From what I can tell, each instance will only pull 10 SQS messages at a time. We are not bottlenecked on CPU or RAM, so what do I need to configure to process more messages simultaneously?

My Elasticsearch output is:

```yaml
  compression_level: 5
  worker: 30
  bulk_max_size: 500
```
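For the input side, a minimal sketch of where the per-instance limit is configured, assuming a Filebeat version that exposes the aws-s3 input's `max_number_of_messages` option (check the docs for your release, since this option's exact semantics have changed across versions; the queue URL below is a placeholder):

```yaml
filebeat.inputs:
  - type: aws-s3
    # Placeholder queue URL -- substitute your own.
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
    # Maximum SQS messages handled per instance. In older releases this maps
    # onto the ReceiveMessage MaxNumberOfMessages parameter, which AWS caps at 10.
    max_number_of_messages: 10
```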

You can scale Filebeat horizontally by running multiple instances. See more:

Hi @Ronin, the limit of 10 comes from the `ReceiveMessage` AWS API call that Filebeat makes to SQS. That call has a parameter limiting the maximum number of messages to return, and the Filebeat S3 input sets it to 10.

In the AWS API this value defaults to 1 and is capped at 10, so my understanding is that we can't go higher than 10:

> The maximum number of messages to return. Amazon SQS never returns more messages than this value (however, fewer messages might be returned). Valid values: 1 to 10. Default: 1.
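Since the per-call cap is fixed, horizontal scaling is the main lever: total parallelism is bounded by instances times the per-call message count. A back-of-envelope sketch (plain Python; `max_inflight` is a hypothetical helper for illustration, not a Filebeat or AWS API):

```python
# Hard AWS limit on ReceiveMessage's MaxNumberOfMessages parameter.
SQS_MAX_PER_RECEIVE = 10

def max_inflight(instances: int, per_call: int) -> int:
    """Upper bound on SQS messages being processed at once across all instances."""
    # AWS only accepts values in 1..10, so anything higher is effectively clamped.
    per_call = max(1, min(per_call, SQS_MAX_PER_RECEIVE))
    return instances * per_call

print(max_inflight(3, 10))  # 3 Filebeat instances -> at most 30 messages in flight
print(max_inflight(3, 50))  # asking for 50 per call still clamps to 10, so also 30
```

So to process more messages simultaneously, adding instances (or pollers, where the input supports them) is the way to raise the bound, not increasing the per-call value past 10.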

Please see for more info. Thanks!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.