How do I configure the s3 input to scale?

We have 3 Filebeat instances running with an s3/SQS input. From what I can tell, each instance will only pull 10 SQS messages at a time. We are not bottlenecked on CPU or RAM, so what do I need to configure so that it processes more messages simultaneously?
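
For reference, a minimal sketch of the kind of s3/SQS input configuration being described (the queue URL and timeout are placeholders, not our actual values):

filebeat.inputs:
- type: s3
  # placeholder queue URL for illustration
  queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/example-queue
  # how long a received message stays hidden from other consumers
  visibility_timeout: 300s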

My Elasticsearch output is:

output.elasticsearch:
  compression_level: 5
  worker: 30
  bulk_max_size: 500

You can scale Filebeat horizontally by running multiple instances of it. See the parallel processing notes for more: https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-input-s3.html#_parallel_processing
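
As an aside not raised above, offered as a sketch assuming Filebeat 7.x defaults: with worker: 30 and bulk_max_size: 500, the output can have up to 15,000 events in flight, but the internal memory queue defaults to 4096 events, so the 30 workers can never all be kept busy. Sizing the queue to match is worth checking (the number below is illustrative):

queue.mem:
  # sized to fill worker (30) * bulk_max_size (500) batches
  events: 15000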

Hi @Ronin, the limit of 10 comes from the ReceiveMessage API call we make to SQS. There is a parameter in this API that limits the maximum number of messages returned per call, and in the Filebeat S3 input it defaults to 10.

In the AWS API, this value defaults to 1 with a maximum of 10, so my understanding is that we can't go higher than 10:

"The maximum number of messages to return. Amazon SQS never returns more messages than this value (however, fewer messages might be returned). Valid values: 1 to 10. Default: 1."

Please see https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html for more info. Thanks!
