Increase performance for the PubSub module in Filebeat

How can I improve performance for the Google Cloud module in Filebeat? I'm running the Google Cloud module in Filebeat (https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-googlecloud.html), and my event rate is very low.

Any thoughts on how to tune it? Setting bulk_max_size or worker doesn't help at all.

I set this up, but it didn't help:

output.elasticsearch:
  bulk_max_size: 3200
  worker: 2
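
From what I understand, bulk_max_size and worker need the memory queue sized to match before they have much effect; here's a minimal sketch of that pairing (the queue values are derived from the settings above, and were not in my config at this point):

#2 * worker * bulk_max_size = 2 * 2 * 3200
queue.mem.events: 12800
#match bulk_max_size
queue.mem.flush.min_events: 3200

output.elasticsearch:
  bulk_max_size: 3200
  worker: 2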

Hi @daniel_a, what does your Filebeat config look like?

Maybe @andrewkroh can help here :slightly_smiling_face:

cloud.id: "${CLOUD_ID}"
cloud.auth: "${CLOUD_AUTH}"

filebeat.config.modules:
  path: /etc/filebeat/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 1
  index.codec: best_compression

#flow__logs_template has 6 shards
setup.template.name: "flow__logs_template"
setup.template.overwrite: true

tags: ["flow_logs"]

#How to Tune Elastic Beats Performance: A Practical Example with Batch Size, Worker Count, and More
#  https://www.elastic.co/blog/how-to-tune-elastic-beats-performance-a-practical-example-with-batch-size-worker-count-and-more

#set queue.mem.events to 2 * workers * batch size
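#here: 2 * 24 (workers) * 2400 (batch size) = 115200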
queue.mem.events: 115200
#set queue.mem.flush.min_events to batch size
queue.mem.flush.min_events: 2400

output.elasticsearch:
  bulk_max_size: 2400
  worker: 24
  compression_level: 9
  indices:
    - index: "vpc_flow_logs"
      when.contains:
        event.type: "flow"

++ @andrewkroh

I can barely get above 6,000 EPS with Filebeat and the googlecloud module. The best I managed was 6,487 EPS, and the server's CPU utilization was only about 25% :disappointed:

I started with the blog post https://www.elastic.co/blog/how-to-tune-elastic-beats-performance-a-practical-example-with-batch-size-worker-count-and-more. After testing many different combinations, I settled on the following config:

#set queue.mem.events to 2 * workers * batch size
queue.mem.events: 115200
#set queue.mem.flush.min_events to batch size
queue.mem.flush.min_events: 2400

output.elasticsearch:
  bulk_max_size: 2400
  worker: 24
  #this is optional, I tested it with 0, 5, and finally with 9
  compression_level: 9

I then looked at the googlecloud module (pubsub connection), and I added the following configs:

subscription.max_outstanding_messages: 120000
subscription.num_goroutines: 32

For the googlecloud module, I also tested different values (large numbers, and even -1), some of which are described here: https://github.com/googleapis/google-cloud-go/wiki/Fine-Tuning-PubSub-Receive-Performance#subscriptionreceivesettingsmaxoutstandingmessages.
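
For reference, here's a minimal sketch of where these settings sit when running the standalone google-pubsub input instead of the module (the project, topic, subscription, and credentials values are placeholders, not my real ones):

filebeat.inputs:
  - type: google-pubsub
    project_id: my-gcp-project                     #placeholder project ID
    topic: vpc-flow-logs                           #placeholder topic
    subscription.name: filebeat-vpc-flow           #placeholder subscription
    subscription.num_goroutines: 32
    subscription.max_outstanding_messages: 120000
    credentials_file: /etc/filebeat/gcp-sa.json    #placeholder credentials path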

Still, I can't get anything above 6,487 EPS. Any thoughts?

Is there a way to measure Filebeat's performance after each configuration change, other than looking at the Kibana metrics dashboard?
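
One option I've been considering (treat this as a sketch, not something from the blog post): enable Filebeat's built-in HTTP monitoring endpoint and poll its stats between runs:

#expose Filebeat's internal metrics on a local HTTP endpoint
http.enabled: true
http.host: localhost
http.port: 5066

Then hit http://localhost:5066/stats with curl and compare the libbeat.pipeline.events counters before and after each change.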
