Max TPS achieved by Filebeat to Azure Event Hub: 600; anything more than that ends up in a broken pipe

Hi,
I am using the Kafka output in Filebeat to write logs to an Azure Event Hub.
The maximum throughput I have ever achieved is about 700 TPS; anything beyond that just ends in broken pipes and my throughput drops drastically. I can't even work with more than 5 partitions.

I get the following messages in a loop:

2019-07-24T18:49:36.370Z INFO kafka/log.go:53 client/metadata fetching metadata for [trident-32] from broker *.servicebus.windows.net:9093
2019-07-24T18:49:36.403Z INFO kafka/log.go:53 producer/leader/trident-32/22 selected broker 0
2019-07-24T18:49:36.403Z INFO kafka/log.go:53 producer/leader/trident-32/10 selected broker 0
2019-07-24T18:49:36.403Z INFO kafka/log.go:53 producer/broker/0 state change to [open] on trident-32/10
2019-07-24T18:49:36.403Z INFO kafka/log.go:53 producer/leader/trident-32/22 state change to [flushing-2]
2019-07-24T18:49:36.403Z INFO kafka/log.go:53 producer/leader/trident-32/10 state change to [flushing-2]
2019-07-24T18:49:36.403Z INFO kafka/log.go:53 producer/leader/trident-32/22 state change to [normal]
2019-07-24T18:49:36.503Z INFO kafka/log.go:53 producer/leader/trident-32/20 selected broker 0
2019-07-24T18:49:36.541Z INFO kafka/log.go:53 kafka message: Successful SASL handshake

My Filebeat configuration is:

filebeat.inputs:
- input_type: log
  paths:
    - /mnt/trident/logs/test*.log

output.kafka:
  topic: trident-32
  required_acks: 1
  client_id: filebeat
  version: '1.0.0'
  hosts:
    - "*.servicebus.windows.net:9093"
  ssl.enabled: true
  username: "$ConnectionString"
  password: "Endpoint=sb://*.servicebus.windows.net/;SharedAccessKeyName=accessAll;SharedAccessKey=*"
  bulk_max_size: 10000
  worker: 33
  compression: none
  partition.round_robin:
    reachable_only: true

logging.level: INFO

processors:
- decode_json_fields:
    fields: ["message"]
    process_array: false
    max_depth: 1
    target: "Trident"
    overwrite_keys: false

Is there anything I can change in my settings to keep the connection up and healthy, without the broken pipe error?

2019-07-24T18:52:44.661Z INFO kafka/log.go:53 Connected to broker at *.servicebus.windows.net:9093 (registered as #0)
2019-07-24T18:52:44.742Z INFO kafka/log.go:53 producer/broker/0 state change to [closing] because write tcp 172.16.6.6:42942->40.112.242.0:9093: write: connection reset by peer
2019-07-24T18:52:44.742Z INFO kafka/log.go:53 Error while closing connection to broker *.servicebus.windows.net:9093: write tcp 172.16.6.6:42942->40.112.242.0:9093: write: broken pipe
2019-07-24T18:52:43.405Z INFO kafka/log.go:53 kafka message: Successful SASL handshake
2019-07-24T18:52:43.438Z INFO kafka/log.go:53 SASL authentication successful with broker *.servicebus.windows.net:9093:4 - [0 0 0 0]
2019-07-24T18:52:43.438Z INFO kafka/log.go:53 Connected to broker at *.servicebus.windows.net:9093 (registered as #0)
2019-07-24T18:52:43.546Z INFO kafka/log.go:53 producer/broker/0 state change to [closing] because write tcp 172.16.6.6:42922->40.112.242.0:9093: write: connection reset by peer
2019-07-24T18:52:43.546Z INFO kafka/log.go:53 Error while closing connection to broker *.servicebus.windows.net:9093: write tcp 172.16.6.6:42922->40.112.242.0:9093: write: broken pipe
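
In case it helps the discussion, this is a reduced-pressure variant of the output section I am planning to try next; the worker, bulk_max_size, and keep_alive values are guesses to tune from, not verified recommendations:

output.kafka:
  hosts:
    - "*.servicebus.windows.net:9093"
  topic: trident-32
  version: '1.0.0'
  ssl.enabled: true
  username: "$ConnectionString"
  password: "Endpoint=sb://*.servicebus.windows.net/;SharedAccessKeyName=accessAll;SharedAccessKey=*"
  required_acks: 1
  compression: none       # as far as I know the Event Hubs Kafka endpoint does not accept compressed batches
  worker: 3               # guess: far fewer concurrent connections than 33
  bulk_max_size: 2048     # guess: back to the default batch size instead of 10000
  keep_alive: 60s         # keep idle connections alive so the service does not drop them
  partition.round_robin:
    reachable_only: true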

The "connection reset by peer" looks like the remote host closed the connection.
How much TPS do you get if you leave bulk_max_size and worker at their defaults? These values are pretty high, and I would like to know the baseline performance.
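
For comparison, a baseline run would use something like this (if I read the Filebeat docs correctly, bulk_max_size defaults to 2048 and worker to 1; the host and credentials below are placeholders):

output.kafka:
  hosts:
    - "*.servicebus.windows.net:9093"   # placeholder host
  topic: trident-32
  version: '1.0.0'
  ssl.enabled: true
  username: "$ConnectionString"
  password: "<event hub connection string>"
  required_acks: 1
  compression: none
  # bulk_max_size and worker left unset, so the defaults (2048 and 1) apply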

Are there any TPS limits on Azure Event Hubs?

I could get a max of 500 RPS, but for the last two days I have not been able to get even that.
We tried separately with 400 cores to load the Event Hub; throttling started around 33,000 TPS.
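
If I remember the Azure quotas correctly, that throttling point lines up with the throughput-unit (TU) limit: on the standard tier, each TU allows roughly 1 MB/s or 1,000 events/s of ingress, whichever is hit first. So:

    33,000 events/s ÷ 1,000 events/s per TU ≈ 33 TUs

If the namespace has about 33 TUs provisioned, throttling at around 33,000 small events per second is what I would expect; the broken pipes at far lower rates look like a separate problem.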