Logstash S3 output performance tuning

I have a Logstash server that reads events from a Kafka input and writes them to S3.
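
For context, the input side is the kafka input plugin; a minimal sketch of it is below (the broker address, topic name, and group id are placeholders, not my real values):

input {
  kafka {
    bootstrap_servers => "kafka-broker:9092"   # placeholder
    topics => ["app-logs"]                     # placeholder
    group_id => "logstash-s3"                  # placeholder
    codec => json
  }
}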

My Logstash pipeline has pipeline.workers set to 1 so that events keep their original order when they are written to a file.

After changing other settings such as pipeline.batch.size and the S3 output's upload_workers_count, I monitored the consume rate on the Kafka topic: it stays at around 10k events per second regardless of my Logstash pipeline config changes.
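
For reference, the pipeline-level settings I have been changing, as they would look in pipelines.yml (the pipeline id and the batch size value here are only examples of what I tried):

- pipeline.id: kafka-to-s3
  pipeline.workers: 1           # kept at 1 to preserve event order
  pipeline.batch.size: 1000     # example value, I tried several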

I don't know why the consume rate stays the same. Can anyone help me?

I want to tune the performance of the pipeline that sends logs to S3.

My Logstash server has 8 CPU cores and 32 GB of RAM.
logstashJavaOpts: "-Xmx22g -Xms22g"
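
The logstashJavaOpts key is the Helm chart value for the JVM options; a rough sketch of how it sits in values.yaml (assuming the Elastic Logstash chart; the resource figures below are illustrative only, they just mirror the 8-core / 32 GB node):

logstashJavaOpts: "-Xmx22g -Xms22g"
resources:
  requests:
    cpu: "8"          # illustrative, mirrors the node size
    memory: "32Gi"    # illustrative
  limits:
    cpu: "8"
    memory: "32Gi"
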
This is the S3 output section of my Logstash pipeline config:

output {
  s3 {
    # bucket, region, credentials, etc. omitted here
    rotation_strategy => "size"
    size_file => 104857600        # rotate at 100 MB
    codec => json_lines
    encoding => "gzip"
    upload_workers_count => 16
    upload_queue_size => 2
  }
}