Filebeat batch size stays at the default value and is not changing

I am running a single-node Elasticsearch cluster on a server. Filebeat is used as the collector for NetFlow data and sends its output to Elasticsearch.
I am trying to improve the indexing rate in Elasticsearch by changing the bulk_max_size parameter in the Filebeat output. Following is the Filebeat config for the Elasticsearch output:

output.elasticsearch:
   hosts: ["localhost:9200"]
   worker: 3
   bulk_max_size: 5000
   compression_level: 0

I have specified bulk_max_size as 5000, but when I check the Filebeat logs the batch size is still around 2048, which is the default value. Here is a sample Filebeat monitoring log entry covering a 30 s period:

{"monitoring":{"metrics":{"beat":{"cgroup":{"cpuacct":{"total":{"ns":20437478801}}},"cpu":{"system":{"ticks":17984840,"time":{"ms":789}},"total":{"ticks":446731920,"time":{"ms":20438},"value":446731920},"user":{"ticks":428747080,"time":{"ms":19649}}},"handles":{"limit":{"hard":262144,"soft":1024},"open":18},"info":{"ephemeral_id":"b4be9b54-7732-4398-9048-bfe51e49791c","uptime":{"ms":657930141}},"memstats":{"gc_next":1186178784,"memory_alloc":955748312,"memory_total":68112226776816,"rss":1605677056},"runtime":{"goroutines":43}},"filebeat":{"events":{"active":-1828,"added":202252,"done":204080},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":202232,"packets":{"dropped":2182,"received":8213}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":204080,"active":43008,"batches":100,"total":204080},"read":{"bytes":1450897},"write":{"bytes":304444899}},"pipeline":{"clients":1,"events":{"active":8193,"published":202252,"total":202252},"queue":{"acked":204080}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":27.78,"5":27.44,"15":27.49,"norm":{"1":0.8681,"5":0.8575,"15":0.8591}}}}}}

As seen from the above log, 204080 events were acked across 100 batches, i.e. an average of roughly 2040 events per batch, which matches the 2048 default rather than the configured 5000.
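To double-check that reading of the metrics, here is a quick sketch that computes the average batch size from the libbeat output counters (the JSON below is trimmed from the log line above to just the relevant fields):

```python
import json

# Trimmed excerpt of the Filebeat monitoring log line above,
# keeping only the libbeat output event counters.
log_line = '{"monitoring":{"metrics":{"libbeat":{"output":{"events":{"acked":204080,"batches":100}}}}}}'

events = json.loads(log_line)["monitoring"]["metrics"]["libbeat"]["output"]["events"]
avg_batch = events["acked"] / events["batches"]
print(avg_batch)  # 2040.8 -- close to the 2048 default, not the configured 5000
```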

Can someone please help me understand why the batch size is not changing? TIA
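For context, one thing I have been looking at is Filebeat's internal memory queue: the batch handed to the output cannot be larger than what the queue flushes at once, and queue.mem.flush.min_events defaults to 2048, which would explain the observed batch size regardless of bulk_max_size. A sketch of the queue settings I am considering (the values here are illustrative, assuming Filebeat 7.x defaults):

```yaml
# Internal memory queue settings (illustrative values).
# By default the memory queue flushes batches of
# flush.min_events = 2048 events, which caps the batch
# forwarded to the output even if bulk_max_size is larger.
queue.mem:
  events: 20000            # total queue capacity (default 4096)
  flush.min_events: 5000   # batch size forwarded to the output (default 2048)
  flush.timeout: 1s        # flush earlier if the batch does not fill in time
```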