The message is logged because the send buffers are full. The Kafka client is waiting for an ACK from Kafka for an older batch so that it can flush its buffers, so the network or Kafka itself might be creating some backpressure. All in all, no data will be lost: Filebeat will retry until there is enough space again. Unfortunately the Kafka client used by Beats can be somewhat chatty at times and doesn't differentiate between errors and debug messages.
Setting the worker count to 3 increases the number of Kafka clients to 3, each publishing events independently. Batches of events are load-balanced among the 3 client instances. How effective this is depends on your queue settings, though (see the sketch below). And if some quota or rate limit is configured in Kafka or on an intermediate network device, you might not gain much, as your overall bandwidth might be the limiting factor.
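As a rough sketch, the relevant settings live in filebeat.yml like this (host names, topic, and sizes here are placeholders, not recommendations):

```yaml
# Sketch only -- hosts, topic, and queue sizes are placeholder values.
queue.mem:
  events: 4096            # total events the in-memory queue can hold
  flush.min_events: 2048  # events collected into a batch for the output
  flush.timeout: 1s       # publish a smaller batch if this timeout expires

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: "filebeat"
  worker: 3               # 3 independent Kafka clients publishing in parallel
```

If the queue produces batches more slowly than a single worker can publish them, the extra workers mostly sit idle, which is why the queue settings and the worker count should be tuned together.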
A Kafka client does not send data to a broker, but to a topic. A topic is split into partitions, and each partition is owned by one broker and has replicas on other brokers. One broker is the leader of a partition at any given time, and leadership moves to another broker every now and then. This means a Kafka client connects to a cluster, not to a single broker. The connection to the cluster is established via a bootstrap process: the client queries one of the initially configured brokers for the cluster metadata, then establishes a connection to each broker as required to serve the partitions of the topic. E.g. having 10 brokers means you have 10 connections, and having 3 workers with 10 brokers means you will have 30 TCP connections.
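This bootstrap behavior is also why the `hosts` setting only needs to list a reachable subset of the cluster, not every broker. A sketch (broker names are placeholders):

```yaml
output.kafka:
  # Bootstrap list: any reachable subset of the cluster is enough.
  # The client fetches the full broker list from the cluster metadata
  # and then connects to every broker leading a partition it writes to.
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "filebeat"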
Scaling in Kafka is normally achieved via the number of partitions for a topic (assuming you have no common bottleneck like a NAT/firewall device between Filebeat and Kafka).
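On the Filebeat side you can control how events are spread across those partitions via the output's partition strategy. A hedged sketch, assuming the round-robin partitioner (defaults may differ between Beats versions):

```yaml
output.kafka:
  hosts: ["kafka1:9092"]
  topic: "filebeat"
  # Spread events round-robin across all partitions of the topic.
  # With reachable_only: false, events are also assigned to partitions
  # whose leader is currently unreachable (and retried), rather than
  # being funneled to the reachable subset only.
  partition.round_robin:
    reachable_only: false
```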