Autodiscovering Logstash nodes in Kubernetes

Configmap for Filebeat:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat
data:
  config: |-
    filebeat.inputs:
    - type: log
      paths: ["${FILEPATH}"]
      tags: ["${NAME}"]
    #================================ General =====================================
    name: "K8S - ${NAME}"
    #================================ Outputs =====================================
    # Configure what output to use when sending the data collected by the beat.
    #----------------------------- Logstash output --------------------------------
    output.logstash:
      # The Logstash hosts
      hosts: "${LOGSTASH}"
      loadbalance: true
      compression_level: 3
      bulk_max_size: 8192
    queue.mem:
      events: 65536
    processors:
    - add_host_metadata: {}
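For completeness, here is a sketch of how the `${FILEPATH}`, `${NAME}`, and `${LOGSTASH}` placeholders might be supplied to the Filebeat container and how the ConfigMap above could be mounted as `filebeat.yml`. The Deployment name, image tag, paths, and env values are all assumptions for illustration, not taken from the original setup:

```yaml
# Hypothetical Deployment fragment: mounts the ConfigMap above as
# filebeat.yml and fills in the ${...} placeholders via env vars.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filebeat                    # assumed name
spec:
  selector:
    matchLabels: {app: filebeat}
  template:
    metadata:
      labels: {app: filebeat}
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.13.0   # assumed tag
        args: ["-c", "/etc/filebeat/filebeat.yml", "-e"]
        env:
        - name: FILEPATH
          value: /var/log/app/*.log        # assumed log path
        - name: NAME
          value: my-app                    # assumed node/tag name
        - name: LOGSTASH
          value: "logstash:5044"           # assumed Service address
        resources:
          limits:
            cpu: 2000m                     # see the workers note below
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat
      volumes:
      - name: config
        configMap:
          name: filebeat
          items:
          - key: config
            path: filebeat.yml
```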

I did not specify workers, because I found that increasing the resource limits on the Filebeat container increases the throughput accordingly. I assume it scales similarly to how it would if it were installed directly on the machine, so with a CPU limit of 2000m it will by default use 2 workers. I may be way off, but the workers setting seems pointless when resource limits are already defined in the deployment.
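That said, if you did want to pin the worker count explicitly instead of relying on CPU limits, the Logstash output accepts a `worker` setting (workers per configured host); the value 2 below is just an example, not a recommendation:

```yaml
output.logstash:
  hosts: "${LOGSTASH}"
  loadbalance: true
  worker: 2          # example value: events are sent by 2 workers per host
  compression_level: 3
  bulk_max_size: 8192
```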

The Logstash image has the relevant pipelines baked into it, governed by a CI/CD pipeline.
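As a sketch of what "baked in" can look like, a Dockerfile along these lines copies the pipeline definitions into the image at build time. The base image tag and repo layout (`pipelines.yml`, `pipeline/`) are assumptions; the actual CI/CD setup may differ:

```dockerfile
# Hypothetical build step run by the CI/CD pipeline.
# Assumed base tag; pin whatever version the cluster runs.
FROM docker.elastic.co/logstash/logstash:8.13.0

# Drop the default pipeline config shipped with the image.
RUN rm -f /usr/share/logstash/pipeline/logstash.conf

# Copy the repo's pipeline definitions into the image.
COPY pipelines.yml /usr/share/logstash/config/pipelines.yml
COPY pipeline/ /usr/share/logstash/pipeline/
```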