Communication issue between Beats and Logstash

We have deployed ECK 1.3.0 along with Elasticsearch, Kibana (7.9.0) and Filebeat (7.9.2). The Filebeat configuration we use is given below.
Note: Filebeat is deployed as a DaemonSet.

# Filebeat
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: {{ .Values.beats.name}}
spec:
  type: filebeat
  version: {{ .Values.beats.version}}
  image: "{{ .Values.image.repository }}/filebeat:7.9.2"
  kibanaRef:
    name: {{ .Values.kibana.name}}
  config:
    output.logstash:
      hosts: ["${LOGSTASH_SERVICE_HOST}:${LOGSTASH_SERVICE_PORT_BEATS}"]
      ssl.certificate_authorities: ["/mnt/elastic-internal/kibana-certs/ca.crt"]
      ssl.certificate: "/mnt/elastic-internal/kibana-certs/tls.crt"
      ssl.key: "/mnt/elastic-internal/kibana-certs/tls.key"
    filebeat.autodiscover.providers:
    - node: ${NODE_NAME}
      type: kubernetes
      cleanup_timeout: 120s
      #hints.default_config.enabled: "true"
      hints.enabled: "true"
      templates:
      - condition.equals.kubernetes.namespace: {{ .Values.namespace}}
        config:
        - paths: ["/var/log/containers/*${data.kubernetes.container.id}.log"]
          exclude_lines: ["^\\s+[\\-`('.|_]"]
          close_inactive: 1h
          type: container
      - condition.equals.kubernetes.labels.log-label: "true"
        config:
        - type: container
          paths: ["/var/log/containers/*${data.kubernetes.container.id}.log"]
          exclude_lines: ["^\\s+[\\-`('.|_]"]
          close_inactive: 1h    
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows providing richer host metadata
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          # specify resource limits and requests
          resources:
            requests:
              memory: {{ .Values.beats.resources.requestmemory}}
              cpu: {{ .Values.beats.resources.requestcpu}}
            limits:
              memory: {{ .Values.beats.resources.limitmemory}}
              cpu: {{ .Values.beats.resources.limitcpu}}
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          - mountPath: /mnt/elastic-internal/kibana-certs
            name: kibana-certs
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: kibana-certs
          secret:
            secretName: kibana-config-kb-http-certs-internal
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: <ns>
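
For completeness, the filebeat ServiceAccount is bound to a ClusterRole so that the Kubernetes autodiscover provider can read pod metadata. A minimal sketch of that RBAC, following the standard ECK Filebeat autodiscover example (names and namespace are placeholders, not necessarily our exact manifest):

# Filebeat RBAC for autodiscover (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""]
  # Resources the Kubernetes autodiscover provider needs to read
  resources: ["namespaces", "pods", "nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: <ns>
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io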

In Logstash, the input section is specified as below (a sketch of the Logstash Service that the LOGSTASH_SERVICE_* variables in the Filebeat output refer to follows the input config):

inputs:
    main: |-
      input {
        beats {
          port => 5044
          ssl => true
          ssl_certificate_authorities => ["/etc/kibana/certificate/ca.crt"]
          ssl_certificate => "/etc/kibana/certificate/tls.crt"
          ssl_key => "/etc/kibana/certificate/tls.key"
          ssl_verify_mode => "peer"
        }
      }
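
For reference, ${LOGSTASH_SERVICE_HOST} and ${LOGSTASH_SERVICE_PORT_BEATS} in the Filebeat output above are the environment variables Kubernetes injects for a Service named logstash that exposes a port named beats. A minimal sketch of the Service those variables assume (name, namespace and selector are assumptions, not our exact manifest):

# Logstash Service (sketch)
apiVersion: v1
kind: Service
metadata:
  name: logstash       # must be "logstash" for LOGSTASH_SERVICE_HOST to be injected
  namespace: <ns>
spec:
  selector:
    app: logstash      # assumed label on the Logstash pods
  ports:
  - name: beats        # must be "beats" for LOGSTASH_SERVICE_PORT_BEATS to be injected
    port: 5044
    targetPort: 5044
    protocol: TCP

Note that these environment variables are only injected into pods created after the Service exists in the same namespace.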

We are getting the below error from the Filebeat pods:

 ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(async(tcp://IP:5044)): dial tcp IP:5044: connect: connection refused
2021-07-15T06:59:12.602Z        INFO    [publisher_pipeline_output]     pipeline/output.go:145  Attempting to reconnect to backoff(async(tcp://IP:5044)) with 1109 reconnect attempt(s)
2021-07-15T06:59:12.602Z        INFO    [publisher]     pipeline/retry.go:213   retryer: send wait signal to consumer
2021-07-15T06:59:12.602Z        INFO    [publisher]     pipeline/retry.go:217     done

We did not find any other issues apart from this error.

Can anyone please suggest what might be going wrong here?