Filebeat fails to connect to Logstash on Kubernetes


I'm trying to connect a Filebeat deployment to Logstash, but I keep getting an error when Filebeat tries to connect:

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

I've pretty much exhausted all that I can think of when it comes to why this isn't working, so any ideas would be appreciated! I've included the relevant configs below.

Filebeats is on Server A
Logstash is on a Kubernetes Cluster

In the Filebeat output configuration I have tried:

  # The Logstash hosts
  hosts: ["kubernetes_host:5044"]
  ssl.enabled: true 


  # The Logstash hosts
  hosts: ["kubernetes_host:443/logstash-ingress"]
  ssl.enabled: true 
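For reference, the fuller output block in filebeat.yml looks something like this (the CA path is a placeholder, and the commented-out verification setting is only something I'd loosen while testing):

```yaml
output.logstash:
  # The Logstash hosts
  hosts: ["kubernetes_host:5044"]
  ssl.enabled: true
  # CA used to verify the certificate Logstash presents (placeholder path)
  ssl.certificate_authorities: ["path_to_ca"]
  # ssl.verification_mode: none  # loosen verification for testing only
```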

And my logstash pipeline beats input looks like this:

input {
  beats {
    id => "beats-input"
    port => 5044
    ssl => true
    ssl_certificate => "path_to_cert"
    ssl_key => "path_to_key"
    ssl_verify_mode => "none"
  }
}

For the Logstash deployment on Kubernetes I am deploying using Helm & Ansible playbooks.

values.yml contains:

replicas: 1

extraPorts:
  - name: beats
    containerPort: 5044

clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s" # Elastic does not go into green status when using 1 replica

envFrom:
  - secretRef:
      name: logstash_creds

# Configure Readiness & Liveness Probe
readinessProbe:
  httpGet:
    path: /
    port: 9600
  initialDelaySeconds: 180
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 3

# Configure Beats Service
service:
  annotations: {}
  type: ClusterIP
  loadBalancerIP: ""
  ports:
    - name: beats
      port: 5044
      protocol: TCP
      targetPort: 5044

# Add TLS Certificate
secretMounts:
  - name: ##
    secretName: ##
    path: path_to_certs

I have also added a ConfigMap for allowing a TCP connection through NGINX following this guide, and an ingress for when I was trying to use the /logstash-ingress path.
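For reference, the ConfigMap follows the ingress-nginx `tcp-services` pattern; the service name and namespace below are placeholders for my actual values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<namespace>/<service>:<port>", pointing at the Logstash service (placeholder names)
  "5044": "logstash-namespace/logstash-service:5044"
```

My understanding is that the ingress-nginx controller must also be started with `--tcp-services-configmap` pointing at this ConfigMap, and the controller's own Service needs a matching port 5044 entry.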

I think the first step would be to determine whether Logstash is actually accessible on port 5044. Can you use telnet or a TCP connection tester to see if you can actually connect to that port on the Logstash instance?


I've now tried using netcat to test the port from inside the cluster with an alpine deployment. I got the following results:

`nc -z logstash-service 5044` returned a successful connection

`nc -z logstash-pod-ip 5044` returned a successful connection

I've also tried telnet from my device:

`telnet "kubernetes-cluster"/logstash` and `telnet "kubernetes-cluster":5044` both fail to connect.

I can, from any of the devices tried so far, connect to the Logstash API on port 9600; `"kubernetes-cluster"/logstash-api` succeeds.

I feel like this is more of an issue with me not understanding networking, than with filebeat or logstash :smiley:

I'm not too familiar with the setup you've got, but it sounds like the port that uses the built-in ingress is working fine and the port that's going through NGINX is not, so maybe looking at the NGINX config is the next step?

When it comes to Kubernetes ingress with NGINX I am familiar with the use of DNS/IP & path. But I remember reading in another forum post here or on Stack Overflow that Filebeat does not accept "host/path" as an output, only "host:port"?

This is what's causing the biggest headache for me!

I've tried connecting our Filebeat Kubernetes deployment to Logstash; it has connected and is sending logs:

output: "Logstash-Transfer-Service":5044
Transfer Service (Namespace B) -> Logstash Service (Namespace A) -> Logstash Pod

This worked after I disabled SSL.
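For comparison, the working in-cluster output looked roughly like this (service and namespace names are placeholders for my actual values):

```yaml
output.logstash:
  # In-cluster DNS: <service>.<namespace>.svc.cluster.local
  hosts: ["logstash-transfer-service.namespace-b.svc:5044"]
  ssl.enabled: false
```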

I think I have narrowed down the issues to:

NGINX Ingress

I will update you if I find a solution!

I've created a new ingress on Kubernetes! This has allowed Filebeat to connect to Logstash.

K8s Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: filebeat
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "....devlogstash...."
      secretName: cluster-crt
  rules:
    - host: "....devlogstash...."
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: logstash-transfer
                port:
                  number: 5044

This seems to work as it's the only ingress using this host, so I don't need to use a path, letting me use "DNS:443" in the Filebeat output.

This feels like more of a temporary fix, as it requires adding an additional DNS entry to the Filebeat servers, when it would be preferable to use the original K8s DNS with a TCP ingress etc.

But I've now moved on to a new issue, a lumberjack protocol error:

2024-01-29T14:03:53.534Z	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to backoff(async(tcp://....devlogstash....:443))
2024-01-29T14:03:53.534Z	INFO	[publisher]	pipeline/retry.go:213	retryer: send wait signal to consumer
2024-01-29T14:03:53.534Z	INFO	[publisher]	pipeline/retry.go:217	  done
2024-01-29T14:03:53.545Z	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp:// established
2024-01-29T14:03:53.545Z	INFO	[publisher]	pipeline/retry.go:213	retryer: send wait signal to consumer
2024-01-29T14:03:53.545Z	INFO	[publisher]	pipeline/retry.go:217	  done
2024-01-29T14:03:53.570Z	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2024-01-29T14:03:53.570Z	INFO	[publisher]	pipeline/retry.go:223	  done
2024-01-29T14:03:53.572Z	ERROR	[logstash]	logstash/async.go:280	Failed to publish events caused by: lumberjack protocol error
2024-01-29T14:03:53.572Z	INFO	[publisher]	pipeline/retry.go:213	retryer: send wait signal to consumer
2024-01-29T14:03:53.572Z	INFO	[publisher]	pipeline/retry.go:217	  done
2024-01-29T14:03:53.574Z	ERROR	[logstash]	logstash/async.go:280	Failed to publish events caused by: client is not connected
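From what I've read so far, a lumberjack protocol error can happen when the ingress terminates TLS and proxies the connection as HTTP, which the Beats (lumberjack) protocol can't speak. One avenue I'm going to try (not yet verified) is SSL passthrough on the ingress, which hands the raw TLS stream straight to Logstash:

```yaml
metadata:
  annotations:
    # Requires the ingress-nginx controller to run with --enable-ssl-passthrough
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
```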

I'm sure other people have had that error, so I will go and look for solutions and hopefully be able to mark this solved :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.