I'm trying to connect a Filebeat deployment to Logstash, but I'm getting an error when Filebeat tries to connect:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
I've pretty much exhausted all that I can think of when it comes to why this isn't working, so any ideas would be appreciated! I've included the relevant configs below.
Filebeats is on Server A
Logstash is on a Kubernetes Cluster
In the Filebeat pipeline I have tried:

output.logstash:
  # The Logstash hosts
  hosts: ["kubernetes_host:5044"]
  ssl.enabled: true

and

output.logstash:
  # The Logstash hosts
  hosts: ["kubernetes_host:443/logstash-ingress"]
  ssl.enabled: true
And my Logstash pipeline's beats input looks like this:
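(A minimal sketch of the shape a beats input with SSL usually takes — the certificate paths here are hypothetical placeholders, not the actual config from this deployment:)

```
input {
  beats {
    port => 5044
    ssl  => true
    # hypothetical paths; these would point at the mounted secret
    ssl_certificate => "/path_to_certs/tls.crt"
    ssl_key         => "/path_to_certs/tls.key"
  }
}
```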
For the Logstash deployment on Kubernetes, I'm deploying using Helm & Ansible playbooks.
values.yml contains:

values:
  replicas: 1
  extraPorts:
    - name: beats
      containerPort: 5044
  clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s" # Elastic does not go into green status when using 1 replica
  envFrom:
    - secretRef:
        name: logstash_creds
  # Configure Readiness & Liveness Probe
  readinessProbe:
    httpGet:
      path: /
      port: 9600
    initialDelaySeconds: 180
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 3
  # Configure Beats Service
  service:
    annotations: {}
    type: ClusterIP
    loadBalancerIP: ""
    ports:
      - name: beats
        port: 5044
        protocol: TCP
        targetPort: 5044
  # Add TLS Certificate
  secretMounts:
    - name: ##
      secretName: ##
      path: path_to_certs
I have also added a ConfigMap to allow a TCP connection through NGINX, following this guide, and an ingress for when I was trying to use the /logstash-ingress path.
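For reference, with ingress-nginx a raw TCP service is exposed through the `tcp-services` ConfigMap rather than an Ingress path; a sketch of the idea, with hypothetical namespace and service names (the controller's own Service must also expose port 5044):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format is "<external port>": "<namespace>/<service name>:<service port>"
  # names below are hypothetical — substitute the real ones
  "5044": "logstash-namespace/logstash-logstash:5044"
```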
I think the first step would be to determine whether Logstash is actually accessible on port 5044. Can you use telnet or a TCP connection tester to see if you can actually connect to that port on the Logstash instance?
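To script that check, a small raw-TCP probe works too — nothing Filebeat-specific, it just attempts the connect (the host below is a placeholder):

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        # create_connection handles DNS resolution and the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers connection refused, timeouts, and DNS failures
        return False

# Placeholder host — substitute the Logstash endpoint being tested
print(port_open("kubernetes_host", 5044))
```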
I'm not too familiar with your setup, but it sounds like the port using the built-in ingress is working fine while the port going through nginx is not, so maybe looking at the nginx config is the next step?
When it comes to Kubernetes ingress with nginx, I'm familiar with using DNS/IP and a path. But I remember reading in another forum post here or on Stack Overflow that Filebeat does not accept "host/path" as an output, only "host:port"?
This is what's causing the biggest headache for me!
I've tried connecting our Filebeat Kubernetes deployment to Logstash, and it has connected and is sending logs:
output: "Logstash-Transfer-Service":5044
  -> Transfer Service (Namespace B)
  -> Logstash Service (Namespace A)
  -> Logstash Pod
This seems to work because it's the only ingress using this host, so I don't need a path, letting me use "DNS:443" in the Filebeat output.
This feels like more of a temporary fix, though, as it requires adding an additional DNS entry to the Filebeat servers, when it would be preferable to use the original K8s DNS with a TCP ingress etc.
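One possible longer-term alternative: ingress-nginx can route TLS by SNI without terminating it, which keeps a host usable on 443 with no path. A sketch with hypothetical names — this requires the controller to be started with `--enable-ssl-passthrough`, and Logstash itself must then terminate TLS on the beats input:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: logstash-beats
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: logstash.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: logstash-logstash   # hypothetical service name
                port:
                  number: 5044
```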
But I've now moved on to an issue with a lumberjack protocol error:
2024-01-29T14:03:53.534Z INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(async(tcp://....devlogstash....:443))
2024-01-29T14:03:53.534Z INFO [publisher] pipeline/retry.go:213 retryer: send wait signal to consumer
2024-01-29T14:03:53.534Z INFO [publisher] pipeline/retry.go:217 done
2024-01-29T14:03:53.545Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(async(tcp://devlogstash.devs.facilities.rl.ac.uk:443)) established
2024-01-29T14:03:53.545Z INFO [publisher] pipeline/retry.go:213 retryer: send wait signal to consumer
2024-01-29T14:03:53.545Z INFO [publisher] pipeline/retry.go:217 done
2024-01-29T14:03:53.570Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2024-01-29T14:03:53.570Z INFO [publisher] pipeline/retry.go:223 done
2024-01-29T14:03:53.572Z ERROR [logstash] logstash/async.go:280 Failed to publish events caused by: lumberjack protocol error
2024-01-29T14:03:53.572Z INFO [publisher] pipeline/retry.go:213 retryer: send wait signal to consumer
2024-01-29T14:03:53.572Z INFO [publisher] pipeline/retry.go:217 done
2024-01-29T14:03:53.574Z ERROR [logstash] logstash/async.go:280 Failed to publish events caused by: client is not connected
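A "lumberjack protocol error" generally means the bytes coming back aren't the beats protocol — for example, a proxy terminating TLS and answering with HTTP, or a plaintext connection reaching a TLS-only endpoint. If TLS is passed through end to end, the Filebeat side would look something like this (hostname and CA path are hypothetical):

```yaml
output.logstash:
  hosts: ["logstash.example.com:443"]   # hypothetical hostname
  ssl.enabled: true
  # CA that signed the certificate Logstash presents (hypothetical path)
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
```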
I'm sure other people have had that error, so I'll go and look for solutions and hopefully be able to mark this solved.