My Filebeat constantly (1-3 times a second) loses its connection to Logstash and can't do its thing. I'm on the Docker images for 7.9.2.
Steps I've already taken:
Confirmed the host is resolving.
Confirmed I can ping Logstash from Filebeat.
Confirmed a client_inactivity_timeout of 3000 doesn't fix it.
Confirmed I can telnet from Filebeat to logstash:5044.
Tried disabling the Elasticsearch output in Logstash.
Tried turning bulk_max_size up to 4096.
Tried turning queue.mem events up to 4096.
Tried restarting both containers many, many times.
Confirmed Elasticsearch health is green.
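For what it's worth, the telnet check above can also be reproduced with a small Python probe run inside the Filebeat container (my own throwaway helper, not part of Beats); it succeeds every time, so the listener itself is reachable:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        # create_connection performs the full TCP handshake, then we close it.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From inside the filebeat container:
# tcp_reachable("logstash", 5044)  -> True, so the port is open
```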
Filebeat debug logs:
2020-10-19T21:32:40.485Z DEBUG [logstash] logstash/async.go:120 connect
2020-10-19T21:32:40.485Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2020-10-19T21:32:40.485Z INFO [publisher] pipeline/retry.go:223 done
2020-10-19T21:32:40.486Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(async(tcp://logstash:5044)) established
2020-10-19T21:32:40.543Z DEBUG [logstash] logstash/async.go:172 2048 events out of 2048 events sent to logstash host logstash:5044. Continue sending
2020-10-19T21:32:40.587Z DEBUG [transport] transport/client.go:205 handle error: read tcp 172.21.0.3:54968->172.21.0.7:5044: read: connection reset by peer
2020-10-19T21:32:40.587Z DEBUG [transport] transport/client.go:118 closing
2020-10-19T21:32:40.587Z ERROR [logstash] logstash/async.go:280 Failed to publish events caused by: read tcp 172.21.0.3:54968->172.21.0.7:5044: read: connection reset by peer
2020-10-19T21:32:40.587Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2020-10-19T21:32:40.587Z INFO [publisher] pipeline/retry.go:223 done
2020-10-19T21:32:40.645Z DEBUG [logstash] logstash/async.go:172 2048 events out of 2048 events sent to logstash host logstash:5044. Continue sending
2020-10-19T21:32:40.646Z DEBUG [logstash] logstash/async.go:128 close connection
2020-10-19T21:32:40.646Z ERROR [logstash] logstash/async.go:280 Failed to publish events caused by: client is not connected
2020-10-19T21:32:40.646Z DEBUG [logstash] logstash/async.go:128 close connection
2020-10-19T21:32:40.646Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2020-10-19T21:32:40.646Z INFO [publisher] pipeline/retry.go:223 done
2020-10-19T21:32:42.022Z ERROR [publisher_pipeline_output] pipeline/output.go:180 failed to publish events: client is not connected
2020-10-19T21:32:42.022Z INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(async(tcp://logstash:5044))
2020-10-19T21:32:42.022Z DEBUG [logstash] logstash/async.go:120 connect
^ repeats
Logstash debug logs:
2020-10-19 21:32:08,496 defaultEventExecutorGroup-4-4 WARN The Logger slowlog.logstash.codecs.plain was created with the message factory org.apache.logging.log4j.spi.MessageFactory2Adapter@4ee7f563 and is now requested with a null message factory (defaults to org.logstash.log.LogstashMessageFactory), which may create log events with unexpected formatting.
2020-10-19 21:32:09,918 defaultEventExecutorGroup-4-5 WARN The Logger slowlog.logstash.codecs.plain was created with the message factory org.apache.logging.log4j.spi.MessageFactory2Adapter@4ee7f563 and is now requested with a null message factory (defaults to org.logstash.log.LogstashMessageFactory), which may create log events with unexpected formatting.
2020-10-19 21:32:11,780 defaultEventExecutorGroup-4-6 WARN The Logger slowlog.logstash.codecs.plain was created with the message factory org.apache.logging.log4j.spi.MessageFactory2Adapter@4ee7f563 and is now requested with a null message factory (defaults to org.logstash.log.LogstashMessageFactory), which may create log events with unexpected formatting.
2020-10-19 21:32:13,873 defaultEventExecutorGroup-4-7 WARN The Logger slowlog.logstash.codecs.plain was created with the message factory org.apache.logging.log4j.spi.MessageFactory2Adapter@4ee7f563 and is now requested with a null message factory (defaults to org.logstash.log.LogstashMessageFactory), which may create log events with unexpected formatting.
2020-10-19 21:32:15,166 defaultEventExecutorGroup-4-8 WARN The Logger slowlog.logstash.codecs.plain was created with the message factory
^ repeats
filebeat.yml:
filebeat.modules:
  - module: system
    syslog:
      enabled: true
    auth:
      enabled: true

filebeat.inputs:
  - type: container
    enabled: true
    paths:
      - /var/lib/docker/containers/*/*.log
    stream: all # can be all, stdout or stderr

filebeat.autodiscover:
  providers:
    - type: docker
      # https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
      # This URL also contains instructions on multi-line logs
      hints.enabled: true

#================================ Processors ===================================
processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_locale:
      format: offset
  - add_host_metadata:
      netinfo.enabled: true

output.logstash:
  hosts: ["logstash:5044"]
  bulk_max_size: 4098

#============================== Dashboards =====================================
#setup.dashboards:
#  enabled: true

#============================== Kibana =========================================
#setup.kibana:
#  host: "${KIBANA_HOST}"

#============================== Xpack Monitoring ===============================
xpack.monitoring:
  enabled: true
  elasticsearch:
    hosts: ["http://elasticsearch:9200"]

logging.level: debug
logstash.conf:
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
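For reference, the client_inactivity_timeout attempt from my list above was applied on the beats input roughly like this (a sketch; I've since reverted it, which is why it's absent from the config shown):

```
input {
  beats {
    port => 5044
    # raised from the 60s default while testing; made no difference
    client_inactivity_timeout => 3000
  }
}
```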
Any ideas? I've run out of things to try entirely.