I have a simple test setup where I send data to Logstash via Filebeat, both in version 7.5.2. All processes run on localhost. On the Logstash side I pipe the output to stdout.
I am sending the contents of log files via Filebeat. The input seems to arrive fine at Logstash and gets printed to stdout: I get one event per line in the .log file.
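The test files contain simple single-line entries. Schematically (this is not the real content, just the shape; every line starts with '[' to match the multiline pattern in the config below):

[2020-01-28 10:15:40] first entry
[2020-01-28 10:15:41] second entry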
The problem is that the last message seems to be resent endlessly: my stdout keeps printing it. This behaviour persists across restarts of both services.
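Since Filebeat persists its read position in a registry file, I would expect the offset, but not a resend loop, to survive a restart. The state entry for the file looks roughly like this (paraphrased from data.json; field names from memory, the offset matches the log excerpt below):

{"source": "e:\\testing\\testdata\\mylog.log", "offset": 1883, "ttl": -1, ...}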
The Filebeat logs show an i/o timeout (see the excerpt at the bottom).
When I check the message flow with Wireshark, the connection appears to work in both directions: Filebeat pushes the data to Logstash, and Logstash sends an ACK back. After a while, Filebeat sends a FIN; Logstash acknowledges it and then resets the connection.
My configs are as follows.
Logstash
input {
  beats {
    port => 9200
  }
}

filter {
}

output {
  stdout { codec => rubydebug { metadata => true } }
}
Filebeat (almost the default/example)
#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - d:\Tools\elastic\testdata\*.log

  ### Multiline options

  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
#================================ Outputs =====================================

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:9200"]
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
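For completeness: I have not overridden any other output.logstash options. If I read the 7.5 documentation correctly, the timeout-related defaults amount to the following (shown for reference only; this is not part of my actual file, and the values are my reading of the docs):

output.logstash:
  hosts: ["localhost:9200"]
  timeout: 30          # seconds to wait for reads from Logstash before giving up
  pipelining: 2        # batches in flight without waiting for an ACK
  ttl: 0               # 0 = connections are never recycled
  bulk_max_size: 2048  # maximum number of events per batch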
filebeat.log (abbreviated for readability)
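To get this level of detail I ran Filebeat with debug logging enabled, roughly like this in filebeat.yml (the -d "*" command-line flag should be equivalent):

logging.level: debug
logging.selectors: ["*"]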
2020-01-28T10:15:45.651+0100 DEBUG [processors] processing/processors.go:186 Publish event: {
[...]
2020-01-28T10:15:45.673+0100 DEBUG [registrar] registrar/registrar.go:404 Registry file updated. 2 states written.
2020-01-28T10:15:45.673+0100 DEBUG [registrar] registrar/registrar.go:356 Processing 1 events
2020-01-28T10:15:45.673+0100 DEBUG [input] file/states.go:68 New state added for e:\testing\testdata\mylog.log
2020-01-28T10:15:45.673+0100 DEBUG [registrar] registrar/registrar.go:326 Registrar state updates processed. Count: 1
2020-01-28T10:15:45.673+0100 DEBUG [registrar] registrar/registrar.go:411 Write registry file: E:\Testing\filebeat-7.5.2-windows-x86_64\data\registry\filebeat\data.json (3)
2020-01-28T10:15:45.707+0100 DEBUG [registrar] registrar/registrar.go:404 Registry file updated. 3 states written.
2020-01-28T10:15:45.707+0100 DEBUG [registrar] registrar/registrar.go:356 Processing 1 events
2020-01-28T10:15:45.707+0100 DEBUG [registrar] registrar/registrar.go:326 Registrar state updates processed. Count: 1
2020-01-28T10:15:45.707+0100 DEBUG [state] file/states.go:118 State removed for e:\testing\testdata\d3parameter.log because of older: 0s
2020-01-28T10:15:45.707+0100 DEBUG [registrar] registrar/registrar.go:346 Registrar states cleaned up. Before: 3, After: 2, Pending: 0
2020-01-28T10:15:45.707+0100 DEBUG [registrar] registrar/registrar.go:411 Write registry file: E:\Testing\filebeat-7.5.2-windows-x86_64\data\registry\filebeat\data.json (2)
2020-01-28T10:15:45.745+0100 DEBUG [registrar] registrar/registrar.go:404 Registry file updated. 2 states written.
2020-01-28T10:15:46.652+0100 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://win10mak:9200))
2020-01-28T10:15:46.652+0100 DEBUG [logstash] logstash/async.go:111 connect
2020-01-28T10:15:46.653+0100 DEBUG [harvester] log/log.go:107 End of file reached: e:\testing\testdata\mylog.log; Backoff now.
2020-01-28T10:15:46.656+0100 INFO pipeline/output.go:105 Connection to backoff(async(tcp://win10mak:9200)) established
2020-01-28T10:15:46.658+0100 DEBUG [logstash] logstash/async.go:159 12 events out of 12 events sent to logstash host win10mak:9200. Continue sending
2020-01-28T10:15:48.653+0100 DEBUG [harvester] log/log.go:107 End of file reached: e:\testing\testdata\mylog.log; Backoff now.
2020-01-28T10:15:50.653+0100 DEBUG [multiline] multiline/multiline.go:175 Multiline event flushed because timeout reached.
2020-01-28T10:15:50.653+0100 DEBUG [processors] processing/processors.go:186 Publish event: {
[...]
2020-01-28T10:15:51.653+0100 DEBUG [logstash] logstash/async.go:159 1 events out of 1 events sent to logstash host win10mak:9200. Continue sending
2020-01-28T10:15:51.663+0100 DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [0: 0, 12]
2020-01-28T10:15:51.663+0100 DEBUG [publisher] memqueue/eventloop.go:535 broker ACK events: count=12, start-seq=1, end-seq=12
2020-01-28T10:15:51.663+0100 DEBUG [publisher] memqueue/ackloop.go:128 ackloop: return ack to broker loop:12
2020-01-28T10:15:51.663+0100 DEBUG [publisher] memqueue/ackloop.go:131 ackloop: done send ack
2020-01-28T10:15:51.663+0100 DEBUG [acker] beater/acker.go:64 stateful ack {"count": 12}
2020-01-28T10:15:51.663+0100 DEBUG [registrar] registrar/registrar.go:356 Processing 12 events
2020-01-28T10:15:51.663+0100 DEBUG [registrar] registrar/registrar.go:326 Registrar state updates processed. Count: 12
2020-01-28T10:15:51.663+0100 DEBUG [registrar] registrar/registrar.go:411 Write registry file: E:\Testing\filebeat-7.5.2-windows-x86_64\data\registry\filebeat\data.json (2)
2020-01-28T10:15:51.728+0100 DEBUG [registrar] registrar/registrar.go:404 Registry file updated. 2 states written.
2020-01-28T10:15:52.653+0100 DEBUG [harvester] log/log.go:107 End of file reached: e:\testing\testdata\mylog.log; Backoff now.
2020-01-28T10:15:55.652+0100 DEBUG [input] input/input.go:152 Run input
2020-01-28T10:15:55.652+0100 DEBUG [input] log/input.go:191 Start next scan
2020-01-28T10:15:55.652+0100 DEBUG [input] log/input.go:421 Check file for harvesting: e:\testing\testdata\mylog.log
2020-01-28T10:15:55.653+0100 DEBUG [input] log/input.go:511 Update existing file for harvesting: e:\testing\testdata\mylog.log, offset: 1883
2020-01-28T10:15:55.653+0100 DEBUG [input] log/input.go:563 Harvester for file is still running: e:\testing\testdata\mylog.log
2020-01-28T10:15:55.653+0100 DEBUG [state] file/states.go:118 State removed for e:\testing\testdata\d3parameter.log because of older: 0s
2020-01-28T10:15:55.653+0100 DEBUG [input] log/input.go:212 input states cleaned up. Before: 2, After: 1, Pending: 0
2020-01-28T10:16:00.654+0100 DEBUG [harvester] log/log.go:107 End of file reached: e:\testing\testdata\mylog.log; Backoff now.
2020-01-28T10:16:05.653+0100 DEBUG [input] input/input.go:152 Run input
2020-01-28T10:16:05.653+0100 DEBUG [input] log/input.go:191 Start next scan
2020-01-28T10:16:05.653+0100 DEBUG [input] log/input.go:421 Check file for harvesting: e:\testing\testdata\mylog.log
2020-01-28T10:16:05.654+0100 DEBUG [input] log/input.go:511 Update existing file for harvesting: e:\testing\testdata\mylog.log, offset: 1883
2020-01-28T10:16:05.654+0100 DEBUG [input] log/input.go:563 Harvester for file is still running: e:\testing\testdata\mylog.log
2020-01-28T10:16:05.654+0100 DEBUG [input] log/input.go:212 input states cleaned up. Before: 1, After: 1, Pending: 0
2020-01-28T10:16:10.654+0100 DEBUG [harvester] log/log.go:107 End of file reached: e:\testing\testdata\mylog.log; Backoff now.
2020-01-28T10:16:15.644+0100 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":109,"time":{"ms":109}},"total":{"ticks":218,"time":{"ms":218},"value":218},"user":{"ticks":109,"time":{"ms":109}}},"handles":{"open":260},"info":{"ephemeral_id":"bad1f232-d012-4644-a54b-164a8bb29dc6","uptime":{"ms":30142}},"memstats":{"gc_next":11857120,"memory_alloc":7523536,"memory_total":14872120,"rss":38051840},"runtime":{"goroutines":31}},"filebeat":{"events":{"active":1,"added":16,"done":15},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"acked":12,"active":1,"batches":2,"total":13},"read":{"bytes":6},"type":"logstash","write":{"bytes":2017}},"pipeline":{"clients":1,"events":{"active":1,"filtered":3,"published":13,"retry":12,"total":16},"queue":{"acked":12}}},"registrar":{"states":{"cleanup":1,"current":2,"update":15},"writes":{"success":4,"total":4}},"system":{"cpu":{"cores":4}}}}}
2020-01-28T10:16:15.654+0100 DEBUG [input] input/input.go:152 Run input
2020-01-28T10:16:15.654+0100 DEBUG [input] log/input.go:191 Start next scan
2020-01-28T10:16:15.654+0100 DEBUG [input] log/input.go:421 Check file for harvesting: e:\testing\testdata\mylog.log
2020-01-28T10:16:15.655+0100 DEBUG [input] log/input.go:511 Update existing file for harvesting: e:\testing\testdata\mylog.log, offset: 1883
2020-01-28T10:16:15.655+0100 DEBUG [input] log/input.go:563 Harvester for file is still running: e:\testing\testdata\mylog.log
2020-01-28T10:16:15.655+0100 DEBUG [input] log/input.go:212 input states cleaned up. Before: 1, After: 1, Pending: 0
2020-01-28T10:16:20.655+0100 DEBUG [harvester] log/log.go:107 End of file reached: e:\testing\testdata\mylog.log; Backoff now.
2020-01-28T10:16:21.664+0100 DEBUG [transport] transport/client.go:218 handle error: read tcp 172.16.125.251:53408->172.16.125.203:9200: i/o timeout
2020-01-28T10:16:21.664+0100 ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 172.16.125.251:53408->172.16.125.203:9200: i/o timeout
2020-01-28T10:16:21.664+0100 DEBUG [transport] transport/client.go:131 closing
2020-01-28T10:16:21.665+0100 DEBUG [logstash] logstash/async.go:159 1 events out of 1 events sent to logstash host win10mak:9200. Continue sending
The "Run input" block is then repeated over and over again.