Hi Michael-
I ran through some more tests and got mixed results.
- delete /opt/registry
- gen 1000 lines of logs (generator sketched after the cleanup notes below)
- start filebeat
  1000 lines pushed to kafka
  /opt/registry does not exist yet
- stop filebeat (ctrl-c)
  /opt/registry does not exist
  logging ends with "filebeat cleanup"
- start filebeat again
  same 1000 lines processed again; this is expected
  /opt/registry does not exist yet
- stop filebeat (ctrl-c)
  /opt/registry.new is created, but it is empty
  logging ends with "Write registry file: /opt/registry"
===== clean up =====
- delete logfile
- delete /opt/registry.new
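For reference, each "gen N lines of logs" step above and below just appends timestamped lines to the file filebeat is watching. The real generator is a throwaway script; the path and line format here are illustrative, but it is roughly equivalent to:

package main

import (
    "fmt"
    "os"
    "time"
)

func main() {
    // append N timestamped test lines to the watched logfile
    // (path and format are illustrative, not the exact ones from my tests)
    f, err := os.OpenFile("/opt/test.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        panic(err)
    }
    defer f.Close()
    for i := 0; i < 1000; i++ { // 5000 for the later runs
        fmt.Fprintf(f, "%s test line %d\n", time.Now().Format(time.RFC3339), i)
    }
}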
- gen 5000 lines of logs
- start filebeat
  2048 flushed
  2047 flushed
  905 flushed (5000 total)
  no /opt/registry file created
- gen 5000 more at 2016-05-06T19:32:54-04:00
  nothing processed
- gen 5000 more at 2016-05-06T19:36:33-04:00
  nothing processed
- gen 5000 more to a different logfile at 2016-05-06T19:38:18-04:00
  file detected, log shows harvesting, but nothing processed
  still no /opt/registry file
- stop filebeat (ctrl-c)
  no /opt/registry file created
  STDOUT shows the following; this is different from previous iterations, where nothing showed on STDOUT:
[/opt] # ./filebeat -c ./filebeat.test.yml
^Cpanic: send on closed channel
goroutine 34 [running]:
panic(0x8cc420, 0xc82066a3d0)
/usr/local/go/src/runtime/panic.go:481 +0x3e6
github.com/elastic/beats/filebeat/crawler.(*Prospector).Run.func1(0xc8201641c0)
/go/src/github.com/elastic/beats/filebeat/crawler/prospector.go:99 +0x139
created by github.com/elastic/beats/filebeat/crawler.(*Prospector).Run
/go/src/github.com/elastic/beats/filebeat/crawler/prospector.go:102 +0x11b
[/opt] #
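For what it's worth, the trace looks like the usual shutdown race where a goroutine keeps sending on a channel that the shutdown path has already closed. This is not the actual filebeat code, just a minimal standalone illustration of the pattern the trace suggests:

package main

import "time"

func main() {
    events := make(chan string)

    // consumer: drains events until the channel is closed
    go func() {
        for range events {
        }
    }()

    // producer: keeps sending, like the prospector goroutine in the trace
    go func() {
        for {
            events <- "log line"
            time.Sleep(10 * time.Millisecond)
        }
    }()

    time.Sleep(50 * time.Millisecond)
    close(events) // shutdown closes the channel out from under the producer
    time.Sleep(50 * time.Millisecond)
    // the producer's next send dies with "panic: send on closed channel"
}

Running that produces the same "panic: send on closed channel" crash, so my guess is that the ctrl-c handling races with the prospector goroutine, but you would know better than I do.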
I am also seeing some kafka errors every 60 seconds. Could a failure to send to kafka cause filebeat to skip processing that batch of 2048 lines? It reconnects every time, so could this just be a mismatch between the kafka and filebeat settings for the lifetime of an open connection?
Example:
2016-05-06T19:40:49-04:00 WARN kafka message: client/metadata got error from broker while fetching metadata:%!(EXTRA *net.OpError=read tcp 172.31.254.130:33359->10.99.10.225:9092: i/o timeout)
2016-05-06T19:40:49-04:00 WARN Closed connection to broker 10.99.10.225:9092
2016-05-06T19:40:49-04:00 WARN kafka message: client/metadata no available broker to send metadata request to
2016-05-06T19:40:49-04:00 WARN client/brokers resurrecting 1 dead seed brokers
2016-05-06T19:40:49-04:00 WARN client/metadata retrying after 250ms... (2 attempts remaining)
2016-05-06T19:40:49-04:00 WARN client/metadata fetching metadata for all topics from broker 10.99.10.225:9092
2016-05-06T19:40:49-04:00 WARN Connected to broker at 10.99.10.225:9092 (unregistered)
2016-05-06T19:40:54-04:00 INFO Run prospector
I'm running some additional tests with the file output to see whether this is related to the kafka output, but we really need this working with kafka.
Thanks again for your help,
Kevin