Filebeat upgrade requiring multiple restarts


(Sai Birada) #1

I am using Filebeat 1.2.3, which stores its state in the .filebeat file. I am now upgrading to 5.1.2. When I copied .filebeat to data/registry and restarted Filebeat, it did not resend any old data, as expected, but it also did not send any new logs to Logstash. I could not find any issues on the Logstash side either. When I stop and start Filebeat again, it sends the new data without any problems. It seems I am missing a step in the upgrade process. I need help to properly upgrade Filebeat from 1.2.3 to 5.1.2. The Logstash version I am using is 2.4.1.
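For reference, the manual migration being attempted here boils down to copying the old state file into the new data path before the 5.x binary starts for the first time. A minimal sketch of that step (the paths are the ones from this thread, and the helper name is mine, not part of Filebeat):

```python
# Sketch of the manual registry migration described above. Filebeat 1.x
# keeps its state in a single ".filebeat" JSON file; 5.x expects the
# registry at <data path>/registry and converts the old format itself
# on first start.
import shutil
from pathlib import Path

def migrate_registry(old_registry: str, data_path: str) -> Path:
    """Copy the 1.x state file to the 5.x data path before the first start."""
    dest = Path(data_path) / "registry"
    dest.parent.mkdir(parents=True, exist_ok=True)  # create data/ if missing
    shutil.copy(old_registry, dest)
    return dest

# Example with the paths from this thread:
# migrate_registry("/sc/filebeat/.filebeat", "/sc/filebeat/data")
```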


(Sai Birada) #2

There seems to be a bug in Filebeat 5.1.2. I copied the .filebeat file created by the older Filebeat version to data/registry. Filebeat does not send logs to Logstash on the first start; only after a restart does it send them. Below are the logs from both cases.

First start:

2017/01/04 10:03:45.576818 prospector.go:111: INFO Starting prospector of type: log
2017/01/04 10:03:45.576830 prospector_log.go:62: DBG Start next scan
2017/01/04 10:03:45.576881 prospector_log.go:212: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/04 10:03:45.576927 prospector_log.go:245: DBG Update existing file for harvesting: /var/log/heartbeat.log, offset: 36598
2017/01/04 10:03:45.576937 prospector_log.go:297: DBG Harvester for file is still running: /var/log/heartbeat.log
2017/01/04 10:03:45.576952 prospector_log.go:83: DBG Prospector states cleaned up. Before: 1, After: 1
2017/01/04 10:03:50.583110 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 1
2017/01/04 10:03:50.583169 client.go:128: DBG No events to publish
2017/01/04 10:03:50.583184 sync.go:68: DBG Events sent: 1
2017/01/04 10:03:50.583199 registrar.go:269: DBG Processing 1 events
2017/01/04 10:03:50.583215 registrar.go:255: DBG Registrar states cleaned up. Before: 1, After: 1
2017/01/04 10:03:50.583227 registrar.go:292: DBG Write registry file: /sc/filebeat/data/registry
2017/01/04 10:03:50.583442 registrar.go:317: DBG Registry file updated. 1 states written.
2017/01/04 10:03:55.586849 prospector.go:155: DBG Run prospector
2017/01/04 10:03:55.586849 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 0
2017/01/04 10:03:55.586882 prospector_log.go:62: DBG Start next scan
2017/01/04 10:03:55.586966 prospector_log.go:212: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/04 10:03:55.586984 prospector_log.go:245: DBG Update existing file for harvesting: /var/log/heartbeat.log, offset: 36598
2017/01/04 10:03:55.586995 prospector_log.go:297: DBG Harvester for file is still running: /var/log/heartbeat.log
2017/01/04 10:03:55.587010 prospector_log.go:83: DBG Prospector states cleaned up. Before: 1, After: 1
^C2017/01/04 10:03:57.875554 service.go:32: DBG Received sigterm/sigint, stopping
2017/01/04 10:03:57.875635 filebeat.go:168: INFO Stopping filebeat

In reality the offset had advanced long before this, but Filebeat does not pick it up on the first start. After a restart it does.

Second start:

2017/01/04 10:05:57.879692 prospector_log.go:62: DBG Start next scan
2017/01/04 10:05:57.879755 prospector_log.go:212: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/04 10:05:57.879775 prospector_log.go:245: DBG Update existing file for harvesting: /var/log/heartbeat.log, offset: 40948
2017/01/04 10:05:57.879786 prospector_log.go:297: DBG Harvester for file is still running: /var/log/heartbeat.log
2017/01/04 10:05:57.879801 prospector_log.go:83: DBG Prospector states cleaned up. Before: 1, After: 1
2017/01/04 10:05:57.910264 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 0
2017/01/04 10:06:02.910948 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 0
2017/01/04 10:06:02.911028 log_file.go:84: DBG End of file reached: /var/log/heartbeat.log; Backoff now.
2017/01/04 10:06:03.930773 log_file.go:84: DBG End of file reached: /var/log/heartbeat.log; Backoff now.
2017/01/04 10:06:05.947757 log_file.go:84: DBG End of file reached: /var/log/heartbeat.log; Backoff now.
2017/01/04 10:06:07.897223 prospector.go:155: DBG Run prospector
2017/01/04 10:06:07.897268 prospector_log.go:62: DBG Start next scan
2017/01/04 10:06:07.897337 prospector_log.go:212: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/04 10:06:07.897356 prospector_log.go:245: DBG Update existing file for harvesting: /var/log/heartbeat.log, offset: 40977
2017/01/04 10:06:07.897367 prospector_log.go:297: DBG Harvester for file is still running: /var/log/heartbeat.log
2017/01/04 10:06:07.897383 prospector_log.go:83: DBG Prospector states cleaned up. Before: 1, After: 1
2017/01/04 10:06:07.927240 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 1
2017/01/04 10:06:07.927386 client.go:184: DBG Publish: {
"@timestamp": "2017-01-04T10:06:02.910Z",
"beat": {
"hostname": "NSS",
"name": "NSS",
"version": "5.1.2"
},
"fields": {
"cloudname": "zscaler.net",
"hostip": "10.10.35.180"
},
"input_type": "log",
"message": "Wed Jan 4 10:06:00 UTC 2017",
"offset": 40977,
"source": "/var/log/heartbeat.log",
"type": "heartbeat_Etc/GMT+0"
}
2017/01/04 10:06:07.927417 output.go:109: DBG output worker: publish 1 events
2017/01/04 10:06:07.927432 sync.go:107: DBG Try to publish 1 events to logstash with window size 67
2017/01/04 10:06:07.959783 client.go:194: DBG handle error: EOF
2017/01/04 10:06:07.959804 client.go:110: DBG closing
2017/01/04 10:06:07.959844 sync.go:78: DBG 0 events out of 1 events sent to logstash. Continue sending
2017/01/04 10:06:07.959856 sync.go:58: DBG close connection
2017/01/04 10:06:07.959868 sync.go:85: ERR Failed to publish events caused by: EOF
2017/01/04 10:06:07.959880 single.go:91: INFO Error publishing events (retrying): EOF
2017/01/04 10:06:07.959890 sync.go:58: DBG close connection
2017/01/04 10:06:07.959899 single.go:156: DBG send fail
2017/01/04 10:06:08.977016 sync.go:53: DBG connect
2017/01/04 10:06:08.977591 sync.go:107: DBG Try to publish 1 events to logstash with window size 33
2017/01/04 10:06:09.019525 sync.go:78: DBG 1 events out of 1 events sent to logstash. Continue sending
2017/01/04 10:06:09.019552 single.go:150: DBG send completed


(ruflin) #3

Looks like the finished state is incorrect after the migration and a restart resets the finished state. Do you still have your old registry file from the 1.2.3 version?

Also do you still have the 2 log files? The "interesting" part of the migration happens before the log lines you posted. Any chance to share the log files from the beginning?
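To make the finished-state diagnosis above concrete, here is a toy illustration (my own simplified Python, not Filebeat's actual Go code) of why a wrongly migrated "finished" flag blocks harvesting: the prospector only resumes a known file when its previous harvester is marked finished, which matches the "Harvester for file is still running" lines in the first-start log.

```python
# Toy model: a state whose "finished" flag is False is treated as
# "a harvester is still running", so no new harvester is launched.
from dataclasses import dataclass

@dataclass
class State:
    source: str
    offset: int
    finished: bool  # False = harvester believed to be still running

def should_start_harvester(state: State) -> bool:
    # With finished=False inherited from a bad migration, the file is
    # never resumed; a restart that resets the flag fixes it.
    return state.finished

migrated = State("/var/log/heartbeat.log", 85724, finished=False)
print(should_start_harvester(migrated))  # False: file is never resumed
```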


(Sai Birada) #4

Hi ruflin,
I can reproduce this behaviour consistently. I am attaching all the files you asked for.
.filebeat file created by 1.2.3:
{"/var/log/heartbeat.log":{"source":"/var/log/heartbeat.log","offset":85724,"FileStateOS":{"inode":5087299,"device":82}}}
data/registry file created by 5.1.2:
[{"source":"/var/log/heartbeat.log","offset":85782,"FileStateOS":{"inode":5087299,"device":82},"timestamp":"2017-01-05T11:52:28.338172375Z","ttl":-1}]
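The format change between the two files above can be sketched with a small conversion function (this mirrors what the samples show Filebeat 5.x producing on first start, but it is my own illustrative Python, not the actual Go migration code): 1.x keys states by file path, while 5.x stores a flat list of states with `timestamp` and `ttl` fields added.

```python
# Illustrative old-to-new registry conversion, based on the two sample
# files above. Field names come from the samples; the function itself
# is a sketch, not part of Filebeat.
import json
from datetime import datetime, timezone

def convert_registry(old_json: str) -> str:
    old_states = json.loads(old_json)      # 1.x: map keyed by path
    new_states = []
    for state in old_states.values():      # 5.x: flat list of states
        new_states.append({
            "source": state["source"],
            "offset": state["offset"],
            "FileStateOS": state["FileStateOS"],
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "ttl": -1,                     # -1 = state never expires
        })
    return json.dumps(new_states)
```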


(Sai Birada) #5

1.2.3 logs sample
2017/01/05 11:46:43.514989 beat.go:135: DBG Initializing output plugins
2017/01/05 11:46:43.515034 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2017/01/05 11:46:43.515588 logstash.go:106: INFO Max Retries set to: 3
2017/01/05 11:46:43.515609 client.go:100: DBG connect
2017/01/05 11:46:43.520967 outputs.go:126: INFO Activated logstash as output plugin.
2017/01/05 11:46:43.520990 publish.go:232: DBG Create output worker
2017/01/05 11:46:43.521029 publish.go:274: DBG No output is defined to store the topology. The server fields might not be filled.
2017/01/05 11:46:43.521060 publish.go:288: INFO Publisher name: NSS
2017/01/05 11:46:43.521138 async.go:78: INFO Flush Interval set to: 1s
2017/01/05 11:46:43.521151 async.go:84: INFO Max Bulk Size set to: 2048
2017/01/05 11:46:43.521160 async.go:92: DBG create bulk processing worker (interval=1s, bulk size=2048)
2017/01/05 11:46:43.521208 beat.go:147: INFO Init Beat: filebeat; Version: 1.2.3
2017/01/05 11:46:43.521528 beat.go:173: INFO filebeat sucessfully setup. Start running.
2017/01/05 11:46:43.521571 registrar.go:68: INFO Registry file set to: /sc/filebeat/.filebeat
2017/01/05 11:46:43.521598 spooler.go:44: DBG Set idleTimeoutDuration to 5s
2017/01/05 11:46:43.521613 crawler.go:38: DBG File Configs: [/var/log/heartbeat.log]
2017/01/05 11:46:43.521632 prospector.go:133: INFO Set ignore_older duration to 0s
2017/01/05 11:46:43.521642 prospector.go:133: INFO Set close_older duration to 1h0m0s
2017/01/05 11:46:43.521651 prospector.go:133: INFO Set scan_frequency duration to 10s
2017/01/05 11:46:43.521660 prospector.go:90: INFO Invalid input type set:
2017/01/05 11:46:43.521668 prospector.go:93: INFO Input type set to: log
2017/01/05 11:46:43.521676 prospector.go:133: INFO Set backoff duration to 1s
2017/01/05 11:46:43.521684 prospector.go:133: INFO Set max_backoff duration to 10s
2017/01/05 11:46:43.521691 prospector.go:113: INFO force_close_file is disabled
2017/01/05 11:46:43.521701 crawler.go:58: DBG Waiting for 1 prospectors to initialise
2017/01/05 11:46:43.521722 prospector.go:143: INFO Starting prospector of type: log
2017/01/05 11:46:43.521752 prospector.go:261: DBG scan path /var/log/heartbeat.log
2017/01/05 11:46:43.521780 prospector.go:275: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/05 11:46:43.521833 prospector.go:345: DBG Start harvesting unknown file: /var/log/heartbeat.log
2017/01/05 11:46:43.521846 prospector.go:399: DBG Launching harvester on new file: /var/log/heartbeat.log
2017/01/05 11:46:43.521868 crawler.go:71: DBG Registrar will re-save state for /var/log/heartbeat.log
2017/01/05 11:46:43.521879 crawler.go:65: DBG No pending prospectors. Finishing setup
2017/01/05 11:46:43.521890 crawler.go:78: INFO All prospectors initialised with 1 states to persist
2017/01/05 11:46:43.521902 registrar.go:87: INFO Starting Registrar
2017/01/05 11:46:43.521922 publish.go:88: INFO Start sending events to output
2017/01/05 11:46:43.522003 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/01/05 11:46:43.522055 log.go:283: DBG harvest: "/var/log/heartbeat.log" (offset snapshot:0)
2017/01/05 11:46:43.522070 log.go:113: INFO Harvester started for file: /var/log/heartbeat.log
2017/01/05 11:46:43.535784 prospector.go:261: DBG scan path /var/log/heartbeat.log
2017/01/05 11:46:43.535840 prospector.go:275: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/05 11:46:43.535872 registrar.go:175: DBG Same file as before found. Fetch the state.
2017/01/05 11:46:43.535901 prospector.go:418: DBG Update existing file for harvesting: /var/log/heartbeat.log
2017/01/05 11:46:43.535913 prospector.go:465: DBG Not harvesting, file didn't change: /var/log/heartbeat.log
2017/01/05 11:46:43.535781 util.go:20: DBG full line read
2017/01/05 11:46:43.575805 spooler.go:91: DBG Flushing spooler because spooler full. Events flushed: 2048
2017/01/05 11:50:10.509945 publish.go:109: DBG Publish: {


}
2017/01/05 11:50:10.509985 output.go:87: DBG output worker: publish 908 events
2017/01/05 11:50:10.510005 client.go:146: DBG Try to publish 908 events to logstash with window size 857
2017/01/05 11:50:10.855642 client.go:227: DBG update max ok window size: 608 < 857
2017/01/05 11:50:10.855686 client.go:231: DBG increase window size to: 1286
2017/01/05 11:50:10.855700 client.go:234: DBG set to batchSize: 908
2017/01/05 11:50:10.855717 client.go:124: DBG 857 events out of 908 events sent to logstash. Continue sending ...
2017/01/05 11:50:10.855731 client.go:146: DBG Try to publish 51 events to logstash with window size 908
2017/01/05 11:50:10.906327 client.go:124: DBG 51 events out of 51 events sent to logstash. Continue sending ...
2017/01/05 11:50:10.906354 single.go:135: DBG send completed
2017/01/05 11:50:10.906393 publish.go:104: INFO Events sent: 908
2017/01/05 11:50:10.906419 registrar.go:115: DBG Processing 908 events
2017/01/05 11:50:10.906911 registrar.go:146: DBG Write registry file: /sc/filebeat/.filebeat
2017/01/05 11:50:10.907025 registrar.go:162: INFO Registry file updated. 1 states written.
2017/01/05 11:50:16.809629 service.go:30: DBG Received sigterm/sigint, stopping
2017/01/05 11:50:16.809692 registrar.go:133: INFO Stopping Registrar
2017/01/05 11:50:16.809720 registrar.go:97: INFO Ending Registrar
2017/01/05 11:50:16.809751 registrar.go:146: DBG Write registry file: /sc/filebeat/.filebeat
2017/01/05 11:50:16.809880 registrar.go:162: INFO Registry file updated. 1 states written.
2017/01/05 11:50:16.809940 beat.go:183: INFO Cleaning up filebeat before shutting down.


(Sai Birada) #6

5.1.2 first time logs sample
2017/01/05 11:51:08.179309 beat.go:267: INFO Home path: [/sc/filebeat] Config path: [/sc/filebeat] Data path: [/sc/filebeat/data] Logs path: [/sc/filebeat/logs]
2017/01/05 11:51:08.179357 beat.go:177: INFO Setup Beat: filebeat; Version: 5.1.2
2017/01/05 11:51:08.179384 processor.go:43: DBG Processors:
2017/01/05 11:51:08.179400 beat.go:183: DBG Initializing output plugins
2017/01/05 11:51:08.179558 outputs.go:106: INFO Activated logstash as output plugin.
2017/01/05 11:51:08.179577 publish.go:234: DBG Create output worker
2017/01/05 11:51:08.179644 publish.go:276: DBG No output is defined to store the topology. The server fields might not be filled.
2017/01/05 11:51:08.179765 async.go:63: INFO Flush Interval set to: 1s
2017/01/05 11:51:08.179783 async.go:64: INFO Max Bulk Size set to: 2048
2017/01/05 11:51:08.179797 async.go:72: DBG create bulk processing worker (interval=1s, bulk size=2048)
2017/01/05 11:51:08.179958 beat.go:207: INFO filebeat start running.
2017/01/05 11:51:08.180012 registrar.go:85: INFO Registry file set to: /sc/filebeat/data/registry
2017/01/05 11:51:08.180060 registrar.go:106: INFO Loading registrar data from /sc/filebeat/data/registry
2017/01/05 11:51:08.180199 registrar.go:177: INFO Old registry states found: 1
2017/01/05 11:51:08.180222 registrar.go:292: DBG Write registry file: /sc/filebeat/data/registry
2017/01/05 11:51:08.180407 registrar.go:317: DBG Registry file updated. 1 states written.
2017/01/05 11:51:08.180425 registrar.go:184: INFO Old states converted to new states and written to registrar: 1
2017/01/05 11:51:08.180457 crawler.go:34: INFO Loading Prospectors: 1
2017/01/05 11:51:08.180545 state.go:64: DBG New state added for /var/log/heartbeat.log
2017/01/05 11:51:08.180558 prospector_log.go:57: INFO Prospector with previous states loaded: 1
2017/01/05 11:51:08.180661 prospector.go:69: DBG File Configs: [/var/log/heartbeat.log]
2017/01/05 11:51:08.180686 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 1
2017/01/05 11:51:08.180703 crawler.go:61: INFO All prospectors are initialised and running with 1 states to persist
2017/01/05 11:51:08.181013 logp.go:219: INFO Metrics logging every 30s
2017/01/05 11:51:08.181091 registrar.go:230: INFO Starting Registrar
2017/01/05 11:51:08.181125 sync.go:41: INFO Start sending events to output
2017/01/05 11:51:08.181171 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/01/05 11:51:08.181199 crawler.go:56: DBG Starting prospector 0
2017/01/05 11:51:08.181225 prospector.go:111: INFO Starting prospector of type: log
2017/01/05 11:51:08.181237 prospector_log.go:62: DBG Start next scan
2017/01/05 11:51:08.181286 prospector_log.go:212: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/05 11:51:08.181329 prospector_log.go:245: DBG Update existing file for harvesting: /var/log/heartbeat.log, offset: 85724
2017/01/05 11:51:08.181342 prospector_log.go:297: DBG Harvester for file is still running: /var/log/heartbeat.log
2017/01/05 11:51:08.181359 prospector_log.go:83: DBG Prospector states cleaned up. Before: 1, After: 1
2017/01/05 11:51:13.187254 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 1
2017/01/05 11:51:13.187328 client.go:128: DBG No events to publish
2017/01/05 11:51:13.187346 sync.go:68: DBG Events sent: 1
2017/01/05 11:51:13.187364 registrar.go:269: DBG Processing 1 events
2017/01/05 11:51:13.187382 registrar.go:255: DBG Registrar states cleaned up. Before: 1, After: 1
2017/01/05 11:51:13.187395 registrar.go:292: DBG Write registry file: /sc/filebeat/data/registry
2017/01/05 11:51:13.187604 registrar.go:317: DBG Registry file updated. 1 states written.
2017/01/05 11:51:18.190895 prospector.go:155: DBG Run prospector
2017/01/05 11:51:18.190922 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 0
2017/01/05 11:52:08.263193 prospector_log.go:62: DBG Start next scan
2017/01/05 11:52:08.263263 prospector_log.go:212: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/05 11:52:08.263286 prospector_log.go:245: DBG Update existing file for harvesting: /var/log/heartbeat.log, offset: 85724
2017/01/05 11:52:08.263298 prospector_log.go:297: DBG Harvester for file is still running: /var/log/heartbeat.log
2017/01/05 11:52:08.263315 prospector_log.go:83: DBG Prospector states cleaned up. Before: 1, After: 1
2017/01/05 11:52:08.323124 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 0
2017/01/05 11:52:12.071676 service.go:32: DBG Received sigterm/sigint, stopping
2017/01/05 11:52:12.071760 filebeat.go:168: INFO Stopping filebeat
2017/01/05 11:52:12.071783 crawler.go:67: INFO Stopping Crawler
2017/01/05 11:52:12.071799 crawler.go:73: INFO Stopping 1 prospectors
2017/01/05 11:52:12.071821 prospector.go:181: INFO Stopping Prospector
2017/01/05 11:52:12.071856 prospector.go:128: INFO Prospector channel stopped
2017/01/05 11:52:12.071886 prospector.go:152: INFO Prospector ticker stopped
2017/01/05 11:52:12.071900 crawler.go:54: DBG Prospector 0 stopped
2017/01/05 11:52:12.071913 crawler.go:80: INFO Crawler stopped
2017/01/05 11:52:12.071925 spooler.go:101: INFO Stopping spooler
2017/01/05 11:52:12.071942 spooler.go:109: DBG Spooler has stopped
2017/01/05 11:52:12.071959 sync.go:47: DBG Shutting down sync publisher
2017/01/05 11:52:12.071972 registrar.go:285: INFO Stopping Registrar
2017/01/05 11:52:12.071985 registrar.go:242: INFO Ending Registrar
2017/01/05 11:52:12.072000 registrar.go:292: DBG Write registry file: /sc/filebeat/data/registry
2017/01/05 11:52:12.072220 registrar.go:317: DBG Registry file updated. 1 states written.
2017/01/05 11:52:12.072326 logp.go:245: INFO Total non-zero values: publish.events=1 registar.states.current=1 registrar.writes=3 registrar.states.update=1
2017/01/05 11:52:12.072347 logp.go:246: INFO Uptime: 1m3.896865454s
2017/01/05 11:52:12.072358 beat.go:211: INFO filebeat stopped.


(Sai Birada) #7

5.1.2 after restart logs sample
2017/01/05 11:52:23.305607 beat.go:267: INFO Home path: [/sc/filebeat] Config path: [/sc/filebeat] Data path: [/sc/filebeat/data] Logs path: [/sc/filebeat/logs]
2017/01/05 11:52:23.305642 beat.go:177: INFO Setup Beat: filebeat; Version: 5.1.2
2017/01/05 11:52:23.305662 processor.go:43: DBG Processors:
2017/01/05 11:52:23.305673 beat.go:183: DBG Initializing output plugins
2017/01/05 11:52:23.305737 logstash.go:90: INFO Max Retries set to: 3
2017/01/05 11:52:23.305802 outputs.go:106: INFO Activated logstash as output plugin.
2017/01/05 11:52:23.305816 publish.go:234: DBG Create output worker
2017/01/05 11:52:23.305884 publish.go:276: DBG No output is defined to store the topology. The server fields might not be filled.
2017/01/05 11:52:23.305909 publish.go:291: INFO Publisher name: NSS
2017/01/05 11:52:23.305978 async.go:63: INFO Flush Interval set to: 1s
2017/01/05 11:52:23.305990 async.go:64: INFO Max Bulk Size set to: 2048
2017/01/05 11:52:23.306000 async.go:72: DBG create bulk processing worker (interval=1s, bulk size=2048)
2017/01/05 11:52:23.306119 beat.go:207: INFO filebeat start running.
2017/01/05 11:52:23.306158 registrar.go:85: INFO Registry file set to: /sc/filebeat/data/registry
2017/01/05 11:52:23.306192 registrar.go:106: INFO Loading registrar data from /sc/filebeat/data/registry
2017/01/05 11:52:23.306319 registrar.go:131: INFO States Loaded from registrar: 1
2017/01/05 11:52:23.306347 crawler.go:34: INFO Loading Prospectors: 1
2017/01/05 11:52:23.306391 prospector_log.go:41: DBG exclude_files: []
2017/01/05 11:52:23.306411 state.go:64: DBG New state added for /var/log/heartbeat.log
2017/01/05 11:52:23.306421 prospector_log.go:57: INFO Prospector with previous states loaded: 1
2017/01/05 11:52:23.306488 prospector.go:69: DBG File Configs: [/var/log/heartbeat.log]
2017/01/05 11:52:23.306501 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 1
2017/01/05 11:52:23.306512 crawler.go:61: INFO All prospectors are initialised and running with 1 states to persist
2017/01/05 11:52:23.306756 logp.go:219: INFO Metrics logging every 30s
2017/01/05 11:52:23.306826 registrar.go:230: INFO Starting Registrar
2017/01/05 11:52:23.306852 sync.go:41: INFO Start sending events to output
2017/01/05 11:52:23.306884 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/01/05 11:52:23.306904 crawler.go:56: DBG Starting prospector 0
2017/01/05 11:52:23.306922 prospector.go:111: INFO Starting prospector of type: log
2017/01/05 11:52:23.306930 prospector_log.go:62: DBG Start next scan
2017/01/05 11:52:23.306962 prospector_log.go:212: DBG Check file for harvesting: /var/log/heartbeat.log
2017/01/05 11:52:23.306990 prospector_log.go:245: DBG Update existing file for harvesting: /var/log/heartbeat.log, offset: 85724
2017/01/05 11:52:23.306999 prospector_log.go:254: DBG Resuming harvesting of file: /var/log/heartbeat.log, offset: 85724
2017/01/05 11:52:23.307091 log.go:251: DBG Set previous offset for file: /var/log/heartbeat.log. Offset: 85724
2017/01/05 11:52:23.307106 log.go:241: DBG Setting offset for file: /var/log/heartbeat.log. Offset: 85724
2017/01/05 11:52:23.307132 prospector_log.go:83: DBG Prospector states cleaned up. Before: 1, After: 1
2017/01/05 11:52:23.307168 log.go:84: INFO Harvester started for file: /var/log/heartbeat.log
2017/01/05 11:52:23.307265 log_file.go:84: DBG End of file reached: /var/log/heartbeat.log; Backoff now.
2017/01/05 11:52:28.318070 spooler.go:89: DBG Flushing spooler because of timeout. Events flushed: 4
2017/01/05 11:52:28.318392 client.go:184: DBG Publish: {


}
2017/01/05 11:52:28.318532 output.go:109: DBG output worker: publish 2 events
2017/01/05 11:52:28.318546 sync.go:53: DBG connect
2017/01/05 11:52:28.319078 sync.go:107: DBG Try to publish 2 events to logstash with window size 10
2017/01/05 11:52:28.337983 sync.go:78: DBG 2 events out of 2 events sent to logstash. Continue sending
2017/01/05 11:52:28.338008 single.go:150: DBG send completed
2017/01/05 11:52:28.338045 sync.go:68: DBG Events sent: 4
2017/01/05 11:52:28.338149 registrar.go:269: DBG Processing 4 events
2017/01/05 11:52:28.338178 registrar.go:255: DBG Registrar states cleaned up. Before: 1, After: 1
2017/01/05 11:52:28.338191 registrar.go:292: DBG Write registry file: /sc/filebeat/data/registry
2017/01/05 11:52:28.338470 registrar.go:317: DBG Registry file updated. 1 states written.
2017/01/05 11:52:30.337559 log_file.go:84: DBG End of file reached: /var/log/heartbeat.log; Backoff now.
2017/01/05 11:52:38.038649 service.go:32: DBG Received sigterm/sigint, stopping
2017/01/05 11:52:38.038707 filebeat.go:168: INFO Stopping filebeat

2017/01/05 11:52:38.039363 logp.go:246: INFO Uptime: 14.736634753s
2017/01/05 11:52:38.039384 beat.go:211: INFO filebeat stopped.


(ruflin) #8

Thanks, I will have a look at it later. For log files, it's best to paste them into a gist and link the gist here.


(ruflin) #9

Ok, I managed to reliably reproduce this. Now I need to find the bug. I will keep you posted. Thanks for sharing all the details.


(ruflin) #10

And here is a fix for it: https://github.com/elastic/beats/pull/3322


(system) #11

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.