Hi,
We've been testing the copytruncate strategy before rolling it out on our servers, but we're seeing missing logs during rotation: the events are present in the rotated files but are never processed by Logstash and never show up in ES.
Here is the example config we're using:
---
# https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-filestream
- type: filestream
  id: my_log
  paths:
    - /var/log/my_log/my_log.log
  rotation.external.strategy.copytruncate:
    suffix_regex: \.\d$
    count: 2
  encoding: plain
  fields_under_root: true
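With count: 2 and that suffix_regex, my understanding is that Filebeat should pick up the numerically suffixed copies next to the original, i.e. something like this (illustrative listing, not actual output from the server):

ls /var/log/my_log/
# my_log.log  my_log.log.1  my_log.log.2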
I have a script that writes to the /var/log/my_log/my_log.log file every 0.1 seconds.
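For reference, the writer is roughly this (a minimal sketch, not the exact script; the hostname, tag and PID fields just mimic the real output):

#!/usr/bin/env bash
# Minimal sketch of the test writer: appends one numbered, syslog-style line
# every 0.1 seconds to the file Filebeat is watching.
LOG=/var/log/my_log/my_log.log
i=0
while true; do
  i=$((i + 1))
  echo "$(date '+%b %e %H:%M:%S') $(hostname) docker-exporter[$$]: ${i}th message" >> "$LOG"
  sleep 0.1
done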
This is one example after forcing a logrotate:
# head -5 /var/log/my_log/my_log.log
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 155th message
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 156th message
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 157th message
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 158th message
Jul 4 14:49:41 vlogstashdocker docker-exporter[13388]: 159th message
# tail -5 /var/log/my_log/my_log.log.1
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 149th message
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 150th message
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 151th message
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 152th message
Jul 4 14:49:40 vlogstashdocker docker-exporter[13388]: 153th message
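For completeness, the rotation is driven by logrotate with copytruncate; the setup is roughly this (simplified, so treat the exact options as an assumption):

cat > /etc/logrotate.d/my_log <<'EOF'
/var/log/my_log/my_log.log {
    rotate 2
    copytruncate
    missingok
    notifempty
}
EOF

# Force a rotation: my_log.log is copied to my_log.log.1 and then truncated in place.
logrotate -f /etc/logrotate.d/my_log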
Logs are missing, in this case messages 145 to 154: they are present in the rotated file but never show up in ES.
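A quick way to check whether a given message made it to ES is roughly this (illustrative; the host and index pattern are assumptions on my side):

# Look up one of the missing lines; for the lost messages this returns zero hits.
curl -s 'http://localhost:9200/filebeat-*/_search' \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match_phrase": {"message": "154th message"}}}'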
copytruncate seems to be working, judging by the Filebeat debug logs (I've seen the ERROR message below in other places as well, but I'm not sure it's related to this rotation):
16084:2025-07-04T16:32:55.696Z DEBUG [input.filestream] filestream/copytruncate_prospector.go:250 File /var/log/my_log/my_log.log has been updated {"id": "my_log", "prospector": "copy_truncate_file_prospector", "operation": "write", "source_name": "native::3155025-96", "os_id": "3155025-96", "new_path": "/var/log/my_log/my_log.log", "old_path": "/var/log/my_log/my_log.log"}
16085-2025-07-04T16:32:55.696Z DEBUG [input] log/input.go:286 input states cleaned up. Before: 0, After: 0, Pending: 0{"input_id": "40c25b64-5876-4424-ae84-776d4dde32e9"}
16086:2025-07-04T16:32:55.696Z DEBUG [input.filestream] filestream/copytruncate_prospector.go:271 File /var/log/my_log/my_log.log is original {"id": "my_log", "prospector": "copy_truncate_file_prospector", "operation": "write", "source_name": "native::3155025-96", "os_id": "3155025-96", "new_path": "/var/log/my_log/my_log.log", "old_path": "/var/log/my_log/my_log.log"}
16087-2025-07-04T16:32:55.696Z DEBUG [input] input/input.go:139 Run input
--
17910-2025-07-04T16:33:05.696Z DEBUG [input] input/input.go:139 Run input
17911:2025-07-04T16:33:05.696Z DEBUG [file_watcher] filestream/fswatch.go:228 File scan complete {"total": 1, "written": 0, "truncated": 0, "renamed": 0, "removed": 0, "created": 0}
17912-2025-07-04T16:33:05.696Z DEBUG [input] log/input.go:222 Start next scan {"input_id": "00b6a15e-21c4-47b7-8c5f-afa202b0b0d3"}
--
20259-2025-07-04T16:33:14.381Z ERROR [input.filestream] filestream/prospector.go:297 Error while stopping harvester group: task failures
20260: error while adding new reader to the bookkeeper harvester is already running for file {"id": "my_log", "prospector": "copy_truncate_file_prospector"}
20261:2025-07-04T16:33:14.381Z DEBUG [input.filestream] filestream/copytruncate_prospector.go:234 Prospector has stopped {"id": "my_log", "prospector": "copy_truncate_file_prospector"}
I know this feature is in technical preview, but from the documentation my understanding is that this strategy is meant to handle copytruncate rotation and avoid exactly this kind of data loss.
Thanks