Hello,
One of our Filebeat deployments keeps hitting the error below, which crashes the service entirely:
fatal error: concurrent map iteration and map write
goroutine 1 [running]:
internal/runtime/maps.fatal({0x69991f8?, 0x10ad4f?})
runtime/panic.go:1058 +0x18
internal/runtime/maps.(*Iter).Next(0xc02f6a0138?)
internal/runtime/maps/table.go:683 +0x86
github.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile.(*sourceStore).TakeOver(0xc0a71ef710, 0xc0a71ef980)
github.com/elastic/beats/v7/filebeat/input/filestream/internal/input-logfile/store.go:322 +0x116
(...) !!! ~4800 lines
goroutine 3004 [select]:
(...)
github.com/elastic/beats/v7/filebeat/input/filestream.(*logFile).startFileMonitoringIfNeeded.func1({0x740c4d8?, 0xc031046a50?})
github.com/elastic/beats/v7/filebeat/input/filestream/filestream.go:163 +0x47
github.com/elastic/go-concert/unison.(*TaskGroup).Go.func1()
github.com/elastic/go-concert@v0.3.0/unison/taskgroup.go:164 +0xa3
created by github.com/elastic/go-concert/unison.(*TaskGroup).Go in goroutine 2818
github.com/elastic/go-concert@v0.3.0/unison/taskgroup.go:160 +0xed
Filebeat version: 9.3.0 for Windows.
This could be related to the registry and the way filestream builds its harvester pipelines. So far it happens on only 2 VMs (same Filebeat config, sources load balanced), but consistently.
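For context on the error class itself: the Go runtime raises this fatal error when one goroutine ranges over a map while another goroutine writes to it, and it is unrecoverable, which is why the whole process dies. The following is a minimal, self-contained sketch (illustrative only, not Filebeat code; the `store` type and its methods are hypothetical) showing the usual `sync.RWMutex` guard that prevents the race:

```go
package main

import (
	"fmt"
	"sync"
)

// store is a hypothetical stand-in for a state registry shared
// between goroutines. Without the mutex, a concurrent set() during
// takeOver()'s iteration would trigger the exact fatal error above:
// "fatal error: concurrent map iteration and map write".
type store struct {
	mu     sync.RWMutex
	states map[string]int
}

func (s *store) set(k string, v int) {
	s.mu.Lock() // exclusive lock for writes
	defer s.mu.Unlock()
	s.states[k] = v
}

func (s *store) takeOver() int {
	s.mu.RLock() // shared lock makes iteration safe against writers
	defer s.mu.RUnlock()
	n := 0
	for range s.states {
		n++
	}
	return n
}

func main() {
	s := &store{states: map[string]int{}}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			s.set(fmt.Sprintf("file-%d", i), i) // writer
			s.takeOver()                        // concurrent reader
		}(i)
	}
	wg.Wait()
	fmt.Println(s.takeOver()) // prints 100
}
```

The point is only that the crash is a data race inside the process, not something a configuration value can directly cause; config changes can at most change the timing that exposes it.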
Logs right before the error (some of our registered names are masked with XXXX):
filebeat-20260507-ndjson
[
{
"log.level": "info",
"@timestamp": "2026-05-07T10:30:51.250Z",
"log.logger": "crawler",
"log.origin": {
"function": "github.com/elastic/beats/v7/filebeat/beater.(*crawler).startInput",
"file.name": "beater/crawler.go",
"file.line": 148
},
"message": "Starting input (ID: 11793991260143422616)",
"service.name": "filebeat",
"ecs.version": "1.6.0"
},
{
"log.level": "info",
"@timestamp": "2026-05-07T10:30:51.250Z",
"log.logger": "input.filestream",
"log.origin": {
"function": "github.com/elastic/beats/v7/filebeat/input/v2/compat.(*runner).Start.func1",
"file.name": "compat/compat.go",
"file.line": 141
},
"message": "Input 'filestream' starting",
"service.name": "filebeat",
"id": "XXXX-audit-logs",
"ecs.version": "1.6.0"
},
{
"log.level": "info",
"@timestamp": "2026-05-07T10:30:51.250Z",
"log.logger": "input.filestream.metric_registry",
"log.origin": {
"function": "github.com/elastic/beats/v7/libbeat/monitoring/inputmon.NewMetricsRegistry",
"file.name": "inputmon/input.go",
"file.line": 182
},
"message": "registering",
"service.name": "filebeat",
"id": "XXXX-audit-logs",
"registry_id": "XXXX-audit-logs",
"input_id": "XXXX-audit-logs",
"input_type": "filestream",
"ecs.version": "1.6.0"
},
{
"log.level": "warn",
"@timestamp": "2026-05-07T10:30:51.254Z",
"log.logger": "input.filestream.scanner",
"log.origin": {
"function": "github.com/elastic/beats/v7/filebeat/input/filestream.(*fileScanner).GetFiles",
"file.name": "filestream/fswatch.go",
"file.line": 530
},
"message": "1 file is too small to be ingested, files need to be at least 1024 in size for ingestion to start. To change this behaviour set 'prospector.scanner.fingerprint.length' and 'prospector.scanner.fingerprint.offset'. Enable debug logging to see all file names.",
"service.name": "filebeat",
"id": "XXXX-old-XXXX-audit-logs",
"ecs.version": "1.6.0"
},
{
"log.level": "info",
"@timestamp": "2026-05-07T10:30:51.304Z",
"log.logger": "crawler",
"log.origin": {
"function": "github.com/elastic/beats/v7/filebeat/beater.(*crawler).startInput",
"file.name": "beater/crawler.go",
"file.line": 148
},
"message": "Starting input (ID: 5676926086964425933)",
"service.name": "filebeat",
"ecs.version": "1.6.0"
},
{
"log.level": "info",
"@timestamp": "2026-05-07T10:30:51.304Z",
"log.logger": "input.filestream",
"log.origin": {
"function": "github.com/elastic/beats/v7/filebeat/input/v2/compat.(*runner).Start.func1",
"file.name": "compat/compat.go",
"file.line": 141
},
"message": "Input 'filestream' starting",
"service.name": "filebeat",
"id": "XXXX-old-XXXX-audit-logs",
"ecs.version": "1.6.0"
},
{
"log.level": "info",
"@timestamp": "2026-05-07T10:30:51.304Z",
"log.logger": "input.filestream.metric_registry",
"log.origin": {
"function": "github.com/elastic/beats/v7/libbeat/monitoring/inputmon.NewMetricsRegistry",
"file.name": "inputmon/input.go",
"file.line": 182
},
"message": "registering",
"service.name": "filebeat",
"id": "XXXX-old-XXXX-audit-logs",
"registry_id": "XXXX-old-XXXX-audit-logs",
"input_id": "XXXX-old-XXXX-audit-logs",
"input_type": "filestream",
"ecs.version": "1.6.0"
},
{
"log.level": "warn",
"@timestamp": "2026-05-07T10:30:51.309Z",
"log.logger": "input.filestream.scanner",
"log.origin": {
"function": "github.com/elastic/beats/v7/filebeat/input/filestream.(*fileScanner).GetFiles",
"file.name": "filestream/fswatch.go",
"file.line": 530
},
"message": "1 file is too small to be ingested, files need to be at least 1024 in size for ingestion to start. To change this behaviour set 'prospector.scanner.fingerprint.length' and 'prospector.scanner.fingerprint.offset'. Enable debug logging to see all file names.",
"service.name": "filebeat",
"id": "XXXX-old-XXXX-audit-logs",
"ecs.version": "1.6.0"
}
]
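As a side note, the scanner warning above about files smaller than 1024 bytes points at the fingerprint file-identity settings it names. A hedged sketch of how that could be tuned, using the option paths from the warning message itself (the values below are illustrative, not a recommendation):

```yaml
- type: filestream
  id: XXXX-old-XXXX-audit-logs
  prospector:
    scanner:
      fingerprint:
        length: 64  # bytes hashed to identify a file; default 1024, hence the warning
        offset: 0   # where in the file the hashed window starts
```

This is unrelated to the crash, but it would let Filebeat pick up the small files it currently skips.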
We migrated from Filebeat 8.17.2 to 9.3.0 right at the beginning of March, and the same configuration had been working properly for quite some time, so the only thing that could have changed is the number of source log files to parse (and we have quite a lot of them).
Once the error started to appear, it seems to always occur while processing the "old audit logs" (details in the logs above).
Filebeat config up to that point (again, some names masked with XXXX):
Filebeat.yml
output:
  logstash:
    enabled: true
    hosts: ["XXXX:XXXX"]
    timeout: 15
    ssl.certificate_authorities: ["XXXX"]
    ssl.certificate: "XXXX"
    ssl.key: "XXXX"
filebeat:
  inputs:
    - type: filestream
      id: XXXX-normal-logs
      paths:
        - D:\Apps\Logs\XXXX\*-json*.log
        - D:\Apps\Logs\XXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXXXXXXXXX\*-json*.log
        - D:\Apps\Logs\XXXXXXXXXXXXXXXXX\*-json*.log
      take_over.enabled: true
      fields_under_root: true
      prospector:
        scanner:
          check_interval: 45s
      close:
        on_state_change:
          inactive: 12h
          removed: true
      clean_removed: true
      parsers:
        - ndjson:
            target: ""
      fields:
        log_type: XXXX-json
        framework_version: XXXX
        audit: false
      processors:
        - timestamp:
            field: Timestamp
            layouts:
              - '2023-07-12T12:40:42.4001222+00:00'
              - '2006-01-02T15:04:05Z'
              - '2006-01-02T15:04:05.999Z'
              - '2006-01-02T15:04:05.999-07:00'
    - type: filestream
      enabled: true
      id: iis-logs
      tags: [iis]
      paths:
        - C:\inetpub\logs\LogFiles\*\*.log
      ignore_older: 48h
      fields_under_root: true
      fields:
        log_type: iis
        framework_version: XXXX
        Properties.Environment.Name: production
    - type: filestream
      id: XXXX-audit-logs
      paths:
        - D:\Apps\Logs\XXXX\audit\*\*\*.json
        - D:\Apps\Logs\XXXXX\audit\*\*\*.json
        - D:\Apps\Logs\XXXXXX\audit\*\*\*.json
        - D:\Apps\Logs\XXXXXXX\audit\*\*\*.json
        - D:\Apps\Logs\XXXXXXXX\audit\*\*\*.json
        - D:\Apps\Logs\XXXXXXXXX\audit\*\*\*.json
      take_over.enabled: true
      prospector:
        scanner:
          check_interval: 1m
      fields_under_root: true
      harvester_limit: 10
      ignore_older: 24h
      clean_inactive: 25h
      clean_removed: true
      parsers:
        - ndjson:
            target: ""
      close:
        reader:
          on_eof: true
          after_interval: 2m
        on_state_change:
          inactive: 1m
          removed: true
      fields:
        log_type: XXXX-audit
        framework_version: XXXX
        audit: true
      processors:
        - timestamp:
            field: StartDate
            layouts:
              - '2006-01-02T15:04:05Z'
              - '2006-01-02T15:04:05.999Z'
              - '2006-01-02T15:04:05.9999999Z'
            test:
              - '2023-09-11T08:24:47.0221107Z'
              - '2023-08-01T09:01:32.3303118Z'
    - type: filestream
      id: XXXX-old-XXXX-audit-logs
      paths:
        - D:\Apps\Logs\XXXX\*_audit-*.log
      clean_removed: true
      prospector:
        scanner:
          check_interval: 1m
      close:
        on_state_change:
          inactive: 12h
          removed: true
      fields_under_root: true
      parsers:
        - ndjson:
            target: ""
      fields:
        log_type: XXXX-audit
        framework_version: XXXX
        audit: true
        Properties.Environment.Name: production
Removing the "take_over" and "fields" sections didn't help; the only working workaround was converting the problematic filestream input to a log input, as below:
- type: log
  id: XXXX-old-XXXX-audit-logs
  paths:
    - D:\Apps\Logs\XXXX\*_audit-*.log
  allow_deprecated_use: true
  clean_removed: true
  scan_frequency: 1m
  close_inactive: 12h
  close_removed: true
  fields_under_root: true
  parsers:
    - ndjson:
        target: ""
  fields:
    log_type: XXXX-audit
    framework_version: XXXX
    audit: true
    Properties.Environment.Name: production
This worked, but on one VM the errors then started to appear for the filestream entries declared later in filebeat.yml. The funny part: those entries were not used at all, because they are just dummies / placeholders for other machines (we deploy one generic configuration per project group across many VMs; for this project of ~10 servers a single filebeat.yml is used everywhere, and Filebeat simply skips the paths that are not relevant on a given machine).
Please advise.
Grzegorz