Filebeat error: too many cases

Hi Community,
My Filebeat is crashing with this error:

{"log.level":"info","@timestamp":"2022-11-30T14:39:55.579Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":185},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":1326843,"time":{"ms":5250}},"total":{"ticks":2116796,"time":{"ms":9891},"value":0},"user":{"ticks":789953,"time":{"ms":4641}}},"info":{"ephemeral_id":"76b8d6c6-b45a-45ca-b326-b1833073663e","uptime":{"ms":3069334},"version":"8.3.2"},"memstats":{"gc_next":4702023240,"memory_alloc":1842442584,"memory_total":81938981816,"rss":2966368256},"runtime":{"goroutines":151398}},"filebeat":{"events":{"active":19,"added":22,"done":3},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":1},"scans":1},"output":{"events":{"acked":3,"active":32,"batches":2,"total":32},"read":{"bytes":35},"write":{"bytes":18562}},"pipeline":{"clients":53276,"events":{"active":32,"published":23,"total":22},"queue":{"acked":3}}},"registrar":{"states":{"current":0}},"system":{"handles":{"open":1}}},"ecs.version":"1.6.0"}}
panic: reflect.Select: too many cases (max 65536)

goroutine 78 [running]:
reflect.Select({0xc0d3da0000?, 0x10001?, 0x14124?})
	/usr/local/go/src/reflect/value.go:2793 +0x79a
github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*Pipeline).runSignalPropagation(0xc000751fb8?)
	/go/src/github.com/elastic/beats/libbeat/publisher/pipeline/pipeline.go:329 +0x1d8
created by github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*Pipeline).registerSignalPropagation.func1
	/go/src/github.com/elastic/beats/libbeat/publisher/pipeline/pipeline.go:314 +0x96
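
For what it's worth, the panic message itself comes from Go's reflect.Select, which refuses to run with more than 65536 select cases. A minimal standalone Go sketch (not Filebeat's code, just an illustration of the runtime limit) that triggers the same panic:

package main

import "reflect"

func main() {
	// Build one receive case per channel, one more than reflect.Select allows.
	cases := make([]reflect.SelectCase, 65537)
	for i := range cases {
		cases[i] = reflect.SelectCase{
			Dir:  reflect.SelectRecv,
			Chan: reflect.ValueOf(make(chan struct{})),
		}
	}
	// Panics with: reflect.Select: too many cases (max 65536)
	reflect.Select(cases)
}

The metrics line above also shows libbeat.pipeline.clients at 53276, which looks suspiciously close to that 65536 ceiling.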

I am using Filebeat on a Windows machine to read data from 10-12 directories that hold around 6k files. The error appears once the number of files grows past a certain point, so my guess is that Filebeat has a limit on how many files it can open concurrently, but I am not sure that is really what's happening. Does anyone have a clue what this error means and how to avoid it? I've attached my configuration file for reference (a quick file-count check follows it below).

- type: filestream
  id: gateway
  harvester_limit: 65536
  close_inactive: 30s
  clean_inactive: 45s
  close_removed: true
  close_eof: true
  clean_removed: true
  paths:
    - 'C:\multifilelocation\*\*\*.csv'
  parsers:
    - multiline:
        pattern: ^Header
        negate: true
        match: after
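
For reference, a rough Go sketch to check how many files that glob matches at a given moment (same pattern as in the config above; adjust the path as needed):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Same glob as the paths entry in the Filebeat config above.
	pattern := `C:\multifilelocation\*\*\*.csv`

	matches, err := filepath.Glob(pattern)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d files currently match %s\n", len(matches), pattern)
}

A count anywhere near the 65536 ceiling from the panic would line up with the guess about a per-file limit.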

Thank you

Does the harvester_limit config make a difference?

Nope. As you can see, the configuration file I shared in the question already has harvester_limit set to 65536, and I am still getting this error.
