Hi guys,
I have a specific situation where I want to index ~8000 files from a network share. Those are IIS logs that are rotated regularly.
Is this something that one 2.0.0 agent on Linux can handle, or is it maybe smarter to split the workload into multiple separate agents?
Some of these logs only need to be indexed once (because they are already-rotated files), while others will have to be indexed in real time (new entries will keep flowing into them).
I've tried a single file {} input, but after some time processing would just halt.
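For what it's worth, one thing that might help is splitting the two classes of files into separate file {} inputs, so the rotated files are read once from the beginning while the live files are tailed. A minimal sketch (the paths and type names here are made up for illustration, assuming archived and active logs can be separated by directory or glob):

```
input {
  # Live IIS logs: tailed continuously from the end (default behavior)
  file {
    path => "/mnt/iislogs/current/*.log"
    type => "iis-live"
  }
  # Rotated IIS logs: read once from the start of each file
  file {
    path => "/mnt/iislogs/archive/*.log"
    start_position => "beginning"
    type => "iis-archive"
  }
}
```

That at least keeps the one-shot backlog from competing with the real-time tailing in a single input, and the type field lets you tell the two streams apart downstream.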
Do you guys have experience with such a high number of files?