Large number of files causing IO Spike

Hello,

There are more than 300 input files that need to be monitored by Filebeat, and we are seeing IO spikes. We are not in a position to tweak harvester_limit or scan_frequency, as that would delay the detection of file changes.
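For context, here is a minimal sketch of the kind of input block involved, assuming the log input type; the paths and values are illustrative, not our actual config:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/app/*.log   # globs expanding to >300 files (illustrative path)
    # These are the knobs we cannot relax without delaying change detection:
    scan_frequency: 10s      # how often Filebeat checks for new/changed files
    harvester_limit: 0       # 0 = no cap on concurrent harvesters
```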

Please let me know if there is any other solution.

Thanks in Advance!

What does your yml look like? For obvious reasons, remove the server connection info.

IO spikes can have many causes... Filebeat is the obvious suspect, but let's start with some easy info. Is the underlying hardware able to support the added IO that the Beats agents generate? What OS are you using? Is it a Linux OS with a default disk write size that doesn't match the hardware underneath? If you're running an old SAN array that was designed to deliver only 500 IOPS, the odds of it coping are close to nonexistent, as that low an output would come with latency far too high to be usable.

For instance, for every 50 agents I add, I end up with roughly 3400 additional IOPS and 250~300 MB/s of added read/write on the underlying platform. I use CentOS to run Elastic, and the first time I had 100 hosts I crushed the host underneath. It turned out I had to reduce the disk write size from the 1.5k default down to 1k to match the host, and lo and behold, I still get spikes when I move data down to warm and shrink the index, but they have no noticeable effect on users. Note that the hardware I'm on can take well over 1 million IOPS and 9 GB/s of throughput, at least benchmark-wise. Real-world numbers are not close to that...
