I need some advice on how to smooth out the processing of input events in Logstash.
Here is how the server is currently processing events coming in:
As you can see, it fluctuates from 25 to 356. There are about 30 servers sending data (Metricbeat, Filebeat, etc.), but in the test shown in the screenshot, 99 percent of it is Metricbeat data. All the servers are configured to send data every 10 seconds, and looking over the client logs, none of them are generating errors.
Can someone explain this behavior? I would expect the rate to be fairly smooth, so why the spikes?
The logstash instance is configured as follows:
Cores => 3
RAM => 4GB
Workers => 9
pipeline.batch.delay => 5
pipeline.batch.size => 125
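For reference, here is a minimal sketch of how I understand these settings map into `logstash.yml` (assuming the worker/batch values above correspond to the standard `pipeline.*` options):

```yaml
# logstash.yml -- sketch of the pipeline settings listed above
pipeline.workers: 9        # worker threads pulling batches off the queue
                           # (default is one per CPU core; 9 on 3 cores)
pipeline.batch.size: 125   # max events a worker collects per batch (the default)
pipeline.batch.delay: 5    # ms to wait for a full batch before flushing a partial one
```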