The documentation explains what the harvester_buffer_size setting of the log input does, but I don't see anything about why one might want to change it.
I noticed that a Filebeat 7.17 instance on a CentOS 7 server, one that ships a lot of data from multiple files to the Logstash layer of our logging system, was using tens of GB of memory and causing memory exhaustion on the host. harvester_buffer_size had been set to 1048576000 (roughly 1 GB, and since the buffer is allocated per harvester, that's per open file), but I don't know why. Commenting out that part of the config has greatly reduced the memory usage, and casual observation shows data still reaching our cluster at the same rate. Aside from wanting Filebeat to use more memory, why would one increase harvester_buffer_size above the default?
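
For context, the relevant part of our config looked roughly like this (the paths here are placeholders, not our real ones):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log   # placeholder path
    # The buffer is per harvester, i.e. per open file being read,
    # so with many files the total memory multiplies quickly.
    harvester_buffer_size: 1048576000   # ~1 GB; the default is 16384 (16 KiB)
```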