I am using Logstash to collect the logs from my service. The volume of data is
large (about 20GB/day), and I am afraid that some of it will be dropped at peak
time.
However, I am curious: under what conditions will Logstash actually exceed its
queue capacity and start dropping messages?
I've done some experiments, and the results show that Logstash can process all
the data without any loss, e.g., local file (a 20GB text file) --> Logstash -->
local file, and netcat --> Logstash --> local file.
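For reference, the file-to-file experiment above can be reproduced with a minimal pipeline like the following (the paths are placeholders, not the ones I actually used):

```
# Hypothetical config for the "local file --> Logstash --> local file" test.
input {
  file {
    path => "/tmp/biglog.txt"          # placeholder: the 20GB source file
    start_position => "beginning"      # read the file from the start
    sincedb_path => "/dev/null"        # don't persist read position between runs
  }
}
output {
  file {
    path => "/tmp/out.log"             # placeholder: destination file
  }
}
```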
Can someone give me a concrete example (or scenario, if any) in which Logstash
eventually drops messages? That would help me better understand why we need a
buffer in front of it.