Hi!
I have noticed I'm losing data in Logstash: when I send 500 packages from my application, Logstash only sends about 300 of them on.
I included a counter in the packages sent and realized I'm losing data after roughly 300 UDP packages received. What should I do to solve this? I have thought about changing the SizedQueue in Logstash, or using two Logstash instances with a Redis buffer in between.
What is the optimal solution for not losing data?
I have also added a file output to the Logstash configuration, alongside the elasticsearch output, in order to count how many packages Logstash is processing. I'm receiving more data in Elasticsearch than in the output file. How is that possible? Maybe it's a delay problem with the file output.
What do you think?
If Logstash is not able to process requests fast enough, it will apply back pressure, which with the inputs you have specified (TCP and UDP based) can lead to data loss. In order to avoid this, a buffering mechanism like Redis is often introduced: you would have one Logstash instance that is responsible for capturing data and enqueueing it in Redis as quickly as possible. This instance should do minimal processing in order to be as fast as possible. You would then have one or more separate Logstash instances that read off the Redis queue, do all the processing, and send the data to the outputs.
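As a rough illustration, the two configurations could look something like the sketch below. This is only a minimal example under assumptions: UDP on port 5000, a Redis instance on localhost, and a Redis list key named "logstash" are all placeholders you'd adapt to your setup.

```
# Shipper instance: capture packets and enqueue them in Redis as fast as possible.
# No filters here, so this instance does minimal work.
input {
  udp {
    port => 5000   # assumed port, use whatever your application sends to
  }
}
output {
  redis {
    host      => "127.0.0.1"   # assumed local Redis instance
    data_type => "list"
    key       => "logstash"    # assumed queue name
  }
}
```

```
# Indexer instance(s): dequeue from Redis, do the heavy processing, and index.
input {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash"    # must match the shipper's key
  }
}
filter {
  # your existing filters go here
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # assumed local Elasticsearch
  }
}
```

Because the indexer side pulls from the Redis list at its own pace, a processing slowdown fills the queue instead of dropping UDP packets, and you can add more indexer instances reading the same key if one can't keep up.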
There is a webinar recording available that discusses this in greater detail.