The issue is related to Logstash in PROD, and the same issue also occurred in a lower-level environment. We have gone through the logs for the affected server and noticed the exception below. Our thinking is that once the threshold reaches 1500000, the server has to be restarted. Kindly let us know whether this is a good approach. If yes, we can go ahead with that; otherwise please suggest an alternative solution.
{:timestamp=>"2017-06-22T05:40:45.135000+0000", :message=>"Redis key size has hit a congestion threshold 500000 suspending output for 5 seconds", :level=>:warn}
Could you please suggest a solution for this?
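For reference, that warning comes from the congestion check in the Redis output plugin. A minimal sketch of an output block with the relevant settings (the host and the key name `logstash` are assumptions, not taken from your configuration) might look like this:

```
output {
  redis {
    host => "redis.example.com"   # assumed Redis host
    data_type => "list"
    key => "logstash"             # assumed list key that the indexer reads from
    # Stop pushing and re-check every 5 seconds whenever the list holds more
    # than 500000 entries; this is what produces the warning quoted above.
    congestion_threshold => 500000
    congestion_interval => 5
  }
}
```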
Doesn't this indicate that Logstash is pushing data to Redis faster than the reading party is able to process it? If so, the obvious solution is to speed up the processing.
Probably, but why do you think that'll solve the problem? The congestion threshold is there for a reason.
If you're filling a bathtub at a higher rate than you're draining it, switching to a larger bathtub will only give temporary relief. Your drain is still too slow.
How have you determined that Logstash is the bottleneck? What throughput are you currently seeing and what throughput are your downstream systems, e.g. Elasticsearch, able to handle?
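One way to see which side is the bottleneck is to watch the length of the Redis list over time: if it only ever grows, the consumers (the indexing Logstash instance and Elasticsearch) are too slow; if it drains between bursts, the threshold is merely smoothing out peaks. A rough sketch, assuming the list key is `logstash` (adjust the host and key to your setup):

```
# Sample the backlog every 10 seconds; a steadily increasing number means
# the consumers are not keeping up with what Logstash is pushing.
while true; do
  echo "$(date -u +%FT%TZ) $(redis-cli -h redis.example.com llen logstash)"
  sleep 10
done
```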