Logstash not working on selected servers in production


(xyz_12) #1

The issue is with Logstash in PROD, and the same issue also occurred in a lower-level environment. We have gone through the logs for the affected server and noticed the exception below. Our thinking is that once the threshold reaches 1500000, the server has to be restarted. Kindly let us know: is this a good approach? If yes, we will go ahead with it; otherwise, please suggest an alternative solution.

{:timestamp=>"2017-06-22T05:40:45.135000+0000", :message=>"Redis key size has hit a congestion threshold 500000 suspending output for 5 seconds", :level=>:warn}

Could you please suggest a solution for this?


(Magnus Bäck) #2

Doesn't this indicate that Logstash is pushing data to Redis faster than the reading party is able to process it? If so the obvious solution is to speed up the processing.
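For reference, if the reading party is another Logstash instance consuming from Redis with the redis input plugin, its throughput can often be raised by reading in batches and with more reader threads. A minimal sketch, assuming a Redis list key; the host and key name here are illustrative, not taken from the thread:

```
input {
  redis {
    host        => "redis.example.com"  # hypothetical host
    data_type   => "list"
    key         => "logstash"           # hypothetical key name
    batch_count => 125                  # pull events in batches instead of one at a time
    threads     => 4                    # parallel reader threads for list keys
  }
}
```

Increasing the pipeline worker count on the consuming Logstash, or scaling out with additional consumer instances reading the same list key, are other common ways to drain the queue faster.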


(xyz_12) #3

Can I increase the threshold value to 1500000?


(Magnus Bäck) #4

Probably, but why do you think that'll solve the problem? The congestion threshold is there for a reason.

If you're filling a bathtub at a higher rate than you're draining it, switching to a larger bathtub will only give temporary relief. Your drain is still too slow.
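For context, the threshold in the warning corresponds to the `congestion_threshold` option of the redis output plugin, and the 5-second suspension to `congestion_interval`. A sketch of where these are set, using the values from the warning above; the host and key name are illustrative:

```
output {
  redis {
    host                 => "redis.example.com"  # hypothetical host
    data_type            => "list"
    key                  => "logstash"           # hypothetical key name
    congestion_threshold => 500000  # suspend output once the list grows past this
    congestion_interval  => 5       # seconds to pause before checking again
  }
}
```

Raising `congestion_threshold` only lets the Redis list grow larger before Logstash backs off; it does not make the consumer any faster.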


(xyz_12) #5

Could you please give a valid solution for Logstash?


(Christian Dahlqvist) #6

How have you determined that Logstash is the bottleneck? What throughput are you currently seeing and what throughput are your downstream systems, e.g. Elasticsearch, able to handle?
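One way to answer the throughput question is Logstash's own metrics filter, which emits periodic rate events. A sketch following the pattern from the metrics filter documentation; the tag name and flush interval are illustrative:

```
filter {
  metrics {
    meter          => "events"
    add_tag        => "metric"
    flush_interval => 60      # emit a rate event every 60 seconds
  }
}
output {
  if "metric" in [tags] {
    stdout {
      codec => line { format => "1m rate: %{[events][rate_1m]}" }
    }
  }
}
```

Comparing this event rate on the shipping side against what the downstream systems are actually indexing shows where the bottleneck is.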


(system) #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.