As I understand it, Redis is the broker that feeds Logstash -- so, in theory, assuming that Logstash is running, it shouldn't be getting all that large. So why 50 GB? Any ideas on how to flush/squash/whatever that down? A restart simply has it climb back up to 50 GB again. Clearly, "I'm doing it wrong," and suggestions would be gratefully accepted.
You can use redis-cli to talk to Redis, explore what it's storing, and empty any lists that have grown unbounded. Is Logstash running? Is it actually fetching messages from Redis? Is Logstash's processing rate at least as high as the rate of events going into Redis?
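For example, a few redis-cli commands can show where the memory is going. This is a sketch against a local Redis on the default port; `logstash` is a hypothetical key name, so substitute whatever key your Logstash input is configured to read from:

```shell
# Overall memory picture:
redis-cli INFO memory | grep used_memory_human

# Enumerate keys without blocking the server (unlike KEYS * on a big dataset):
redis-cli --scan | head -20

# Check the type and length of the list Logstash consumes from
# ("logstash" is a placeholder key name):
redis-cli TYPE logstash
redis-cli LLEN logstash

# If a list turns out to be stale, empty it (destructive -- the events are gone):
redis-cli DEL logstash
```

If `LLEN` keeps climbing, events are arriving faster than Logstash drains them.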
Thanks for the pointers! I will definitely dig into redis-cli, but could you give me a pointer as to how to check Logstash's processing rate vs. Redis's rate?
You could, for example, use Logstash's metrics plugin to emit a per-minute rate (there's an example in its documentation). I haven't really used Redis, so I'm not sure how to measure the inbound rate directly (unless you can measure on the producing side, i.e. whatever's feeding Redis). You could of course do something as simple as calling redis-cli in a shell script loop to watch the length of the list that Logstash pulls from. With a bit of arithmetic you can then derive the inbound rate, since you know the outbound rate and the growth rate.
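As a rough sketch of that arithmetic: the inbound rate is the outbound (processing) rate plus the list's growth per second. The sampling loop below is commented out because it needs a live Redis; `logstash` is again a hypothetical key name, and the outbound rate would come from Logstash's metrics plugin:

```shell
# inbound = outbound + (growth of the list) / (sampling interval)
inbound_rate() {
  local outbound=$1 len_before=$2 len_after=$3 interval=$4
  echo $(( outbound + (len_after - len_before) / interval ))
}

# Sampling loop (requires redis-cli and a running Redis):
# while true; do
#   echo "$(date +%s) $(redis-cli LLEN logstash)"
#   sleep 60
# done

# Example: Logstash processes 500 events/s and the list grew by
# 6000 entries over a 60 s interval -> 600 events/s coming in.
inbound_rate 500 10000 16000 60   # prints 600
```

So in that example the producers are outpacing Logstash by about 100 events/s, which is exactly the kind of imbalance that makes Redis balloon.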