I have been parsing logs with Logstash's grok filter, log by log, and recently one of my pipelines.yml files, which defines 100+ input files, has been crashing my Elasticsearch server with an out-of-memory error.
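For reference, each entry in that pipelines.yml looks roughly like the sketch below (the pipeline ID and path are placeholders, not my real ones; the workers/batch/queue settings are the standard Logstash knobs that bound how much event data sits in memory at once):

```yaml
# pipelines.yml -- one entry per pipeline; ID and path are placeholders
- pipeline.id: example-logs                       # hypothetical name
  path.config: "/etc/logstash/conf.d/example.conf"
  pipeline.workers: 2        # fewer workers -> fewer in-flight batches
  pipeline.batch.size: 125   # default; larger batches use more heap
  queue.type: persisted      # buffer events on disk instead of in RAM
```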
After reading this, my guess is that Elasticsearch was crashing because something wasn't being cleared out of Logstash's cache while it parsed (~25 GB worth of logs), and my system ran out of memory. I have not been running any Kibana queries at all, only parsing with Logstash. Could this be what was making Elasticsearch crash?
Because if not, my only other guess would be that my system simply allocates too little memory to Elasticsearch, which seems like a more expensive problem!
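If it does come down to memory allocation, my understanding is that the heap ceilings live in each product's jvm.options file, for both Elasticsearch and Logstash separately (the values below are illustrative, not my current settings):

```
# config/jvm.options (Elasticsearch) -- illustrative values
# Xms and Xmx should match; keep the heap well under half of system RAM
-Xms4g
-Xmx4g

# Logstash has its own config/jvm.options with the same -Xms/-Xmx flags
```

I'd rather not just throw RAM at it if the real problem is something accumulating in Logstash, hence the question.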