Error: Your application used more memory than the safety cap of 500M
I began encountering this error a few days ago after setting up Logstash 2.0.0 alongside Elasticsearch 2.0.0, both installed via RPM. The platform is RHEL 6.7.
I have found several threads here and on GitHub discussing this issue, but there does not seem to be a consensus.
Over the past few days I have been experimenting to rule out the possibility that my setup simply requires a larger memory cap, and I believe I have: even after growing LS_HEAP_SIZE to 2g, the crash still occurs, though it takes longer to appear. Furthermore, I previously ran a very similar setup on Logstash 1.5.x and Elasticsearch 1.7.x where the issue was not present. The only difference in the Logstash config between then and now is the input plugin: previously it was file{}, whereas now it is tcp{}. There is actually less data being ingested now than before, due to a separate issue with my syslog-ng configuration.
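For context, the input block is essentially the following (the port and type shown here are illustrative placeholders, not my exact values):

```
input {
  tcp {
    port => 5140       # placeholder port; syslog-ng forwards to this
    type => "syslog"   # illustrative type tag
  }
}
```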
Is there a known memory leak in the tcp{} input plugin for 2.0.0? If not, how can I begin to troubleshoot this more effectively?
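In the meantime, here is what I was planning to try next: capturing a heap dump when the cap is hit, so the dominant objects can be inspected afterwards. This is a sketch only; I believe the RPM's /etc/sysconfig/logstash supports these variables, but please correct me if that is wrong:

```
# /etc/sysconfig/logstash (RPM install) -- assumed variable names
LS_HEAP_SIZE="2g"

# Write a heap dump when the JVM runs out of memory, for later
# inspection (e.g. with jhat or VisualVM)
LS_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/logstash"
```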