Logstash running out of memory, tcp{} input

Error: Your application used more memory than the safety cap of 500M

I began encountering this crash a few days ago while setting up Logstash 2.0.0 alongside Elasticsearch 2.0.0, both installed via RPM. The platform is RHEL 6.7.

I have found several threads here and on GitHub discussing this issue, but there does not seem to be a consensus.

Over the past few days I have been experimenting to rule out the possibility that my implementation simply requires a larger memory cap, and I believe I have done so: I grew LS_HEAP_SIZE to 2g and the crash still occurs, though it takes longer to manifest. Furthermore, I had a very similar setup previously, using Logstash 1.5.x and Elasticsearch 1.7.x, where the issue was not present. The only difference in the Logstash config between then and now is the input plugin: previously it was file{}, whereas now it is tcp{}. There is actually less data being ingested now than before, because of a separate issue with my syslog-ng configuration.
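
For reference, on an RPM install the heap cap is raised via LS_HEAP_SIZE in /etc/sysconfig/logstash, and below is a minimal sketch of the kind of tcp{} input I am running (the port and type shown are illustrative, not my exact values):

    # /etc/sysconfig/logstash
    LS_HEAP_SIZE="2g"

    input {
      tcp {
        port => 5140        # illustrative port the log senders connect to
        type => "syslog"    # illustrative type tag for downstream filters
      }
    }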

Is there a known memory leak in the tcp{} input plugin for 2.0.0? If not, how can I begin to troubleshoot this more effectively?

Update:

I have modified my configuration to read from a file{} input, and Logstash has now been running smoothly for several hours with the default LS_HEAP_SIZE (500m, I believe). I have to conclude that there is some sort of memory leak when using the tcp{} input plugin in version 2.0.0.
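
For completeness, a minimal sketch of the file{} input I switched to (the path here is illustrative, not my actual one):

    input {
      file {
        path => "/var/log/remote/incoming.log"   # illustrative path; syslog-ng writes here
        start_position => "beginning"            # pick up existing content on first run
      }
    }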

My workaround is to stand up syslog-ng on the machine receiving log data over TCP and have it write that information out to a file first, from which Logstash then picks it up. This should be acceptable for my use case, but it is obviously less efficient, and it is only necessary because of the memory issue with the tcp{} input in Logstash.
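
In case it helps anyone else, here is a rough sketch of the syslog-ng side of that workaround (the source name, port, and file path are illustrative, not my exact configuration):

    source s_net {
      tcp(ip(0.0.0.0) port(5140));              # listen for the incoming TCP log stream
    };

    destination d_file {
      file("/var/log/remote/incoming.log");     # the file Logstash's file{} input tails
    };

    log { source(s_net); destination(d_file); };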