Logstash causing JVM to crash when kept running for long intervals

We are running Logstash (2.3.2) on a few machines running Windows 7.
JDK version: 1.8.0_31
These machines also run other Java processes at random intervals (we use Logstash to collect logs from those processes).
Lately we have been facing JVM crashes, and in the dump files I can see an error like "The thread used up its stack".
The list of DLLs loaded at that point in time includes the following:

jffi-1.2.dll E:\logstash-2.3.2\vendor\jruby\lib\jni\x86_64-Windows\jffi-1.2.dll 0.0.0.0

The JVM crashes normally occur after Logstash has been running for a while (around 2+ hours), and they cause all the Java processes to terminate.

The strange thing is that these crashes happen on all the machines running Logstash at almost the same time.

Before we deployed Logstash on those machines, we did not face any such JVM crashes.
My current understanding is that one of the threads is overflowing its stack, which crashes the JVM.
But each log line has only around 15 fields, and those fields hold small text and numeric values (on average each log line is around 125 characters).

I am using a Logstash batch size of 1500, and each machine has 4 cores.
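For reference, this is roughly how I am launching Logstash (a sketch: the install path matches the DLL path above, but the config file name is a placeholder, and I am assuming the 2.3.x `-b`/`--pipeline-batch-size` flag here):

```shell
:: Sketch of the launch command (Windows batch).
:: my-pipeline.conf is a placeholder name; -b sets the pipeline
:: batch size (1500), and the machine itself has 4 cores.
set LS_HOME=E:\logstash-2.3.2
"%LS_HOME%\bin\logstash.bat" agent -f my-pipeline.conf -b 1500
```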

I have tried raising Xss to 4 MB (from the current value of 2048k), but I am not sure that is the correct way to handle this.
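For context, this is how I have been raising the stack size, via the LS_JAVA_OPTS environment variable set before launch (a sketch: I am assuming the Logstash startup scripts append LS_JAVA_OPTS to the JVM options, and my-pipeline.conf is again a placeholder):

```shell
:: Sketch: raising the JVM thread stack size from the default
:: 2048k to 4m before starting Logstash. Assumes LS_JAVA_OPTS is
:: picked up by the startup scripts in bin\.
set LS_JAVA_OPTS=-Xss4m
E:\logstash-2.3.2\bin\logstash.bat agent -f my-pipeline.conf
```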

I would appreciate any help with this.