Logstash 2.2.4 running out of memory

I'm trying to run logstash 2.2.4 on a number of Amazon instances. On one set of instances (t2.medium, plus two t2.small instances) logstash seems to run correctly. On the other (all t2.small instances), logstash starts up, uses very large amounts of CPU, and then crashes with out-of-memory errors.

The instances run the same version of Java and are otherwise configured identically.
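I compared them with, among other things:

java -version
free -m

and found no differences.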

To test logstash, I set up the following minimal configuration file, which I called 'logstash-simple.cfg':

input {
  stdin { }
}
output {
  stdout { codec => rubydebug }
}

I then ran it with:

/opt/logstash/bin/logstash -f logstash-simple.cfg
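In case heap size is a factor: my understanding is that the Logstash 2.x startup script reads the LS_HEAP_SIZE environment variable, so the same test can be run with a larger heap like this (1g is just an example value):

LS_HEAP_SIZE=1g /opt/logstash/bin/logstash -f logstash-simple.cfg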

On the 'good' instances, this runs without problems. On the 'bad' instances, it starts up, writes:

Settings: Default pipeline workers: 1
Logstash startup completed

and then, after a short period, dies with an out-of-memory error.
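To get more detail on the crash, I believe extra JVM flags can be passed through LS_JAVA_OPTS, e.g. to request a heap dump on OOM, and that --debug turns on verbose logging; something like:

LS_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp" /opt/logstash/bin/logstash -f logstash-simple.cfg --debug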

Can anyone suggest why logstash 2.2.4 works on one set of machines and not the other? It doesn't appear to be a question of instance memory, because it works fine on one set of t2.small instances but not on the other. Both sets run the same Java version (1.7.0_101, OpenJDK Runtime Environment). It's not a configuration file issue, because both run the same, absolutely minimal configuration. As far as I can tell, the only difference is that the problematic instances are in California and the working ones are in Virginia. Does logstash just not like being on the west coast or something?