OOM error with Logstash S3 input over a large data set

I have Elasticsearch and Logstash running on an m3.xlarge.
ES 1.5.2
Logstash 1.5.0-rc3
ArchLinux

I am trying to index three months of log data into ELK, but after some time the kernel starts killing the processes because of an OOM error: http://pastie.org/10155132
I looked at the memory metrics; usage was constant at around 70% (with 50% mlocked by ES).

A few minutes after Logstash starts, the OOM killer takes down ES.

I noticed that /tmp/logstash has grown to 5.5 GB. Logstash ran fine for a while when I first started indexing.
Is Logstash trying to bulk index all of these files at once?
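
For reference, the pipeline is just an s3 input feeding an elasticsearch output, roughly like the sketch below (bucket, prefix, and host are placeholders, not my real values):

    input {
      s3 {
        bucket => "my-log-bucket"   # placeholder
        prefix => "logs/"           # placeholder
        region => "us-east-1"
      }
    }
    output {
      elasticsearch {
        host => "localhost"         # placeholder
        protocol => "http"
      }
    }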

How do I avoid this? Can I throttle it somehow? If I change the temporary location, will that help?
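
If I'm reading the s3 input docs right, temporary_directory controls where the downloaded objects land before they are processed, and the elasticsearch output has a flush_size setting for the bulk batches. Something along these lines is what I had in mind, but the values are guesses on my part:

    input {
      s3 {
        bucket => "my-log-bucket"                     # placeholder
        temporary_directory => "/data/logstash-tmp"   # move off /tmp
        interval => 300                               # poll S3 less often
      }
    }
    output {
      elasticsearch {
        host => "localhost"
        protocol => "http"
        flush_size => 500                             # smaller bulk batches
      }
    }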

Thanks.