Memory issue with Logstash s3 input


(Pradeep Reddy) #1

I have Elasticsearch, Logstash running on a m3.xlarge.
ES 1.5.2
Logstash 1.5.0-rc3
ArchLinux

I am trying to index 3 months of log data into ELK, but after some time I
see that the kernel is killing the processes with an OOM error.
http://pastie.org/10155132
I looked at the memory metrics; usage was steady at around 70% (with 50%
mlocked by ES).

A few minutes after starting Logstash, the OOM killer kills ES.

I noticed that /tmp/logstash is 5.5 GB. Logstash ran fine for some
time when I first started indexing.
Is Logstash trying to bulk index all of these files?

How do I avoid this? Can I throttle this somehow? If I change the
temporary location, will that help?
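
One way to relieve the /tmp pressure is to point the s3 input at a temporary directory on a volume with more headroom; the plugin's temporary_directory option controls where downloaded objects are staged. A minimal sketch (the bucket name, region, and paths below are placeholders, not taken from this thread):

```
input {
  s3 {
    bucket              => "my-log-bucket"      # placeholder bucket name
    region              => "us-east-1"          # placeholder region
    # Stage downloaded S3 objects somewhere roomier than /tmp
    temporary_directory => "/mnt/logstash-tmp"
    # Remember which objects were already processed across restarts
    sincedb_path        => "/var/lib/logstash/.sincedb_s3"
  }
}
```

Moving the temporary directory only relieves disk pressure, though; if processes are being OOM-killed, memory use itself has to come down, e.g. by shrinking the Logstash and ES heaps or running fewer filter workers (`logstash -w 1`).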

Thanks.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/f25c3ec4-8343-4bc9-ba73-e37eb16413f7%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


(Magnus Bäck) #2

On Tuesday, May 05, 2015 at 14:30 CEST,
Pradeep Reddy pradeepreddy.manu.iitkgp@gmail.com wrote:

I have Elasticsearch, Logstash running on a m3.xlarge.
ES 1.5.2
Logstash 1.5.0-rc3
ArchLinux
I am trying to index 3 months of log data into ELK, but after some
time I see that the kernel is killing the processes with an OOM error.
http://pastie.org/10155132
I looked at the memory metrics; usage was steady at around 70% (with
50% mlocked by ES).
A few minutes after starting Logstash, the OOM killer kills ES.

Please ask Logstash questions on the logstash-users list.
Or, even better, use the recently announced discussion
forum at https://discuss.elastic.co. See
https://groups.google.com/d/topic/elasticsearch/IsYD2ScoE_Y/discussion.

[...]

--
Magnus Bäck | Software Engineer, Development Tools
magnus.back@sonymobile.com | Sony Mobile Communications



(Pradeep Reddy) #3

Thanks, I didn't know there was a separate group. I'll post there.

On Tuesday, May 5, 2015 at 6:00:55 PM UTC+5:30, Pradeep Reddy wrote:

I have Elasticsearch, Logstash running on a m3.xlarge.
ES 1.5.2
Logstash 1.5.0-rc3
ArchLinux

I am trying to index 3 months of log data into ELK, but after some time I see
that the kernel is killing the processes with an OOM error.
http://pastie.org/10155132
I looked at the memory metrics; usage was steady at around 70% (with 50%
mlocked by ES).

A few minutes after starting Logstash, the OOM killer kills ES.

I noticed that /tmp/logstash is 5.5 GB. Logstash ran fine for some
time when I first started indexing.
Is Logstash trying to bulk index all of these files?

How do I avoid this? Can I throttle this somehow? If I change the
temporary location, will that help?

Thanks.



(system) #4