Question about out of memory


(Jongmin Kim) #1

I'm handling about 300,000,000 log entries with Elasticsearch and Kibana.

I'm using an AWS instance with 8 GB of memory, with the ES heap set to -Xms4g -Xmx4g.

I often get an OutOfMemoryError (Java heap space).

If I use Elasticsearch together with Hadoop HDFS, pre-aggregating the data with a MapReduce job, will it
help with memory control?

I haven't tried it yet; I just want to know whether it's worth trying.

Thank you.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/1504d530-9b78-41aa-8280-b5c9bdb04b92%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


(Binh Ly) #2

In general, faceting, filtering, sorting, and some scripting can consume a
lot of memory, and Kibana usually issues a lot of terms and histogram facets,
as well as sorts. You will likely need an instance with more memory or,
failing that, to distribute the indexes/shards across more instances to
reduce the memory requirements per instance/node.
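As a hedged sketch of the scale-out option (the cluster name and counts below are illustrative, not taken from this thread), the relevant elasticsearch.yml settings might look like:

```yaml
# elasticsearch.yml on every node; nodes sharing the same
# cluster.name join one cluster, and shards are balanced
# across the nodes automatically
cluster.name: log-cluster

# defaults applied to newly created indices
index.number_of_shards: 5      # split each new index into 5 shards
index.number_of_replicas: 1    # keep one extra copy of each shard
```

With two such nodes, each holds roughly half of the primary shards, so the per-node memory pressure from facets and sorting drops accordingly.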



(Jongmin Kim) #3

I changed elasticsearch.yml:

  • index.number_of_shards: 10
    Does increasing the number of shards help?
    As I understand the ES architecture, a shard corresponds to a Lucene thread.

For now, I'm more worried about memory than about performance or speed.
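One general point worth noting about that setting (each shard is a full Lucene index rather than a thread, so adding shards on a single node tends to increase memory use rather than reduce it): index.number_of_shards in elasticsearch.yml only applies to indices created after the change, and the shard count is fixed at index creation. A hedged sketch of setting it per index at creation time instead (the index name is illustrative):

```yaml
# settings for one index, e.g. supplied when creating logs-2014.02,
# overriding the elasticsearch.yml defaults for this index only
index:
  number_of_shards: 10    # fixed at creation; cannot be changed later
  number_of_replicas: 1   # can be adjusted later
```

More shards only reduce per-node memory once there are additional nodes to spread them over.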

On Monday, February 10, 2014 at 11:36:58 PM UTC+9, Binh Ly wrote:

In general, faceting, filtering, sorting, and some scripting can consume a
lot of memory. Kibana usually does a lot of term and histogram facets, as
well as sorting. It is likely that you'll need an instance with more
memory, or if not, you need to distribute the indexes/shards around to more
instances to reduce the memory requirements per instance/node.



(Jörg Prante) #4

Add more nodes.

Jörg


