Elasticsearch JVM memory not released after running facet browsing


(RLeyba) #1

Hi,

I am prototyping Elasticsearch (with Kibana and Logstash) to index our
massive syslog data, which comes to about 1.3 billion records per month.
For the moment I have a two-node cluster with 48 GB RAM on Node A and
32 GB RAM on Node B. I have enabled doc_values on the two fields I most
often run facets on: hostname and message. My observation is that if I do
terms faceting on these two fields, the cluster returns results and the
JVM heap indicator in my kopf plugin stays stable... but if I facet on
non-doc_values fields, the JVM heap on both nodes jumps to red and STAYS
red, even after I stop running queries. In fact, the next day the JVM
heap on both nodes is still red, and I have to restart each node one by
one to bring them back to their original levels.
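For context, a doc_values mapping on a string field in the 1.x mapping API looks roughly like this (a sketch only; the index name "syslog", type name "logs", and the decision to leave the fields not_analyzed are illustrative assumptions, not my exact config):

```shell
# Sketch: enable doc_values on not_analyzed string fields (ES 1.x mapping API).
# In 1.x, doc_values can only be enabled on not_analyzed string fields
# (and on numeric/date fields). Index and type names are illustrative.
curl -XPUT 'localhost:9200/syslog' -d '{
  "mappings": {
    "logs": {
      "properties": {
        "hostname": { "type": "string", "index": "not_analyzed", "doc_values": true },
        "message":  { "type": "string", "index": "not_analyzed", "doc_values": true }
      }
    }
  }
}'
```

With this mapping, faceting on these fields reads column data from disk instead of loading fielddata onto the heap.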

How do I "recover" the memory from the JVMs and have them release what
appears to be stuck threads?
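For anyone hitting the same thing: my understanding is the heap is being held by the fielddata cache, which faceting on non-doc_values fields fills, and which can be dropped without a restart (a sketch against the 1.x REST API; the 40% figure below is illustrative, not a recommendation):

```shell
# Drop the fielddata cache on all indices (ES 1.x clear-cache API).
curl -XPOST 'localhost:9200/_cache/clear?fielddata=true'

# Inspect how much heap fielddata is holding, per node and per field.
curl 'localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'

# To keep it bounded going forward, set a cap in elasticsearch.yml on each
# node and restart (40% of heap is an illustrative value):
#   indices.fielddata.cache.size: 40%
```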

Thanks very much.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/45a1f839-9de8-4f3a-aa08-433a78095dd2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


(David Severski) #2

Did you ever find an answer to this? I also see queries from Kibana
pushing my heap status up to red (per kopf), where it stays. I'd expect
GC to eventually purge whatever caching I assume ES is performing, but
that doesn't seem to be occurring.
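As far as I understand (worth verifying against the 1.x docs), GC can't help here: the fielddata cache is unbounded by default and holds strong references to its entries, so they are live objects as far as the JVM is concerned. A fielddata circuit breaker can at least refuse loads that would blow the heap (a sketch; the 60% limit is illustrative, and I believe this setting was named indices.fielddata.breaker.limit before 1.4):

```shell
# Sketch: cap per-request fielddata loading (ES 1.x dynamic cluster setting).
# Requests that would push fielddata past the limit fail instead of OOMing.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent": { "indices.breaker.fielddata.limit": "60%" }
}'
```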

David

On Monday, May 26, 2014 3:29:31 AM UTC-7, RLeyba wrote:



(system) #3