Java OOM error on a 64G - 8 Core CPU server

Hello, Gurus

I know this must be an old question. I have an ES instance (Kibana + ES + Fluentd as a log-analysis system, indexing nginx access logs) running on a 64G, 8-core VM with the DEFAULT configuration. We create an index every day, around 2G in size. After a week, ES stopped indexing after I fired a query. The error log said Lucene hit an OOM :frowning: It seemed all 5 shards were marked as dead.

After I restarted ES this morning, it works well again. Currently the JVM status is:

"jvm":{"timestamp":1379324673375,"uptime":"6 hours, 33 minutes, 59 seconds
and 453 milliseconds","uptime_in_millis":23639453,
"mem":{*
"heap_used":"478.2mb","heap_used_in_bytes":501468512,"heap_committed":"759.5mb","heap_committed_in_bytes":796393472,"non_heap_used":"41.5mb","non_heap_used_in_bytes":43601016,"non_heap_committed":"63.8mb","non_heap_committed_in_bytes":66994176,
*
"pools":{"Code
Cache":{"used":"8.4mb","used_in_bytes":8910400,"max":"48mb","max_in_bytes":50331648,"peak_used":"8.5mb","peak_used_in_bytes":8931904,"peak_max":"48mb","peak_max_in_bytes":50331648},"Par
Eden
Space":{"used":"20mb","used_in_bytes":21053344,"max":"273mb","max_in_bytes":286326784,"peak_used":"68.3mb","peak_used_in_bytes":71630848,"peak_max":"273mb","peak_max_in_bytes":286326784},"Par
Survivor
Space":{"used":"5.1mb","used_in_bytes":5382312,"max":"34.1mb","max_in_bytes":35782656,"peak_used":"8.5mb","peak_used_in_bytes":8912896,"peak_max":"34.1mb","peak_max_in_bytes":35782656},"CMS
Old
Gen":{"used":"453mb","used_in_bytes":475032856,"max":"682.6mb","max_in_bytes":715849728,"peak_used":"513.1mb","peak_used_in_bytes":538113400,"peak_max":"682.6mb","peak_max_in_bytes":715849728},"CMS
Perm
Gen":{"used":"33mb","used_in_bytes":34690616,"max":"82mb","max_in_bytes":85983232,"peak_used":"33mb","peak_used_in_bytes":34690616,"peak_max":"82mb","peak_max_in_bytes":85983232}}},
"threads":{"count":200,"peak_count":203},
"gc":{"collection_count":3796,"collection_time":"16 seconds and 862
milliseconds","collection_time_in_millis":16862,"collectors":{"ParNew":{"collection_count":3755,"collection_time":"15
seconds and 679
milliseconds","collection_time_in_millis":15679},"ConcurrentMarkSweep":{"collection_count":41,"collection_time":"1
second and 183 milliseconds","collection_time_in_millis":1183}}},
"buffer_pools":{"direct":{"count":242,"used":"46.6mb","used_in_bytes":48868573,"total_capacity":"46.6mb","total_capacity_in_bytes":48868573},"mapped":{"count":0,"used":"0b","used_in_bytes":0,"total_capacity":"0b","total_capacity_in_bytes":0}}}

My question is: would giving the JVM a 32G heap be a good way to solve this issue? Is it simply a matter of modifying "elasticsearch.yml" and the heap size option?

JAVA LOG

[2013-09-15 10:34:12,028][DEBUG][action.search.type ] [Atom Bob]
[logstash-2013.09.14][2], node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED]:
Failed to execute [org.elasticsearch.action.search.SearchRequest@5a8f717c]

java.lang.OutOfMemoryError: Java heap space

    at org.apache.lucene.util.FixedBitSet.<init>(FixedBitSet.java:55)
    at org.elasticsearch.common.lucene.search.XBooleanFilter.getDocIdSet(XBooleanFilter.java:155)
    at org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(ApplyAcceptedDocsFilter.java:45)
    at org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:128)
    at org.apache.lucene.search.QueryWrapperFilter$1.iterator(QueryWrapperFilter.java:60)
    at org.elasticsearch.common.lucene.docset.DocIdSets.toSafeBits(DocIdSets.java:129)
    at org.elasticsearch.common.lucene.search.FilteredCollector.setNextReader(FilteredCollector.java:69)
    at org.elasticsearch.common.lucene.MultiCollector.setNextReader(MultiCollector.java:68)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:615)
    at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:162)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:488)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:444)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
    at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:134)
    at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:243)
    at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:141)
    at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:212)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:199)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:185)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
    

[2013-09-15 10:34:12,028][DEBUG][action.search.type ] [Atom Bob]
[logstash-2013.09.14][2], node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED]:
Failed to execute [org.elasticsearch.action.search.SearchRequest@58672471]

java.lang.OutOfMemoryError: Java heap space

[2013-09-15 10:34:09,061][WARN ][index.engine.robin ] [Atom Bob]
[logstash-2013.09.15][0] failed engine

java.lang.OutOfMemoryError: Java heap space

    at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:51)
    at org.elasticsearch.common.io.stream.BytesStreamOutput.<init>(BytesStreamOutput.java:47)
    at org.elasticsearch.index.translog.fs.FsTranslog.add(FsTranslog.java:333)
    at org.elasticsearch.index.engine.robin.RobinEngine.innerCreate(RobinEngine.java:473)
    at org.elasticsearch.index.engine.robin.RobinEngine.create(RobinEngine.java:365)
    at org.elasticsearch.index.shard.service.InternalIndexShard.create(InternalIndexShard.java:319)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:402)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:521)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
    

[2013-09-15 10:35:05,835][WARN ][http.netty ] [Atom Bob]
Caught exception while handling client http traffic, closing connection
[id: 0x0e15a6b0, /127.0.0.1:15493 => /127.0.0.1:9200]

java.lang.OutOfMemoryError: Java heap space

[2013-09-15 10:35:05,832][DEBUG][action.bulk ] [Atom Bob]
[logstash-2013.09.15][3] failed to execute bulk item (index) index
{[logstash-2013.09.15][fluentd][LGxqK8MfS66N5YvrmlOK4w],
source[{"test":"Stack
trace:","_key":"error.web5","@timestamp":"2013-09-15T10:33:28+08:00"}]}

org.elasticsearch.index.engine.CreateFailedEngineException:
[logstash-2013.09.15][3] Create failed for [fluentd#LGxqK8MfS66N5YvrmlOK4w]

    at org.elasticsearch.index.engine.robin.RobinEngine.create(RobinEngine.java:378)
    at org.elasticsearch.index.shard.service.InternalIndexShard.create(InternalIndexShard.java:319)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:402)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:521)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.lucene.store.AlreadyClosedException: this ReferenceManager is closed
    at org.apache.lucene.search.ReferenceManager.acquire(ReferenceManager.java:97)
    at org.elasticsearch.index.engine.robin.RobinEngine.searcher(RobinEngine.java:739)
    at org.elasticsearch.index.engine.robin.RobinEngine.loadCurrentVersionFromIndex(RobinEngine.java:1319)
    at org.elasticsearch.index.engine.robin.RobinEngine.innerCreate(RobinEngine.java:391)
    at org.elasticsearch.index.engine.robin.RobinEngine.create(RobinEngine.java:365)
    ... 8 more
    

......

Caused by: java.lang.OutOfMemoryError: Java heap space

[2013-09-15 11:03:52,847][WARN ][cluster.action.shard ] [Atom Bob]
sending failed shard for [logstash-2013.09.15][2],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Java heap space]]]

[2013-09-15 11:03:52,848][WARN ][cluster.action.shard ] [Atom Bob]
received shard failed for [logstash-2013.09.15][2],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Java heap space]]]

[2013-09-15 11:03:52,850][WARN ][cluster.action.shard ] [Atom Bob]
sending failed shard for [logstash-2013.09.15][0],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Java heap space]]]

[2013-09-15 11:03:52,850][WARN ][cluster.action.shard ] [Atom Bob]
received shard failed for [logstash-2013.09.15][0],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Java heap space]]]

[2013-09-15 11:03:53,387][WARN ][cluster.action.shard ] [Atom Bob]
sending failed shard for [logstash-2013.09.15][4],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Java heap space]]]

[2013-09-15 11:03:53,387][WARN ][cluster.action.shard ] [Atom Bob]
received shard failed for [logstash-2013.09.15][4],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Java heap space]]]

[2013-09-15 11:03:53,389][WARN ][cluster.action.shard ] [Atom Bob]
sending failed shard for [logstash-2013.09.15][1],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Java heap space]]]

[2013-09-15 11:03:53,389][WARN ][cluster.action.shard ] [Atom Bob]
received shard failed for [logstash-2013.09.15][1],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [engine failure,
message [OutOfMemoryError[Java heap space]]]

[2013-09-15 11:03:53,398][WARN ][indices.cluster ] [Atom Bob]
[logstash-2013.09.15][0] master [[Atom
Bob][uT_403fVTT-oI524d7hK1Q][inet[/10.50.1.79:9300]]] marked shard as
started, but shard has not been created, mark shard as failed

[2013-09-15 11:03:53,399][WARN ][cluster.action.shard ] [Atom Bob]
sending failed shard for [logstash-2013.09.15][0],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [master [Atom
Bob][uT_403fVTT-oI524d7hK1Q][inet[/10.50.1.79:9300]] marked shard as
started, but shard has not been created, mark shard as failed]

[2013-09-15 11:03:53,399][WARN ][cluster.action.shard ] [Atom Bob]
received shard failed for [logstash-2013.09.15][0],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [master [Atom
Bob][uT_403fVTT-oI524d7hK1Q][inet[/10.50.1.79:9300]] marked shard as
started, but shard has not been created, mark shard as failed]

[2013-09-15 11:03:53,399][WARN ][indices.cluster ] [Atom Bob]
[logstash-2013.09.15][1] master [[Atom
Bob][uT_403fVTT-oI524d7hK1Q][inet[/10.50.1.79:9300]]] marked shard as
started, but shard has not been created, mark shard as failed

[2013-09-15 11:03:53,400][WARN ][cluster.action.shard ] [Atom Bob]
sending failed shard for [logstash-2013.09.15][1],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [master [Atom
Bob][uT_403fVTT-oI524d7hK1Q][inet[/10.50.1.79:9300]] marked shard as
started, but shard has not been created, mark shard as failed]

[2013-09-15 11:03:53,400][WARN ][cluster.action.shard ] [Atom Bob]
received shard failed for [logstash-2013.09.15][1],
node[uT_403fVTT-oI524d7hK1Q], [P], s[STARTED], reason [master [Atom
Bob][uT_403fVTT-oI524d7hK1Q][inet[/10.50.1.79:9300]] marked shard as
started, but shard has not been created, mark shard as failed]

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Xie Lebing wrote:

My question is: would giving the JVM a 32G heap be a good way to
solve this issue? Is it simply a matter of modifying
"elasticsearch.yml" and the heap size option?

You indeed want to set the heap to something, and half the machine
RAM is a good starting point. In your case, though, I would set it to
31g, because the JVM stops using compressed object pointers at heap
sizes around 32g, so you would actually lose usable space. Use the
environment variable ES_HEAP_SIZE in whatever process starts
Elasticsearch.

ES_HEAP_SIZE=31g bin/elasticsearch ...
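
One quick way to confirm the setting actually took effect after the restart (jps ships with the JDK and is not Elasticsearch-specific; -v prints the flags the JVM was started with, so -Xms31g and -Xmx31g should show up):

jps -lvm | grep -i elasticsearch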

Drew


Thank you for the reply. With the default heap option (1G), I hit the OOM error again, and the JVM info looks like this:

{"cluster_name":"elasticsearch","nodes":{"rG-uC_WtS0yA2oER-hSv3w":{"timestamp":1379823196938,"name":"Iron
Maiden","transport_address":"inet[/XXXXXXX:9300]","hostname":"XXXXX","jvm":{"timestamp":1379823196938,"uptime":"145
hours, 2 minutes, 43 seconds and 16
milliseconds","uptime_in_millis":522163016,"mem*":{"heap_used":"959.6mb","heap_used_in_bytes":1006288952,"heap_committed":"989.8mb","heap_committed_in_bytes":1037959168,"non_heap_used":"43.3mb","non_heap_used_in_bytes":45420880,"non_heap_committed":"66.2mb","non_heap_committed_in_bytes":69423104,"pools":{"Code
Cache":{"used":"9.7mb","used_in_bytes":10245568,"max":"48mb","max_in_bytes":50331648,"peak_used":"9.8mb","peak_used_in_bytes":10361792,"peak_max":"48mb","peak_max_in_bytes":50331648},
*"Par Eden
Space":{"used":"273mb","used_in_bytes":286326776,"max":"273mb","max_in_bytes":286326784,"peak_used":"273mb","peak_used_in_bytes":286326784,"peak_max":"273mb","peak_max_in_bytes":286326784},"Par
Survivor
Space":{"used":"4.3mb","used_in_bytes":4582320,"max":"34.1mb","max_in_bytes":35782656,"peak_used":"34.1mb","peak_used_in_bytes":35782656,"peak_max":"34.1mb","peak_max_in_bytes":35782656},"CMS
Old
Gen":{"used":"682.2mb","used_in_bytes":715380344,"max":"682.6mb","max_in_bytes":715849728,"peak_used":"682.6mb","peak_used_in_bytes":715849728,"peak_max":"682.6mb","peak_max_in_bytes":715849728},"CMS
Perm
Gen":{"used":"33.5mb","used_in_bytes":35175312,"max":"82mb","max_in_bytes":85983232,"peak_used":"33.5mb","peak_used_in_bytes":35181568,"peak_max":"82mb","peak_max_in_bytes":85983232}}},"threads":{"count":186,"peak_count":217},"gc":{"collection_count":158700,"collection_time":"10
hours, 31 minutes, 2 seconds and 255
milliseconds","collection_time_in_millis":37862255,"collectors":{"ParNew":{"collection_count":43964,"collection_time":"3
minutes, 11 seconds and 945
milliseconds","collection_time_in_millis":191945},"ConcurrentMarkSweep":{"collection_count":114736,"collection_time":"10
hours, 27 minutes, 50 seconds and 310
milliseconds","collection_time_in_millis":37670310}}},"buffer_pools":{"direct":{"count":245,"used":"70.7mb","used_in_bytes":74193116,"total_capacity":"70.7mb","total_capacity_in_bytes":74193116},"mapped":{"count":0,"used":"0b","used_in_bytes":0,"total_capacity":"0b","total_capacity_in_bytes":0}}}}}}

Setting the heap size to 20-30G could solve it, right? Thanks
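
If the node is started from an init script rather than by hand, ES_HEAP_SIZE has to be exported in whatever environment that script sees; a minimal sketch, assuming a plain shell wrapper (the exact path and file are assumptions, not something from this thread):

# in whatever script launches the node, before bin/elasticsearch runs
export ES_HEAP_SIZE=30g
exec /var/elasticsearch/bin/elasticsearch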


--
Sent from my iSpaceship
http://cn.linkedin.com/in/xielebing


Guys, I decided to extend the heap size to 4G and modified the Java options like this:

JAVA_OPTS="$JAVA_OPTS -server -Xms4096m -Xmx4096g -Xmn2048m-Djava.awt.headless=true -XX:PermSize=256m -XX:MaxPermSize=256m
-XX:ParallelGCThreads=8 -XX:SurvivorRatio=1

8 -Xnoclassgc -XX:MaxTenuringThreshold=10 -XX:+DisableExplicitGC
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection
-XX:CMSFullGCsBeforeCompa

ction=5 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0
-XX:+PrintGCDetails -XX:+PrintG

CTimeStamps -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch
-Des.foreground=yes -Des.path.home=/var/elasticsearch -cp
:/var/elasticsearch/lib/elasticsearch-0.90.3.jar:

/var/elasticsearch/lib/:/var/elasticsearch/lib/sigar/*
org.elasticsearch.bootstrap.Elasticsearch"*

After I restarted ES, the server crashed. See the attached top output. It seemed Java consumed all the memory. Any tips on this issue? Thank you!

Sep 22 19:45:13 srv-log3 kernel: INFO: task java:6634 blocked for more than 120 seconds.
Sep 22 19:45:13 srv-log3 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 22 19:45:13 srv-log3 kernel: java D ffffffff80157f06 0 6634 1 6635 6633 (NOTLB)
Sep 22 19:45:13 srv-log3 kernel: ffff810802407e18 0000000000000082 0000000000000001 000000000b398800
Sep 22 19:45:13 srv-log3 kernel: 00000000ffffffda 0000000000000007 ffff81082df38830 ffff81102a7947f0
Sep 22 19:45:13 srv-log3 kernel: 000002f272b170c3 0000000000009da0 ffff81082df38a18 0000000200000000
Sep 22 19:45:13 srv-log3 kernel: Call Trace:
Sep 22 19:45:13 srv-log3 kernel: [] __down_read+0x7a/0x92
Sep 22 19:45:13 srv-log3 kernel: [] do_page_fault+0x414/0x842
Sep 22 19:45:13 srv-log3 kernel: [] thread_return+0x62/0xfe
Sep 22 19:45:13 srv-log3 kernel: [] error_exit+0x0/0x84
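
When the box gets into that state, it helps to record what the JVM was actually started with and how much memory it really holds; a minimal check with standard procps tools, nothing Elasticsearch-specific:

# RSS and VSZ in kilobytes plus the full command line for every java process
ps -o pid,rss,vsz,args -C java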


Please look more carefully at your options when following Drew's suggestions:

  • ES_HEAP_SIZE=31g, which sets -Xmx31g -Xms31g (and I recommend mmapfs and
    bootstrap mlockall in this case; see the sketch after this list)

  • do not set -Xmn

  • do not set -Xmx4096g unless you have 8 terabytes of memory
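
To make that concrete, here is a minimal startup sketch along those lines. It assumes a stock 0.90.x tarball under /var/elasticsearch (the path taken from the options quoted above) and that the memlock limit may be raised, so treat it as a sketch rather than a drop-in script:

# settings mentioned above, appended to the node's config
cat >> /var/elasticsearch/config/elasticsearch.yml <<'EOF'
bootstrap.mlockall: true
index.store.type: mmapfs
EOF

# mlockall only works if the Elasticsearch process is allowed to lock memory
ulimit -l unlimited

# let the stock wrapper derive -Xms/-Xmx from ES_HEAP_SIZE; no hand-rolled -Xmn or -Xmx
ES_HEAP_SIZE=31g /var/elasticsearch/bin/elasticsearch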

Jörg
