Garbage collector logs long pauses

Hi,
We are running Elasticsearch 0.90.7 on a Linux server (1-node cluster).
From time to time, Elasticsearch stops responding, and the issue looks
related to the garbage collector. The relevant log entries are shown below:

[2014-06-16 09:35:48,563][WARN ][monitor.jvm ] [node01]
[gc][ConcurrentMarkSweep][1674153][113273] duration [12.1s], collections
[1]/[12.3s], total [12.1s]/[17.9h], memory [7.3gb]->[7.2gb]/[7.9gb],
all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
[158.2mb]->[95.3mb]/[665.6mb]}{[Par Survivor Space]
[0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
[34.7mb]->[34.7mb]/[82mb]}

[2014-06-16 09:35:58,800][INFO ][monitor.jvm ] [node01]
[gc][ConcurrentMarkSweep][1674154][113274] duration [9.9s], collections
[1]/[10.2s], total [9.9s]/[17.9h], memory [7.2gb]->[7.2gb]/[7.9gb],
all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
[95.3mb]->[58.6mb]/[665.6mb]}{[Par Survivor Space]
[0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
[34.7mb]->[34.7mb]/[82mb]}

[2014-06-16 09:36:11,236][WARN ][monitor.jvm ] [node01]
[gc][ConcurrentMarkSweep][1674155][113275] duration [12s], collections
[1]/[12.4s], total [12s]/[17.9h], memory [7.2gb]->[7.3gb]/[7.9gb],
all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
[58.6mb]->[138.1mb]/[665.6mb]}{[Par Survivor Space]
[0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
[34.7mb]->[34.7mb]/[82mb]}

[2014-06-16 09:36:23,879][WARN ][monitor.jvm ] [node01]
[gc][ConcurrentMarkSweep][1674156][113276] duration [12.3s], collections
[1]/[12.6s], total [12.3s]/[17.9h], memory [7.3gb]->[7.2gb]/[7.9gb],
all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
[138.1mb]->[113mb]/[665.6mb]}{[Par Survivor Space]
[0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
[34.7mb]->[34.7mb]/[82mb]}

[2014-06-16 09:36:34,043][INFO ][monitor.jvm ] [node01]
[gc][ConcurrentMarkSweep][1674157][113277] duration [9.8s], collections
[1]/[10.1s], total [9.8s]/[17.9h], memory [7.2gb]->[7.2gb]/[7.9gb],
all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
[113mb]->[79mb]/[665.6mb]}{[Par Survivor Space] [0b]->[0b]/[83.1mb]}{[CMS
Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
[34.7mb]->[34.7mb]/[82mb]}

[2014-06-16 09:36:46,486][WARN ][monitor.jvm ] [node01]
[gc][ConcurrentMarkSweep][1674158][113278] duration [12.1s], collections
[1]/[12.4s], total [12.1s]/[17.9h], memory [7.2gb]->[7.2gb]/[7.9gb],
all_pools {[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
[79mb]->[107.2mb]/[665.6mb]}{[Par Survivor Space] [0b]->[0b]/[83.1mb]}{[CMS
Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
[34.7mb]->[34.7mb]/[82mb]}

[2014-06-16 09:36:56,649][INFO ][monitor.jvm ] [node01]
[gc][ConcurrentMarkSweep][1674159][113279] duration [9.9s], collections
[1]/[10.1s], total [9.9s]/[18h], memory [7.2gb]->[7.2gb]/[7.9gb], all_pools
{[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
[107.2mb]->[68.7mb]/[665.6mb]}{[Par Survivor Space]
[0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
[34.7mb]->[34.7mb]/[82mb]}

[2014-06-16 09:37:08,995][WARN ][monitor.jvm ] [node01]
[gc][ConcurrentMarkSweep][1674160][113280] duration [12s], collections
[1]/[12.3s], total [12s]/[18h], memory [7.2gb]->[7.2gb]/[7.9gb], all_pools
{[Code Cache] [15.5mb]->[15.5mb]/[48mb]}{[Par Eden Space]
[68.7mb]->[79.7mb]/[665.6mb]}{[Par Survivor Space]
[0b]->[0b]/[83.1mb]}{[CMS Old Gen] [7.1gb]->[7.1gb]/[7.1gb]}{[CMS Perm Gen]
[34.7mb]->[34.7mb]/[82mb]}

The garbage collector logs long pauses (around 10 seconds). Our system
has 32 GB of total memory, and we set ES_HEAP_SIZE to 8 GB.
We are almost certain this issue comes from the long GC runs.
What can we do to prevent this behavior and keep ES running smoothly?

Thanks,

Kevin


Upgrade to a newer version of ES, upgrade Java as well, and if you can,
increase your heap.
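
For reference, a quick way to confirm what is currently installed before upgrading (a minimal sketch with generic commands):

# The root endpoint reports the Elasticsearch version the node is running:
curl localhost:9200

# Show the Java version installed on the box:
java -version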

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


You likely want to find out what's taking up your heap. The biggest consumer
of heap is usually fielddata. This will tell you what is in your fielddata and
you can track it back to your code to see where you are using these fields:

curl 'localhost:9200/_nodes/stats/indices/fielddata/*?pretty'
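
If fielddata does show up there, the per-field breakdown appears under "fields" in the response. One way to keep it bounded is to cap the fielddata cache in elasticsearch.yml; a minimal sketch, assuming indices.fielddata.cache.size is supported on this 0.90.x build (the 40% value is only an example):

# Append a fielddata cache cap to the node config
# (static setting, takes effect after a restart):
echo 'indices.fielddata.cache.size: 40%' >> config/elasticsearch.yml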


Thanks Mark. Any suggestions on how to set the heap? Our machine has a
total RAM of 32 GB.


Thanks for your kind help, Binh. The result shows:

"indices" : {
"fielddata" : {
"memory_size" : "0b",
"memory_size_in_bytes" : 0,
"evictions" : 0,
"fields" : { }
}
}

Is there something wrong?
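
If fielddata really is empty, the heap is presumably being held by something else. A minimal follow-up sketch for checking overall heap usage and segment counts, assuming the 0.90-style boolean flags on the node-stats API:

# Overall JVM heap usage as the node reports it:
curl 'localhost:9200/_nodes/stats?jvm=true&pretty'

# Per-shard segment counts and sizes (many small segments also add heap overhead):
curl 'localhost:9200/_segments?pretty'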


Preferably 50% of system RAM, in your case 16 GB.
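
A minimal sketch of applying that, assuming the stock startup script, which reads the ES_HEAP_SIZE environment variable (package installs usually set it in /etc/default/elasticsearch or /etc/sysconfig/elasticsearch instead):

# Give the JVM a 16 GB heap before starting the node:
export ES_HEAP_SIZE=16g
bin/elasticsearch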

You also really want to upgrade to 1.x; there are a lot of performance
improvements. What version and release of Java are you running?

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com
