Ramifications of G1GC in ES1.3 with JDK 1.8

I had been hitting my head against heap issues until this afternoon, when I enabled G1GC.
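
(For anyone following along: on ES 1.x "enabling G1GC" just means swapping the stock CMS flags for the G1 flag in the JVM options, e.g. in bin/elasticsearch.in.sh. A rough sketch of the change — the exact defaults and file layout depend on your version and packaging:)

    # bin/elasticsearch.in.sh (sketch only -- adapt to your install)
    # The stock ES 1.x options select CMS, roughly:
    #   JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
    #   JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"
    #   JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"
    #   JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
    # Comment those out and select G1 instead:
    JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"
    # Optional soft pause-time target (assumption: tune for your workload)
    JAVA_OPTS="$JAVA_OPTS -XX:MaxGCPauseMillis=200"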

What are the known issues with this type of GC?


Java 8 / G1GC works well here. What issues do you have?

Jörg


Yeah, we are seeing much better GC performance here too!

We were experiencing stop-the-world GC pauses with CMS, and then nodes would
time out.
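
(The timeouts were presumably zen fault detection giving up on nodes stuck in long pauses; the relevant knobs in elasticsearch.yml look roughly like the lines below — example values only, not our production settings.)

    # elasticsearch.yml -- zen fault-detection settings (ES 1.x; example values, not a recommendation)
    discovery.zen.fd.ping_interval: 1s
    discovery.zen.fd.ping_timeout: 30s
    discovery.zen.fd.ping_retries: 3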

Our heap is 32 GB and we run two nodes per system; I/O doesn't seem to be an
issue here.
20 nodes = 10 boxes, 128 GB of RAM each.
The field cache is limited to 20% (see the snippet below) and we're bulk
indexing around 10k events/s.
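
(Assuming "field cache" here means the fielddata cache, that 20% cap is a static node setting in elasticsearch.yml, something like:)

    # elasticsearch.yml -- cap fielddata at 20% of heap (static setting, needs a node restart)
    indices.fielddata.cache.size: 20%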

It seems to be much more stable and predictable in terms of GC. The GC logs
are showing a huge reduction in pause times:

61043,866: Total time for which application threads were stopped: 0,0140156 seconds
61044,524: Total time for which application threads were stopped: 0,0005284 seconds
61045,801: Total time for which application threads were stopped: 0,0006138 seconds
61045,802: Total time for which application threads were stopped: 0,0003635 seconds
61045,802: Total time for which application threads were stopped: 0,0002545 seconds
61045,803: Total time for which application threads were stopped: 0,0002944 seconds
61045,804: Total time for which application threads were stopped: 0,0002367 seconds
61046,469: Total time for which application threads were stopped: 0,0004653 seconds
61048,172: Total time for which application threads were stopped: 0,0004850 seconds
61048,598: Total time for which application threads were stopped: 0,0004937 seconds
61049,197: Total time for which application threads were stopped: 0,0004396 seconds
61050,264: Total time for which application threads were stopped: 0,0004587 seconds
61051,593: Total time for which application threads were stopped: 0,0004600 seconds
61051,689: Total time for which application threads were stopped: 0,0005021 seconds
61053,822: Total time for which application threads were stopped: 0,0004721 seconds
61053,824: Total time for which application threads were stopped: 0,0005323 seconds
61053,825: Total time for which application threads were stopped: 0,0003403 seconds
61053,825: Total time for which application threads were stopped: 0,0003301 seconds
61053,826: Total time for which application threads were stopped: 0,0003322 seconds
61053,826: Total time for which application threads were stopped: 0,0003364 seconds
61059,265: Total time for which application threads were stopped: 0,0004321 seconds
61061,691: Total time for which application threads were stopped: 0,0004619 seconds
61062,595: Total time for which application threads were stopped: 0,0004529 seconds
61064,199: Total time for which application threads were stopped: 0,0004587 seconds
61070,267: Total time for which application threads were stopped: 0,0004606 seconds
61074,200: Total time for which application threads were stopped: 0,0004508 seconds
61076,693: Total time for which application threads were stopped: 0,0004709 seconds
61077,597: Total time for which application threads were stopped: 0,0004698 seconds
61079,268: Total time for which application threads were stopped: 0,0004601 seconds
61079,817: Total time for which application threads were stopped: 0,0004535 seconds
61081,818: Total time for which application threads were stopped: 0,0004979 seconds
61082,819: Total time for which application threads were stopped: 0,0004817 seconds
61089,204: Total time for which application threads were stopped: 0,0011584 seconds
61091,699: Total time for which application threads were stopped: 0,0004501 seconds
61092,599: Total time for which application threads were stopped: 0,0004539 seconds
61094,204: Total time for which application threads were stopped: 0,0006452 seconds
61095,271: Total time for which application threads were stopped: 0,0006568 seconds
61101,701: Total time for which application threads were stopped: 0,0004679 seconds
61102,601: Total time for which application threads were stopped: 0,0004576 seconds
61104,272: Total time for which application threads were stopped: 0,0004474 seconds
61114,207: Total time for which application threads were stopped: 0,0005483 seconds
61115,273: Total time for which application threads were stopped: 0,0004848 seconds
61117,604: Total time for which application threads were stopped: 0,0008780 seconds
61117,703: Total time for which application threads were stopped: 0,0005068 seconds
61124,274: Total time for which application threads were stopped: 0,0004519 seconds
61127,605: Total time for which application threads were stopped: 0,0004786 seconds
61129,249: [GC pause (G1 Evacuation Pause) (young)
Desired survivor size 1291845632 bytes, new threshold 15 (max 15)
- age  1:  19633616 bytes,  19633616 total
- age  2:   3428232 bytes,  23061848 total
- age  3:   1362152 bytes,  24424000 total
- age  4:   1443728 bytes,  25867728 total
- age  5:    996840 bytes,  26864568 total
- age  6:   1584400 bytes,  28448968 total
- age  7:   1697168 bytes,  30146136 total
- age  8:    578056 bytes,  30724192 total
- age  9:   1166056 bytes,  31890248 total
- age 10:      8904 bytes,  31899152 total
- age 11:     31640 bytes,  31930792 total
- age 12:    979168 bytes,  32909960 total
- age 13:    108016 bytes,  33017976 total
- age 14:     15376 bytes,  33033352 total
- age 15:    755696 bytes,  33789048 total
, 0,0134055 secs]
   [Parallel Time: 8,0 ms, GC Workers: 10]
      [GC Worker Start (ms): Min: 61129248,9, Avg: 61129249,0, Max: 61129249,0, Diff: 0,1]
      [Ext Root Scanning (ms): Min: 0,4, Avg: 0,7, Max: 2,0, Diff: 1,6, Sum: 7,4]
      [Update RS (ms): Min: 0,0, Avg: 0,9, Max: 1,3, Diff: 1,3, Sum: 8,7]
         [Processed Buffers: Min: 0, Avg: 14,6, Max: 38, Diff: 38, Sum: 146]
      [Scan RS (ms): Min: 0,1, Avg: 0,4, Max: 0,6, Diff: 0,5, Sum: 3,9]
      [Code Root Scanning (ms): Min: 0,0, Avg: 0,0, Max: 0,0, Diff: 0,0, Sum: 0,2]
      [Object Copy (ms): Min: 5,7, Avg: 5,7, Max: 5,8, Diff: 0,1, Sum: 57,5]
      [Termination (ms): Min: 0,0, Avg: 0,1, Max: 0,1, Diff: 0,1, Sum: 0,7]
      [GC Worker Other (ms): Min: 0,0, Avg: 0,0, Max: 0,0, Diff: 0,0, Sum: 0,2]
      [GC Worker Total (ms): Min: 7,8, Avg: 7,9, Max: 7,9, Diff: 0,1, Sum: 78,6]
      [GC Worker End (ms): Min: 61129256,8, Avg: 61129256,8, Max: 61129256,9, Diff: 0,0]
   [Code Root Fixup: 0,1 ms]
   [Code Root Migration: 0,2 ms]
   [Clear CT: 2,0 ms]
   [Other: 3,1 ms]
      [Choose CSet: 0,0 ms]
      [Ref Proc: 1,6 ms]
      [Ref Enq: 0,1 ms]
      [Free CSet: 1,3 ms]
   [Eden: 19,1G(19,1G)->0,0B(19,1G) Survivors: 64,0M->80,0M Heap: 25,3G(32,0G)->6337,0M(32,0G)]
 [Times: user=0,10 sys=0,01, real=0,01 secs]
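
(For anyone who wants to capture the same kind of output: lines like the above come from the standard HotSpot GC-logging flags, roughly the set below. The log path is just an example.)

    # JVM options for GC logging (standard HotSpot 8 flags)
    JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:/var/log/elasticsearch/gc.log"   # example path
    JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
    JAVA_OPTS="$JAVA_OPTS -XX:+PrintTenuringDistribution"            # the "age N" survivor lines
    JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCApplicationStoppedTime"        # the "Total time ... stopped" lines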

Unfortunately I don't have the GC logs from when we were using CMS, only
entries in the ES logs:

[gc][old][420117][9522] duration [46.2s], collections [2]/[47s], total
[46.2s]/[16.8m], memory [31.6gb]->[30.3gb]/[31.9gb], all_pools {[young]
[490.1mb]->[56.6mb]/[665.6mb]}{[survivor] [83.1mb]->[0b]/[83.1mb]}{[old]
[31gb]->[30.3gb]/[31.1gb]}

and CMS never reclaimed as much space as G1 does now (that old-gen pause ran for 46.2s and freed only about 1.3 GB of an almost-full 31.9 GB heap).


Nice to see another success of G1 GC :-)

Jörg


We have also been using it in dev for months across various ES and Java 8
releases. I have been considering rolling it out to a smaller prod cluster as
well, since we've had no problems at all.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com
