Elasticsearch heavy garbage collection

Hello,

we use an Elasticsearch cluster with two nodes together with Logstash and
Kibana. When we receive a large volume of logs (>200 per sec.),
Elasticsearch becomes unusably slow.

We see a lot of garbage collection going on:
[2015-02-25 18:27:02,494][WARN ][monitor.jvm] [server1] [gc][old][564][30]
duration [12.4s], collections [1]/[13s], total [12.4s]/[3.5m], memory
[9.7gb]->[8.8gb]/[9.8gb], all_pools {[young]
[1.1gb]->[323.4mb]/[1.1gb]}{[survivor] [65.5mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:27:14,098][INFO ][monitor.jvm] [server1] [gc][old][566][31]
duration [9.2s], collections [1]/[9.8s], total [9.2s]/[3.7m], memory
[9.7gb]->[8.7gb]/[9.8gb], all_pools {[young]
[1.1gb]->[237.4mb]/[1.1gb]}{[survivor] [65.7mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:27:27,717][WARN ][monitor.jvm] [server1] [gc][old][568][32]
duration [12s], collections [1]/[12.6s], total [12s]/[3.9m], memory
[9.7gb]->[8.8gb]/[9.8gb], all_pools {[young]
[1.1gb]->[275.2mb]/[1.1gb]}{[survivor] [73.1mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:27:39,518][INFO ][monitor.jvm] [server1] [gc][old][570][33]
duration [9.5s], collections [1]/[9.9s], total [9.5s]/[4.1m], memory
[9.8gb]->[8.8gb]/[9.8gb], all_pools {[young]
[1.1gb]->[341mb]/[1.1gb]}{[survivor] [99.7mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:27:49,549][INFO ][monitor.jvm] [server1] [gc][old][571][34]
duration [9s], collections [1]/[10s], total [9s]/[4.2m], memory
[8.8gb]->[8.8gb]/[9.8gb], all_pools {[young]
[341mb]->[364.5mb]/[1.1gb]}{[survivor] [0b]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:27:59,113][INFO ][monitor.jvm] [server1] [gc][old][572][35]
duration [8.5s], collections [1]/[9.5s], total [8.5s]/[4.4m], memory
[8.8gb]->[8.9gb]/[9.8gb], all_pools {[young]
[364.5mb]->[379.8mb]/[1.1gb]}{[survivor] [0b]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:28:08,713][INFO ][monitor.jvm] [server1] [gc][old][574][36]
duration [8.5s], collections [1]/[8.5s], total [8.5s]/[4.5m], memory
[9.8gb]->[8.9gb]/[9.8gb], all_pools {[young]
[1.1gb]->[420.4mb]/[1.1gb]}{[survivor] [146.1mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:28:18,486][INFO ][monitor.jvm] [server1] [gc][old][576][37]
duration [8.3s], collections [1]/[8.7s], total [8.3s]/[4.6m], memory
[9.7gb]->[8.8gb]/[9.8gb], all_pools {[young]
[1.1gb]->[365.2mb]/[1.1gb]}{[survivor] [66.6mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:28:28,169][INFO ][monitor.jvm] [server1] [gc][old][578][38]
duration [8.3s], collections [1]/[8.6s], total [8.3s]/[4.8m], memory
[9.7gb]->[8.9gb]/[9.8gb], all_pools {[young]
[1.1gb]->[389mb]/[1.1gb]}{[survivor] [88.7mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:28:38,022][INFO ][monitor.jvm] [server1] [gc][old][580][39]
duration [8.6s], collections [1]/[8.8s], total [8.6s]/[4.9m], memory
[9.7gb]->[8.9gb]/[9.8gb], all_pools {[young]
[1.1gb]->[387.2mb]/[1.1gb]}{[survivor] [79mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
[2015-02-25 18:28:48,061][INFO ][monitor.jvm] [server1] [gc][old][582][40]
duration [8.2s], collections [1]/[9s], total [8.2s]/[5.1m], memory
[9.7gb]->[8.8gb]/[9.8gb], all_pools {[young]
[1.1gb]->[325.4mb]/[1.1gb]}{[survivor] [14.6mb]->[0b]/[149.7mb]}{[old]
[8.5gb]->[8.5gb]/[8.5gb]}
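
Looking at those lines, the old pool never drops below [8.5gb]->[8.5gb]/[8.5gb], so each old collection frees almost nothing and the heap is effectively full. If it helps, the same picture can be confirmed from the shell; this is only a sketch, assuming the node answers HTTP on the default port 9200 and the JDK tools are installed, with <elasticsearch-pid> as a placeholder for the node's process id:

  # heap pools and GC counters as reported by Elasticsearch itself
  curl -s 'localhost:9200/_nodes/stats/jvm?pretty'

  # live GC behaviour straight from the JVM, sampled once per second
  # O = old-gen utilisation in %, FGC = number of full collections so far
  jstat -gcutil <elasticsearch-pid> 1s

If O stays near 100% while FGC keeps climbing, the node cannot free enough heap between collections.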

In htop we see that it is mainly one core that is busy all the time:

http://i.imgur.com/7LDS8h4.png
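
To check whether that busy core really belongs to the collector, one could match it to a JVM thread; a sketch, assuming a standard JDK on the node, with <elasticsearch-pid> and <tid> as placeholders for the process id and the busiest thread id shown by top:

  # per-thread CPU usage for the Elasticsearch process
  top -H -p <elasticsearch-pid>

  # convert the busiest thread id to hex and find it in a thread dump
  printf '%x\n' <tid>
  jstack <elasticsearch-pid> | grep -A 3 'nid=0x<hex-tid>'

If that thread turns out to be a VM/GC thread, it matches the monitor.jvm warnings above.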

Any ideas?

Cheers,
Chris


When a GC pause lasts more than a second, client applications will notice the cluster slowing down. Perhaps find out what the hot threads are, and take a heap dump on the Elasticsearch node, then see what is causing it and start investigating from there? For example:
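
Both are possible with stock tools; a sketch, assuming the node's HTTP port is the default 9200 and <elasticsearch-pid> stands for the node's process id:

  # the busiest threads as Elasticsearch itself sees them
  curl -s 'localhost:9200/_nodes/hot_threads'

  # heap dump of live objects for offline analysis, e.g. in Eclipse MAT
  jmap -dump:live,format=b,file=/tmp/es-heap.hprof <elasticsearch-pid>

Note that dumping live objects forces a full GC, so expect a pause while jmap runs.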

hth

jason

