Help with GC configuration

Hello, we are experiencing some problems with the GC configuration of
Elasticsearch.

We have Elasticsearch 0.20.2 running on Java 6u32, with 9 nodes that each
have 32 GB of RAM and 16 GB assigned to Elasticsearch. In this cluster we
have one index with 20 shards and 2 replicas, and we serve around 15k
requests per minute.

After a few hours, at least one node is expelled from the cluster, and the
logs we see are:

[2013-08-26 20:33:36,030][INFO ][monitor.jvm ] [King Bedlam]
[gc][ConcurrentMarkSweep][16432][221] duration [18.9s], collections
[2]/[2.2m], total [18.9s]/[1.2m], memory [6.8gb]->[6.6gb]/[6.9gb],
all_pools {[Code Cache] [5.9mb]->[5.9mb]/[48mb]}{[Par Eden Space]
[91.7mb]->[611.4kb]/[133.1mb]}{[Par Survivor Space]
[16.6mb]->[0b]/[16.6mb]}{[CMS Old Gen] [6.7gb]->[6.6gb]/[6.8gb]}{[CMS Perm
Gen] [42.6mb]->[42.2mb]/[82mb]}

[2013-08-26 20:36:11,033][WARN ][monitor.jvm ] [King Bedlam]
[gc][ConcurrentMarkSweep][16439][223] duration [31.3s], collections
[2]/[2.4m], total [31.3s]/[1.7m], memory [6.8gb]->[6.7gb]/[6.9gb],
all_pools {[Code Cache] [5.9mb]->[5.9mb]/[48mb]}{[Par Eden Space]
[68.8mb]->[32mb]/[133.1mb]}{[Par Survivor Space]
[16.6mb]->[0b]/[16.6mb]}{[CMS Old Gen] [6.7gb]->[6.7gb]/[6.8gb]}{[CMS Perm
Gen] [42.2mb]->[42.2mb]/[82mb]}

[2013-08-26 20:38:54,467][WARN ][monitor.jvm ] [King Bedlam]
[gc][ConcurrentMarkSweep][16441][225] duration [37.8s], collections
[2]/[2.7m], total [37.8s]/[2.4m], memory [6.8gb]->[6.8gb]/[6.9gb],
all_pools {[Code Cache] [6mb]->[6mb]/[48mb]}{[Par Eden Space]
[83.5mb]->[1mb]/[133.1mb]}{[Par Survivor Space] [0b]->[0b]/[16.6mb]}{[CMS
Old Gen] [6.7gb]->[6.8gb]/[6.8gb]}{[CMS Perm Gen] [42.2mb]->[42.2mb]/[82mb]}

[2013-08-26 20:41:36,086][WARN ][monitor.jvm ] [King Bedlam]
[gc][ConcurrentMarkSweep][16442][227] duration [42.1s], collections
[2]/[2.6m], total [42.1s]/[3.1m], memory [6.8gb]->[6.8gb]/[6.9gb],
all_pools {[Code Cache] [6mb]->[6mb]/[48mb]}{[Par Eden Space]
[1mb]->[9.8mb]/[133.1mb]}{[Par Survivor Space] [0b]->[0b]/[16.6mb]}{[CMS
Old Gen] [6.8gb]->[6.8gb]/[6.8gb]}{[CMS Perm Gen] [42.2mb]->[42.2mb]/[82mb]}

[2013-08-26 20:44:23,710][WARN ][monitor.jvm ] [King Bedlam]
[gc][ConcurrentMarkSweep][16445][229] duration [40.1s], collections
[2]/[2.7m], total [40.1s]/[3.7m], memory [6.9gb]->[6.8gb]/[6.9gb],
all_pools {[Code Cache] [6mb]->[6mb]/[48mb]}{[Par Eden Space]
[115.4mb]->[56.4mb]/[133.1mb]}{[Par Survivor Space]
[0b]->[0b]/[16.6mb]}{[CMS Old Gen] [6.8gb]->[6.8gb]/[6.8gb]}{[CMS Perm Gen]
[42.2mb]->[42.2mb]/[82mb]}

I don't know how we can resolve this problem. We have tried different
configurations in elasticsearch.in.sh, but the problem persists.

It would be great if someone could help with this.

Regards


How many cores do you have? Do you see a CPU spike during GC?

Can you first test with a 12 GB heap?

Also, I suggest increasing your Eden space to 2 GB.

After that, enable GC logging and upload the new information.
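
For example, something along these lines in elasticsearch.in.sh could do it
(only a rough sketch; the variable names are assumptions based on the stock
0.20.x startup script, so check them against your own copy):

  # 12 GB heap; keep min and max equal so the heap is never resized at runtime
  ES_MIN_MEM=12g
  ES_MAX_MEM=12g

  # roughly 2 GB young generation (Eden plus the survivor spaces)
  JAVA_OPTS="$JAVA_OPTS -Xmn2g"

  # basic GC logging (adjust the log path to taste), so the long
  # collections can be analyzed afterwards
  JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:/var/log/elasticsearch/gc.log"
  JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"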

Sent from my iPhone

On Aug 26, 2013, at 6:45 PM, Ariel L ariel2129@gmail.com wrote:


In addition to ConcurrentMarkSweep, consider also tuning these options:
-XX:CMSInitiatingOccupancyFraction=30
-XX:+UseCMSInitiatingOccupancyOnly
-XX:NewRatio=4

The actual values here may vary, but this is a configuration that is
working well for some of our customers.
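
As a purely illustrative sketch, assuming the stock 0.20.x
elasticsearch.in.sh builds JAVA_OPTS the usual way, the flags could be
appended like this:

  # start CMS early, at 30% old-gen occupancy, and only at that threshold,
  # rather than letting the JVM's own heuristics delay the concurrent cycle
  JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=30"
  JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

  # make the old generation roughly 4x the size of the young generation
  JAVA_OPTS="$JAVA_OPTS -XX:NewRatio=4"

The idea is to let CMS start well before the old generation is nearly full;
in your logs the CMS Old Gen sits at 6.7-6.8 GB out of 6.8 GB, which is
exactly the situation where collections get long and back-to-back.
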
Also, I don't know exactly what the differences are between 0.19.10, 0.20.2,
and 0.20.6, but we had issues with long GCs in 0.19.10 that worked far
better in 0.20.6.

On Tue, Aug 27, 2013 at 4:12 AM, Mohit Anchlia mohitanchlia@gmail.com wrote:


--
mvh

Runar Myklebust


I am not one to upgrade my software stack regularly, but the memory
improvements in Elasticsearch 0.90 are excellent. I would spend time
upgrading Elasticsearch instead of fine-tuning the JVM or upgrading to
Java 7. Yes, the JVM settings should still be tuned, but the benefits are
not as great as those from upgrading Elasticsearch.

Cheers,

Ivan

On Tue, Aug 27, 2013 at 3:18 AM, Runar Myklebust runar.a.m@gmail.com wrote:



Take a look at this webinar in its entirety:
http://www.elasticsearch.org/webinars/whats-new-with-elasticsearch-0-90/

I don't think you will need much convincing once you thoroughly understand
all the changes that have been made in Elasticsearch to leverage the
mind-blowing performance improvements that come with the new version of
Lucene (the underlying information retrieval library that powers
Elasticsearch).

Author and Instructor for the Upcoming Book and Lecture Series
Massive Log Data Aggregation, Processing, Searching and Visualization with
Open Source Software

http://massivelogdata.com

On 27 August 2013 14:11, Ivan Brusic ivan@brusic.com wrote:

