High CPU load but low memory usage

Hi Pros,

I've been using ES for several months; it works perfectly and is lightning fast.

There are 3 nodes in the cluster, each with a 12-core CPU and 24GB-32GB of RAM.

https://lh3.googleusercontent.com/-_uMqos5lNrA/VEoN2IA030I/AAAAAAAAE7Q/P-dzKwCA5Uo/s1600/es1.png

For the last few days, CPU usage has been getting too high on all three nodes.

https://lh4.googleusercontent.com/-vZ2nh9KaL5I/VEoOdnnLnpI/AAAAAAAAE7Y/hhKD9quUW04/s1600/es3.png

Here are some sample records:

https://lh4.googleusercontent.com/-CB5yFYk54i4/VEoOzF4RnDI/AAAAAAAAE7g/-Ko82c2hiC8/s1600/es4.png

My questions are:

  • Why does the CPU get so high while memory consumption stays low, even
    though I have set a big heap size?

  • There are 15 shards per index; is this too many or enough? I've used the
    default config. I know this could affect the load, but I don't know how to
    figure out the right number.

  • Is there any way to show the running queries, something like MySQL's SHOW
    PROCESSLIST, to see which queries are eating a lot of CPU? I have enabled
    the slow query log (>1s) but found nothing.

  • Any suggestion is appreciated.

If you need more info, please tell me.

Thank you so much.


Dear Kimchy,

Please suggest what the problem could be.

Thank you so much!

Atrus.


Please post the result of the "hot threads" action on a gist/paste page so we
can understand your problem better (see cluster-nodes-hot-threads.html in the
reference documentation).
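
For example, assuming a node listens on the default HTTP port 9200, something
like this should collect the hottest threads of every node into a file you can
paste:

  # fetch the busiest threads on each node (threads=3 is the default)
  curl -s 'http://localhost:9200/_nodes/hot_threads?threads=3' > hot_threads.txt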

Jörg

On Fri, Oct 24, 2014 at 10:43 AM, Atrus anhhuyla@gmail.com wrote:


On Friday, October 24, 2014 at 10:43:21 UTC+2, Atrus wrote:

  • There are 15 shards per index; is this too many or enough? I've used the
    default config. I know this could affect the load, but I don't know how to
    figure out the right number.

It's a huge value. Shards can be split across nodes; do you plan to use
15 nodes?
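
As a quick illustration (the host and port are assumptions; adjust them for
your setup), the cat API shows how the shards of each index are spread over
the nodes:

  # one line per shard: index, shard number, primary/replica, state, node
  curl -s 'http://localhost:9200/_cat/shards?v'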

  • Is there any way to show the running queries, something like MySQL's SHOW
    PROCESSLIST, to see which queries are eating a lot of CPU? I have enabled
    the slow query log (>1s) but found nothing.

You can watch the HTTP traffic with pcap (I hacked Packetbeat for that); that
is from the outside. From the inside, use hot threads. strace can help, too.
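
As a rough sketch only (interface, host, and port are assumptions), capturing
the incoming queries from the outside and asking the node itself from the
inside could look like:

  # outside: print the HTTP requests arriving on the REST port
  tcpdump -i any -A -s 0 'tcp port 9200'

  # inside: the busiest threads right now
  curl -s 'http://localhost:9200/_nodes/hot_threads'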

  • Any suggestion is appreciated.

Do you poll the _nodes/stats URL with a monitoring tool, or from a web page like kopf?


Thanks, Mathieu, for your response. I will try your suggestions.

"It's a huge value. Shards can be split between nodes, do you target tu use
15 nodes?"

Hi Mat,

  • For example, if I have just one node with shards = 5 and replicas = 0, I
    can easily back up the data with "cp -rfp /var/lib/elasticsearch/nodename
    /somewhere/backup".

  • Now I add one more node, so the cluster has two nodes, shards = 5,
    replicas = 0. The shards are redistributed; maybe the 1st node holds 0, 2, 4
    and the 2nd node holds 1, 3. How can I back up now? No single node holds the
    whole data set, so I cannot simply cp it.

  • If I update replicas = 1, each node now has all 5 shards, and I can easily
    cp a backup from any node.

If you know a better way to back up that can handle distributed shards,
please let me know.

Thank you.

PS: Can I reduce the shard count from 5 to 4 without losing data?

On Sunday, October 26, 2014 12:57:40 AM UTC+7, Mathieu Lecarme wrote:


For backup/restore, do not use cp. There is snapshot/restore for that (see
modules-snapshots.html in the reference documentation); it works on primary
shards only.

You cannot reduce the number of shards of an existing index. Use export/import
tools and create a new index.
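
As a rough sketch (the repository name, filesystem path, and snapshot name are
placeholders, and the location must be reachable from every node), registering
a filesystem repository, taking a snapshot, and restoring it later would look
roughly like this:

  # register a shared filesystem repository
  curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": { "location": "/mnt/es_backups/my_backup" }
  }'

  # snapshot all indices and wait until it finishes
  curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

  # restore the snapshot later (the target indices must be closed or absent)
  curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'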

Jörg

On Sun, Oct 26, 2014 at 3:44 AM, Atrus anhhuyla@gmail.com wrote:


Thanks, Jörg.

"Use export/import tools and create new index." Such as ?

Could you recommend me ?

Thanks so much.

BRs.

On Sunday, October 26, 2014 5:08:33 PM UTC+7, Jörg Prante wrote:


I have written a plugin for that; maybe it fits your requirements.

Jörg

On Sun, Oct 26, 2014 at 11:21 AM, Atrus anhhuyla@gmail.com wrote:


Hi Jörg,

This is the hot threads info from when the CPU was high:

::: [Search-195][W2LL0dnBSGu_5k7fAHt0uA][inet[/195:9300]]{master=true}

28.1% (140.4ms out of 500ms) cpu usage by thread
'elasticsearch[Search-195][search][T#22]'

10/10 snapshots sharing following 10 elements

  sun.misc.Unsafe.park(Native Method)

  java.util.concurrent.locks.LockSupport.park(Unknown Source)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:706)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.take(LinkedTransferQueue.java:1109)

  org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)

  java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

24.7% (123.4ms out of 500ms) cpu usage by thread
'elasticsearch[Search-195][search][T#9]'

10/10 snapshots sharing following 10 elements

  sun.misc.Unsafe.park(Native Method)

  java.util.concurrent.locks.LockSupport.park(Unknown Source)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:706)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.take(LinkedTransferQueue.java:1109)

  org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)

  java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

24.0% (119.8ms out of 500ms) cpu usage by thread
'elasticsearch[Search-195][search][T#19]'

7/10 snapshots sharing following 10 elements

  sun.misc.Unsafe.park(Native Method)

  java.util.concurrent.locks.LockSupport.park(Unknown Source)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:706)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.take(LinkedTransferQueue.java:1109)

  org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)

  java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

3/10 snapshots sharing following 23 elements

  org.apache.lucene.search.FilteredDocIdSetIterator.nextDoc(FilteredDocIdSetIterator.java:60)

  org.elasticsearch.index.search.child.ConstantScorer.nextDoc(ConstantScorer.java:48)

  org.elasticsearch.common.lucene.docset.DocIdSets.toCacheable(DocIdSets.java:94)

  org.elasticsearch.index.search.child.CustomQueryWrappingFilter.getDocIdSet(CustomQueryWrappingFilter.java:73)

  org.elasticsearch.common.lucene.search.AndFilter.getDocIdSet(AndFilter.java:54)

  org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(ApplyAcceptedDocsFilter.java:45)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:128)

  org.apache.lucene.search.FilteredQuery$RandomAccessFilterStrategy.filteredScorer(FilteredQuery.java:533)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:133)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)

  org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:173)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:581)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:533)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:510)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:345)

  org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:115)

  org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:249)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)

  org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

::: [Search-240][r3ykBu_4QOGwmJRlbbhnsg][inet[/.240:9300]]{master=true}

33.1% (165.2ms out of 500ms) cpu usage by thread
'elasticsearch[Search-240][search][T#24]'

8/10 snapshots sharing following 10 elements

  sun.misc.Unsafe.park(Native Method)

  java.util.concurrent.locks.LockSupport.park(Unknown Source)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:706)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.take(LinkedTransferQueue.java:1109)

  org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)

  java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

2/10 snapshots sharing following 23 elements

  org.apache.lucene.search.FilteredDocIdSetIterator.nextDoc(FilteredDocIdSetIterator.java:60)

  org.elasticsearch.index.search.child.ConstantScorer.nextDoc(ConstantScorer.java:48)

  org.elasticsearch.common.lucene.docset.DocIdSets.toCacheable(DocIdSets.java:94)

  org.elasticsearch.index.search.child.CustomQueryWrappingFilter.getDocIdSet(CustomQueryWrappingFilter.java:73)

  org.elasticsearch.common.lucene.search.AndFilter.getDocIdSet(AndFilter.java:54)

  org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(ApplyAcceptedDocsFilter.java:45)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:128)

  org.apache.lucene.search.FilteredQuery$RandomAccessFilterStrategy.filteredScorer(FilteredQuery.java:533)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:133)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)

  org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:173)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:581)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:533)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:510)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:345)

  org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:115)

  org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:249)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)

  org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

31.2% (156.1ms out of 500ms) cpu usage by thread
'elasticsearch[Search-240][search][T#4]'

10/10 snapshots sharing following 3 elements

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

31.1% (155.6ms out of 500ms) cpu usage by thread
'elasticsearch[Search-240][search][T#6]'

7/10 snapshots sharing following 10 elements

  sun.misc.Unsafe.park(Native Method)

  java.util.concurrent.locks.LockSupport.park(Unknown Source)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:706)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.take(LinkedTransferQueue.java:1109)

  org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)

  java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

3/10 snapshots sharing following 25 elements

  org.apache.lucene.search.FilteredDocIdSetIterator.nextDoc(FilteredDocIdSetIterator.java:60)

  org.elasticsearch.index.search.child.ConstantScorer.nextDoc(ConstantScorer.java:48)

  org.elasticsearch.common.lucene.docset.DocIdSets.toCacheable(DocIdSets.java:94)

  org.elasticsearch.index.search.child.CustomQueryWrappingFilter.getDocIdSet(CustomQueryWrappingFilter.java:73)

  org.elasticsearch.common.lucene.search.AndFilter.getDocIdSet(AndFilter.java:50)

  org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(ApplyAcceptedDocsFilter.java:45)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:128)

  org.apache.lucene.search.FilteredQuery$RandomAccessFilterStrategy.filteredScorer(FilteredQuery.java:533)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:133)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)

  org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:173)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)

  org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:122)

  org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:249)

  org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)

  org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)

  org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)

  org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)

  org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

::: [Search-198][lWVoNoSKQdWZ_TrrpjZX5Q][inet[/198:9300]]{master=true}

27.6% (138.2ms out of 500ms) cpu usage by thread
'elasticsearch[Search-198][search][T#14]'

2/10 snapshots sharing following 14 elements

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)

  org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:173)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)

  org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:122)

  org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:249)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)

  org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

8/10 snapshots sharing following 10 elements

  sun.misc.Unsafe.park(Native Method)

  java.util.concurrent.locks.LockSupport.park(Unknown Source)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:706)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.take(LinkedTransferQueue.java:1109)

  org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)

  java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

25.6% (127.8ms out of 500ms) cpu usage by thread
'elasticsearch[Search-198][search][T#21]'

6/10 snapshots sharing following 10 elements

  sun.misc.Unsafe.park(Native Method)

  java.util.concurrent.locks.LockSupport.park(Unknown Source)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:706)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.take(LinkedTransferQueue.java:1109)

  org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)

  java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

2/10 snapshots sharing following 24 elements

  org.elasticsearch.index.search.child.ParentConstantScoreQuery$ChildrenWeight$ChildrenDocIdIterator.match(ParentConstantScoreQuery.java:176)

  org.apache.lucene.search.FilteredDocIdSetIterator.nextDoc(FilteredDocIdSetIterator.java:60)

  org.elasticsearch.index.search.child.ConstantScorer.nextDoc(ConstantScorer.java:48)

  org.elasticsearch.common.lucene.docset.DocIdSets.toCacheable(DocIdSets.java:94)

  org.elasticsearch.index.search.child.CustomQueryWrappingFilter.getDocIdSet(CustomQueryWrappingFilter.java:73)

  org.elasticsearch.common.lucene.search.AndFilter.getDocIdSet(AndFilter.java:50)

  org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(ApplyAcceptedDocsFilter.java:45)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:128)

  org.apache.lucene.search.FilteredQuery$RandomAccessFilterStrategy.filteredScorer(FilteredQuery.java:533)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:133)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)

  org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:173)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)

  org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:122)

  org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:249)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)

  org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

2/10 snapshots sharing following 22 elements

  org.apache.lucene.search.MultiTermQueryWrapperFilter.getDocIdSet(MultiTermQueryWrapperFilter.java:111)

  org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.scorer(ConstantScoreQuery.java:142)

  org.apache.lucene.search.BooleanQuery$BooleanWeight.scorer(BooleanQuery.java:311)

  org.apache.lucene.search.FilteredQuery$QueryFirstFilterStrategy.filteredScorer(FilteredQuery.java:612)

  org.elasticsearch.common.lucene.search.XFilteredQuery$CustomRandomAccessFilterStrategy.filteredScorer(XFilteredQuery.java:229)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:133)

  org.apache.lucene.search.FilteredQuery$RandomAccessFilterStrategy.filteredScorer(FilteredQuery.java:533)

  org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:133)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)

  org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:173)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)

  org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)

  org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:122)

  org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:249)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)

  org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)

  org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

24.4% (122ms out of 500ms) cpu usage by thread
'elasticsearch[Search-198][search][T#10]'

10/10 snapshots sharing following 10 elements

  sun.misc.Unsafe.park(Native Method)

  java.util.concurrent.locks.LockSupport.park(Unknown Source)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.awaitMatch(LinkedTransferQueue.java:706)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.xfer(LinkedTransferQueue.java:615)

  org.elasticsearch.common.util.concurrent.jsr166y.LinkedTransferQueue.take(LinkedTransferQueue.java:1109)

  org.elasticsearch.common.util.concurrent.SizeBlockingQueue.take(SizeBlockingQueue.java:162)

  java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)

  java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)

  java.lang.Thread.run(Unknown Source)

I don't know what the problem is.

Please give me some suggestions.

Thanks & BRs.

On Saturday, October 25, 2014 7:43:50 PM UTC+7, Jörg Prante wrote:


Did your search queries change recently?

You have some options:

  • optimize your indices to reduce the number of segments, which makes
    searches faster

  • optimize your queries: use filters/constant score instead of scored queries

  • use caching for filtered queries if you have queries that repeat (see the
    sketch after this list)
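
As a rough illustration only (the index name, field, and value are placeholders,
and the exact request syntax depends on your ES version), forcing a merge down
to fewer segments and marking a repeated filter as cached might look like:

  # merge every shard of "myindex" down to one segment (expensive, run off-peak)
  curl -XPOST 'http://localhost:9200/myindex/_optimize?max_num_segments=1'

  # wrap the repeated condition in a cached filter instead of a scored query
  curl -XPOST 'http://localhost:9200/myindex/_search' -d '{
    "query": {
      "filtered": {
        "query": { "match_all": {} },
        "filter": { "term": { "status": "active", "_cache": true } }
      }
    }
  }'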

Jörg

On Tue, Oct 28, 2014 at 11:02 AM, Anh Huy Do anhhuyla@gmail.com wrote:
