High CPU load during search (Elasticsearch 1.2.1)

Hi, everyone. I don't have much experience with Elasticsearch, but we have hit a
performance problem in one of our environments and need to resolve it or at
least figure out what the problem is. Some background: we have multiple
environments with the same configuration, and only one of them has this issue.
We are using Elasticsearch 1.2.1. We run a daily job that fires search requests
to ES every second or two, sometimes more often. The first time we ran it, it
finished successfully in 3 hours; the next day it finished in 6 hours; the time
after that it had not finished after 9 hours. We then restarted Elasticsearch
and everything went back to normal, but the same pattern keeps repeating.
Symptoms:

  1. We have 16 CPU cores and all of them are 90% loaded; after the job
    finishes, usage drops to 4%.

  2. JVM memory consumption is no more than 50% at that moment.

  3. We have 70 search threads and only 20 of them are busy at that moment.

  4. Attached are the results of two hot_threads requests from different days
    (an example of the request is shown after this message).

  5. Here are the JVM values from node stats:
    "jvm": {
      "timestamp": 1409838172173,
      "uptime_in_millis": 93828800,
      "mem": {
        "heap_used_in_bytes": 4393576864,
        "heap_used_percent": 27,
        "heap_committed_in_bytes": 9959833600,
        "heap_max_in_bytes": 16225468416,
        "non_heap_used_in_bytes": 236540072,
        "non_heap_committed_in_bytes": 317325312,
        "pools": {
          "young": {
            "used_in_bytes": 1825734112,
            "max_in_bytes": 5950865408,
            "peak_used_in_bytes": 5853216768,
            "peak_max_in_bytes": 6062997504
          },
          "survivor": {
            "used_in_bytes": 16890200,
            "max_in_bytes": 16908288,
            "peak_used_in_bytes": 1046461992,
            "peak_max_in_bytes": 1268252672
          },
          "old": {
            "used_in_bytes": 2550952552,
            "max_in_bytes": 12169117696,
            "peak_used_in_bytes": 6651904936,
            "peak_max_in_bytes": 12169117696
          }
        }
      }
    }

I would really appreciate any advice on how to handle this issue and am ready
to provide any data.
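
For reference, here is how the hot threads dumps and the node stats above can
be collected with the standard node APIs (assuming the default HTTP port 9200):

    # hot threads snapshot across all nodes; repeat a few times while the job is slow
    curl -s 'http://localhost:9200/_nodes/hot_threads?threads=10'

    # per-node JVM, thread pool and index-level (cache) statistics
    curl -s 'http://localhost:9200/_nodes/stats/jvm,thread_pool,indices?pretty'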

In your hot threads dump you can see the culprit: it has something to do with
a plugin you use, not with Elasticsearch.

com.clarabridge.elasticsearch.facet.sampling

Ask the people who provided you with this software.

Jörg

Hi, Jörg. Thanks for the reply. As I said before, we use this plugin on
multiple instances and hit this problem only on one particular instance. I
removed the plugin and ran the job again, but that didn't help. Here are the
hot threads from this run. I would really appreciate suggestions on what we
can look at next. Thanks.

Hi Jörg,

Anton is right: we removed the plugin and double-checked that Elasticsearch
itself is taking up the bulk of the time. We do see that the number of
fielddata evictions is high:

"filter_cache": {
  "memory_size_in_bytes": 10508060,
  "evictions": 0
},
"id_cache": {
  "memory_size_in_bytes": 276840500
},
"fielddata": {
  "memory_size_in_bytes": 416181852,
  "evictions": 9842
}

The fielddata cache is capped at 40%. Do you think the number of evictions we
see is the cause of the poor performance? If so, how can we reduce it?
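
For context, the 40% cap corresponds to a setting along these lines in
elasticsearch.yml, and per-field fielddata usage can be inspected through the
node stats API (host and port below are just the defaults):

    # elasticsearch.yml -- upper bound for the fielddata cache; evictions start once it is reached
    indices.fielddata.cache.size: 40%

    # show how much fielddata each field holds on every node
    curl -s 'http://localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'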

The filter cache, which is very fast, is almost unused: only about 10 MB.

The field data cache is heavily used, at about 416 MB, but 416 MB is no
problem for ES.

The "hot threads" output shows that almost all threads are busy calculating
scores. This is typical for complex queries: computing relevance scores means
that every document in the result set has to be visited, but this can often be
optimized by rewriting the queries.

I assume you use mostly queries and very few filters. Maybe you can rewrite
the queries to use filters? That would give a huge performance boost.
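
As a minimal sketch (the field names here are made up), a clause that does not
need to contribute to scoring can be moved from the query part into a filter,
which skips scoring and can be cached:

    {
      "query": {
        "filtered": {
          "query":  { "match": { "message": "some search text" } },
          "filter": { "term":  { "status": "active" } }
        }
      }
    }

Only the match clause is scored; the term filter just narrows the candidate set.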

Jörg

Hi Jörg,

The problem we are trying to describe here is why query performance degrades
over time. If I restart the service, performance comes back to normal, but
after some days the same queries run slower. I am not sure that changing the
queries would help in that case. We even experimented with simple queries, and
everything slows down.

This is indicative of some sort of resource leak. I checked heap and CPU, and
nothing seems to stand out.
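
One thing we can still try is to snapshot the full node stats right after a
restart and again once the slowdown reappears, then diff the two; a rough
sketch, not tied to our exact setup:

    # baseline right after restart, second snapshot when queries are slow again
    curl -s 'http://localhost:9200/_nodes/stats?pretty' > stats_fast.json
    curl -s 'http://localhost:9200/_nodes/stats?pretty' > stats_slow.json
    diff stats_fast.json stats_slow.json | less

    # per-shard segment details for all indices, which also grow over time
    curl -s 'http://localhost:9200/_segments?pretty'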
