Threads hanging on BooleanScorer.score with 100% CPU

I'm slowly losing processors as elasticsearch threads hang on
org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:278)

It happens with "search_type=dfs_query_then_fetch" and without.
I'm running v0.90.3

Here is a sample query:

{
  "query": {
    "bool": {
      "should": [
        {
          "constant_score": {
            "query": {
              "text": {
                "brandName": {
                  "query": "Paulinie"
                }
              }
            },
            "boost": 3
          }
        },
        {
          "text": {
            "name": {
              "query": "BURNOUT RUFFLES"
            }
          }
        },
        {
          "has_child": {
            "score_type": "max",
            "query": {
              "bool": {
                "should": [
                  {
                    "dis_max": {
                      "queries": [
                        {
                          "text": {
                            "color": {
                              "query": "OFFWHITE",
                              "boost": 1.5
                            }
                          }
                        },
                        {
                          "text": {
                            "vendors.color": {
                              "query": "OFFWHITE",
                              "boost": 1.5
                            }
                          }
                        }
                      ]
                    }
                  },
                  {
                    "text": {
                      "size": {
                        "query": "3T",
                        "boost": 1.5
                      }
                    }
                  },
                  {
                    "text": {
                      "sku": {
                        "query": "WD2258PA",
                        "boost": 5
                      }
                    }
                  },
                  {
                    "bool": {
                      "boost": 10,
                      "must": [
                        {
                          "term": {
                            "vendors.vendorId": {
                              "value": 5358
                            }
                          }
                        },
                        {
                          "text": {
                            "vendors.sku": {
                              "query": "WD2258PA",
                              "boost": 2
                            }
                          }
                        }
                      ]
                    }
                  }
                ]
              }
            },
            "type": "product"
          }
        },
        {
          "fuzzy_like_this": {
            "prefix_length": 2,
            "like_text": "OFFWHITE",
            "max_query_terms": 12,
            "fields": [
              "name"
            ]
          }
        }
      ]
    }
  },
  "from": 0,
  "size": 10
}

~ > curl localhost:9200/_nodes/hot_threads

102.0% (510ms out of 500ms) cpu usage by thread 'elasticsearch[emsweb-02][search][T#16]'
 9/10 snapshots sharing following 16 elements
   org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:278)
   org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:339)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:624)
   org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:162)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:488)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:444)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
   org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:134)
   org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:295)
   org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:175)
   org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction.executeQuery(TransportSearchDfsQueryThenFetchAction.java:144)
   org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction$2.run(TransportSearchDfsQueryThenFetchAction.java:131)
   java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   java.lang.Thread.run(Thread.java:662)
 unique snapshot
   org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:282)
   org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:339)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:624)
   org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:162)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:488)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:444)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
   org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:134)
   org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:295)
   org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:175)
   org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction.executeQuery(TransportSearchDfsQueryThenFetchAction.java:144)
   org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction$2.run(TransportSearchDfsQueryThenFetchAction.java:131)
   java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   java.lang.Thread.run(Thread.java:662)

100.0% (500ms out of 500ms) cpu usage by thread 'elasticsearch[emsweb-02][search][T#62]'
 10/10 snapshots sharing following 16 elements
   org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:278)
   org.apache.lucene.search.BooleanScorer.score(BooleanScorer.java:339)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:624)
   org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:162)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:488)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:444)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
   org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
   org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:134)
   org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:295)
   org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:175)
   org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction.executeQuery(TransportSearchDfsQueryThenFetchAction.java:144)
   org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction$2.run(TransportSearchDfsQueryThenFetchAction.java:131)
   java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   java.lang.Thread.run(Thread.java:662)

Any ideas?

Thanks,
Josh


Hey,

is this 100% reproducible (also with Elasticsearch 0.90.5)? If so, can you
create a bug report on GitHub that includes the documents you indexed, the
mapping you created for the index, and your queries (including
dfs_query_then_fetch and any other relevant settings), so we can reproduce
and check?

Thanks a lot!

--Alex


Hi Alex.

I've tried to create a set of documents that would reproduce this. I have
about 5 million docs, so picking out the ones that cause the problem is a bit
challenging. Worse, the full set of documents doesn't cause the problem on a
test server.

I found a specific query that triggers the 100% CPU usage every time, but
only on my production server. I haven't tried 0.90.5.

The workaround I finally came up with is changing the "must" inside the
nested "bool" (in the has_child query below) into "should". With "must" it
hangs; with "should" it works fine. One variation I tried that also hung was
adding "minimum_should_match=2" to the modified "should".

The query is dynamically generated based on fields available while
processing an external file. I'll keep trying to come up with a simple way
to reproduce it.

Here is the query that consistently hung.
{
  "query": {
    "bool": {
      "should": [
        {
          "constant_score": {
            "query": {
              "text": {
                "brandName": {
                  "query": "R.S.V.P. International"
                }
              }
            },
            "boost": 3
          }
        },
        {
          "text": {
            "name": {
              "query": [
                "Splash Measuring Cup Set"
              ],
              "boost": 3
            }
          }
        },
        {
          "has_child": {
            "score_type": "max",
            "query": {
              "bool": {
                "should": [
                  {
                    "bool": {
                      "boost": 10,
                      "must": [
                        {
                          "text": {
                            "vendors.vendorId": {
                              "query": 7072
                            }
                          }
                        },
                        {
                          "text": {
                            "vendors.sku": {
                              "query": [
                                "ACUP-R"
                              ],
                              "boost": 2
                            }
                          }
                        }
                      ]
                    }
                  }
                ]
              }
            },
            "type": "product"
          }
        }
      ]
    }
  },
  "from": 0,
  "size": 1
}


Josh,

Indeed, this can be tricky to reproduce, since this kind of issue may depend
on how your index is split into segments and on the order of documents
within individual segments. Maybe you can try running this query with
assertions enabled on the JVM to see whether that triggers anything (running
export JAVA_OPTS="-ea" before starting Elasticsearch from the same terminal
would work)? Otherwise, would it be doable to share the faulty shard?

--
Adrien Grand

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Hi Adrien and group.

I tried enabling asserts, but none are getting triggered. What I've found is
that somehow the BooleanScorer's bucketTable data structure is getting
corrupted: there's a loop in the linked list, so it never leaves the score
method. I've been able to reproduce this consistently on my test server now,
but I don't have a dataset that I can share yet.

This may be a Lucene issue.
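
Roughly, the corruption is equivalent to the simplified sketch below (the Bucket class and hasCycle helper are hypothetical, not the actual Lucene code): once the singly linked bucket list loops back on itself, the loop that drains the buckets never reaches the end of the list.

    // Simplified illustration only; Lucene's real bucket table is more involved.
    class Bucket {
        int doc;
        float score;
        Bucket next; // singly linked list of collected buckets
    }

    class BucketListDemo {
        // Analogous to draining the collected buckets: the loop only stops
        // when it runs off the end of the list. With a cycle, that never
        // happens and the thread pins a CPU at 100%.
        static void drain(Bucket head) {
            for (Bucket b = head; b != null; b = b.next) {
                // would collect(b.doc, b.score) here
            }
        }

        // Floyd's tortoise-and-hare check, handy for confirming the
        // corruption in a debugger or a unit test.
        static boolean hasCycle(Bucket head) {
            Bucket slow = head, fast = head;
            while (fast != null && fast.next != null) {
                slow = slow.next;
                fast = fast.next.next;
                if (slow == fast) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            Bucket a = new Bucket();
            Bucket b = new Bucket();
            a.next = b;
            b.next = a; // corrupted: the list loops back on itself
            System.out.println(hasCycle(a)); // true; drain(a) would never return
        }
    }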



Indeed, there is a bug somewhere. My guess is that either the boolean scorer
itself is at fault or, perhaps more likely, one of its child scorers breaks
a contract of the scorer API. One last option is a JVM bug, in which case it
would be useful to check whether you can reproduce it with Java 1.7u25,
which has no known Lucene-related bugs that I'm aware of (or at least fewer
than other JVM versions).
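
For example, one contract that matters here is that a scorer must return doc IDs in increasing order within a segment. A rough sketch of that kind of check, using simplified interfaces rather than the real Lucene Scorer/DocIdSetIterator types:

    // Illustrative only; not the actual Lucene API.
    interface SimpleDocIterator {
        int NO_MORE_DOCS = Integer.MAX_VALUE;
        int nextDoc(); // expected to return doc IDs in strictly increasing order
    }

    class ContractCheckingIterator implements SimpleDocIterator {
        private final SimpleDocIterator in;
        private int lastDoc = -1;

        ContractCheckingIterator(SimpleDocIterator in) {
            this.in = in;
        }

        @Override
        public int nextDoc() {
            int doc = in.nextDoc();
            // A child scorer that goes backwards (or repeats a doc) could leave
            // the parent's bookkeeping -- e.g. a bucket table -- in an
            // inconsistent state, matching the corruption described above.
            if (doc != NO_MORE_DOCS && doc <= lastDoc) {
                throw new IllegalStateException(
                    "doc IDs must increase: got " + doc + " after " + lastDoc);
            }
            lastDoc = doc;
            return doc;
        }
    }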

--
Adrien Grand


Ok, I've found and fixed the defect and created a ticket:
has_child can cause an infinite loop (100% CPU) when used in bool query · Issue #3955 · elastic/elasticsearch · GitHub

Josh



Wow, I'm so happy that you managed to track it down! The fix looks good, I
just assigned the issue to myself and will take some more time tomorrow to
test your fix and see if we can prevent this kind of error from happening
again in the future.

Thank you again!

--
Adrien Grand



Me too! My users were getting frustrated with the poor results that no
scoring was giving :)
