Java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef


(Mahesh Bhat) #1

Hi!

I have an ELK stack running for log analysis. I recently started using the "elapsed" filter plugin in Logstash in order to track the time elapsed between a certain sequence of message events.

At times, I need to sort on the "elapsed.time" field in order to determine which events took too long to complete. Whenever a sort is carried out on this field, the search fails with the following message:


[2015-08-20 06:35:30,105][DEBUG][action.search.type ] [hostname_client01] [logstash-2015.08.19][2]: Failed to exe
cute [org.elasticsearch.action.search.SearchRequest@5bfa5386] while moving to second phase
java.lang.ClassCastException: java.lang.Double cannot be cast to org.apache.lucene.util.BytesRef
at org.apache.lucene.search.FieldComparator$TermOrdValComparator.compareValues(FieldComparator.java:902)
at org.apache.lucene.search.TopDocs$MergeSortQueue.lessThan(TopDocs.java:172)
at org.apache.lucene.search.TopDocs$MergeSortQueue.lessThan(TopDocs.java:120)
at org.apache.lucene.util.PriorityQueue.upHeap(PriorityQueue.java:225)
at org.apache.lucene.util.PriorityQueue.add(PriorityQueue.java:133)
at org.apache.lucene.search.TopDocs.merge(TopDocs.java:234)
at org.elasticsearch.search.controller.SearchPhaseController.sortDocs(SearchPhaseController.java:239)
at .........
at .........

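For context, the sort that triggers this is roughly of the following shape (a sketch; "elapsed.time" is the field from my documents, the rest of the request body is illustrative):

```json
{
  "query": { "match_all": {} },
  "sort": [
    { "elapsed.time": { "order": "desc" } }
  ]
}
```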
Here is a sample of a message containing the "elapsed.time" field:


{
  "_index": "logstash-2015.08.19",
  "_type": "abc_eventforwarder",
  "_id": "AU9DYWvkrr_mxpPid7mz",
  "_score": 1,
  "_source": {
    "message": "INFO 2015-08-19 00:34:59,977 enrichment_processor on_delivery_confirmation 239 : Received ack for delivery tag: 4719",
    "@version": "1",
    "@timestamp": "2015-08-19T00:34:59.977Z",
    "type": "abc_eventforwarder",
    "host": "hostname",
    "path": "/var/log/eventforwarder/enrichment_processor.log",
    "tags": [
      "_grokparsefailure",
      "endlag_flag",
      "elapsed",
      "elapsed.match"
    ],
    "mtype": "INFO",
    "mesgid": "4719",
    "elapsed.time": 0.582,
    "elapsed.timestamp_start": "2015-08-19T00:34:59.482Z"
  },
  "fields": {
    "elapsed.timestamp_start": [
      1439944499482
    ],
    "@timestamp": [
      1439944499977
    ]
  }
}


I am using Elasticsearch version 1.6.1.

Any ideas on why I see this error?

Thanks !

  • mahesh.

(Adrien Grand) #2

What likely happened is that your sort ran across several indices, and the field was mapped as a string on one index and as a double on another index. We just merged a change that returns a clearer error message in that case.
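As a plain-Python illustration (not Elasticsearch code, just an analogy for the failure mode): when the cross-shard merge has to compare sort keys of two different types, the comparison fails, which is the same shape of failure as the BytesRef cast in the stack trace above.

```python
# Two "shards" each return hits sorted by elapsed.time, but the field is
# mapped as a double on one index and as a string on the other.
shard_a = [0.582, 1.2]        # index where elapsed.time is a double
shard_b = ["0.75", "3.1"]     # index where elapsed.time is a string

try:
    # The merge step must compare values drawn from both shards.
    merged = sorted(shard_a + shard_b)
except TypeError as exc:
    print("merge failed:", exc)
```

One way to check whether the mappings diverge across the daily logstash-* indices is the field-mapping API (GET /logstash-*/_mapping/field/elapsed.time); if they do, you can force a consistent type for new indices, e.g. with an index template or a mutate/convert filter in Logstash.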
