How to fix index_out_of_bounds_exception in a Painless script used in an aggregation

I have what I believe to be an entirely normal index in which about 1% of the documents are missing a particular field. I'd like to write a Painless script that references that field, but whenever I do, I get "Courier fetch: X of Y shards failed."

The response reads:
"reason": { "type": "script_exception", "reason": "runtime error", "script_stack": [ "java.nio.Buffer.checkIndex(Buffer.java:540)", "java.nio.DirectByteBuffer.get(DirectByteBuffer.java:253)", "org.apache.lucene.store.ByteBufferGuard.getByte(ByteBufferGuard.java:118)", "org.apache.lucene.store.ByteBufferIndexInput$SingleBufferImpl.readByte(ByteBufferIndexInput.java:385)", "org.apache.lucene.util.packed.DirectReader$DirectPackedReader1.get(DirectReader.java:86)", "org.apache.lucene.codecs.lucene70.Lucene70DocValuesProducer$19.ordValue(Lucene70DocValuesProducer.java:865)", "org.apache.lucene.index.SingletonSortedSetDocValues.advanceExact(SingletonSortedSetDocValues.java:83)", "org.elasticsearch.index.fielddata.FieldData$10.advanceExact(FieldData.java:377)", "org.elasticsearch.index.fielddata.ScriptDocValues$BinaryScriptDocValues.setNextDocId(ScriptDocValues.java:588)", "org.elasticsearch.index.fielddata.ScriptDocValues$Strings.setNextDocId(ScriptDocValues.java:623)", "org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:94)", "org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:39)", "if (doc[fieldname] != null) {", " ^---- HERE" ], "script": "String safeGet(def doc, String fieldname) { if (doc[fieldname] != null) {return doc[fieldname].value;} else {return ' ';} } return safeGet(doc,'portal') + ' ' + safeGet(doc,'event')", "lang": "painless", "caused_by": { "type": "index_out_of_bounds_exception", "reason": null }

I've also tried doc.containsKey() and .size() > 0 checks, neither of which worked for me.
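To show exactly what I mean, here is a sketch of the guard variants I tried, in the same safeGet shape as the script above ('portal' and 'event' are the optional fields):

String safeGet(def doc, String fieldname) {
  // containsKey() should skip fields that are missing from the mapping,
  // and size() > 0 should skip documents with no value for the field
  if (doc.containsKey(fieldname) && doc[fieldname].size() > 0) {
    return doc[fieldname].value;
  }
  return ' ';
}
return safeGet(doc, 'portal') + ' ' + safeGet(doc, 'event');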

I am using the JSON input field in a visualization in Elastic Cloud-hosted Kibana 6.3.1.
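For context, the JSON input I'm supplying in the visualization looks roughly like this (reconstructed from the query Kibana sends, shown below):

{
  "script": "doc['ticket_group_name'].value + ' ' + doc['ticket_type_name'].value;"
}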

I am aware that this question has been asked here many times before. I don't believe any of those questions were answered to the questioner's satisfaction, and I have tried all of the proposed "solutions", all of which return the same error.

On closer examination, this same error also occurs for fields that always exist.
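For example, even a one-line script against a field that, as far as I can tell, every document has fails with the same stack trace; here ticket_group_name stands in for any always-present field:

return doc['ticket_group_name'].value;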

The query Kibana is sending to Elasticsearch is:
{
  "size": 0,
  "_source": {
    "excludes": []
  },
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "relevant_at",
        "interval": "1d",
        "time_zone": "Pacific/Auckland",
        "min_doc_count": 1
      },
      "aggs": {
        "3": {
          "terms": {
            "field": "ticket_group_name",
            "size": 5,
            "order": {
              "_count": "desc"
            },
            "script": "doc['ticket_group_name'].value + ' ' + doc['ticket_type_name'].value;"
          }
        }
      }
    }
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {},
  "docvalue_fields": [
    "@timestamp",
    "identity_extra.from_re",
    "purchased_on",
    "relevant_at",
    "start_datetime"
  ],
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "relevant_at": {
              "gte": 1533856413951,
              "lte": 1534461213951,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "filter": [
        {
          "match_all": {}
        }
      ],
      "should": [],
      "must_not": []
    }
  }
}
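To take Kibana out of the picture, a stripped-down request run directly against Elasticsearch should reproduce the error. This is a sketch; "my-index" and the aggregation name "by_group" are placeholders, not values from my setup:

POST my-index/_search
{
  "size": 0,
  "aggs": {
    "by_group": {
      "terms": {
        "size": 5,
        "script": {
          "lang": "painless",
          "source": "doc['ticket_group_name'].value + ' ' + doc['ticket_type_name'].value"
        }
      }
    }
  }
}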
