Hi all,
I have a short question, which might be a very basic one.
I have a document that is 34321 bytes long, so I get a response like this:
response=>{"index"=>{"_index"=>"kafka-test-car-2018.01.15", "_type"=>"logs", "_id"=>"AWD6HJIWT8X2jMCM5ROz", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Document contains at least one immense term in field=\"message\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[123, 34, 104, 101, 97, 100, 101, 114, 115, 34, 58, 123, 34, 97, 101, 95, 116, 114, 97, 99, 107, 105, 110, 103, 34, 58, 34, 73, 68, 58]...', original message: bytes can be at most 32766 in length; got 34321", "caused_by"=>{"type"=>"max_bytes_length_exceeded_exception", "reason"=>"max_bytes_length_exceeded_exception: bytes can be at most 32766 in length; got 34321"}}}}
The content of this very large field (named "message") does not need to be searchable, but I want to index it anyway. According to Kibana it is of type string.
My template looks like this:
{
  "template" : "*test-*",
  "settings" : {
    "index.refresh_interval" : "15s",
    "number_of_shards" : 10,
    "number_of_replicas" : 0
  },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : true },
      "dynamic_templates" : [ {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "not_analyzed", "omit_norms" : true
          }
        }
      } ],
      "properties" : {
        "@version" : { "type" : "string", "index" : "not_analyzed" },
        "geoip" : {
          "type" : "object",
          "dynamic" : true,
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        },
        "message" : {
          "type" : "string",
          "index" : "not_analyzed",
          "ignore_above" : 32700
        }
      }
    }
  }
}
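For reference, this is how the mapping that actually got applied to the index can be checked (a minimal Python sketch, assuming Elasticsearch is reachable on localhost:9200; adjust host and index name as needed):

import json
import urllib.request

# Fetch the mapping Elasticsearch actually applied to the daily index,
# to see whether the template above was picked up at all.
url = "http://localhost:9200/kafka-test-car-2018.01.15/_mapping"
with urllib.request.urlopen(url) as resp:
    print(json.dumps(json.load(resp), indent=2))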
But I still get the above error. What am I missing here?
Thanks in advance!
Anna
Edit: I am using Logstash 5.2 and Elasticsearch 5.2.