Hi,
As far as I know, the keyword fields are created automatically with ignore_above: 256.
The problem is that when I group by term on that field, some values are missing because they are a bit above 256 characters (see the aggregation sketch below).
I can add the mapping with ignore_above: 300, but how do I change the default?
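For reference, the query is a terms aggregation along these lines (the index pattern and aggregation name here are just placeholders, not my exact query):

POST mylogs-*/_search
{
  "size": 0,
  "aggs": {
    "by_log": {
      "terms": { "field": "log.keyword" }
    }
  }
}

Values longer than ignore_above are skipped for the keyword sub-field at index time, so they never show up in the buckets.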
I have in Logstash:
index => "mylogs-%{+YYYY.MM.dd}"
So every day a new index will be created; how can I have each new index created with 300 already?
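For context, that index setting lives in the elasticsearch output block, roughly like this (the hosts value is an assumption, only the index line is from my config):

output {
  elasticsearch {
    # hosts is a placeholder; the index pattern is the line quoted above
    hosts => ["localhost:9200"]
    index => "mylogs-%{+YYYY.MM.dd}"
  }
}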
Thanks,
@cjabrantes
As far as I know, the keyword fields are created automatically with ignore_above: 256.
If the type of the field is keyword, you can specify ignore_above so that values longer than N characters are not indexed. If not specified, N defaults to 2147483647 (roughly 2 billion characters). But the keyword sub-field that dynamic mapping adds to an analyzed text field uses ignore_above: 256 by default.
If the field is analyzed, the tokenizer can restrict tokens to a maximum length. The standard analyzer has a max_token_length parameter that defaults to 255.
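For example, an explicit mapping that raises the limit on the sub-field could look like this (a sketch only; my-index and the log field are placeholders, not anything from your setup):

PUT my-index
{
  "mappings": {
    "properties": {
      "log": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 300
          }
        }
      }
    }
  }
}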
The problem is that when I group by term on that field, some values are missing because they are a bit above 256 characters.
Can you post your index mapping and a query you are running?
Hi,
This is the mapping created for the field when Logstash sends the data to Elasticsearch; I just send log.
"log" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
This is working as expected according to the documentation: by default a string is now mapped to both text and a keyword sub-field. So the question is how to change the default size of the keyword. Is that an option in Elasticsearch, or do I have to set it explicitly in Logstash?
If it is in Logstash, what is the correct way of doing it? In the mutate filter's convert option I don't see any keyword type.
Thanks,
One way you can achieve this is by defining an index template for these indices. This example shows a single named field; you can use dynamic templates to cover multiple (or all text) fields, as sketched after the example.
{
  "template": "mylogs-*",
  "mappings": {
    "properties": {
      "log": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 300
          }
        }
      }
    }
  },
  "aliases": {}
}
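You would install it with the template API, along these lines (the template name mylogs is a placeholder; note that from Elasticsearch 6.0 on, the pattern key is index_patterns instead of template):

PUT _template/mylogs
{
  "template": "mylogs-*",
  "mappings": {
    "properties": {
      "log": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 300 }
        }
      }
    }
  }
}

And here is a sketch of the dynamic-templates variant that applies the same keyword sub-field to every string field (the entry name strings_with_longer_keyword is a placeholder):

{
  "template": "mylogs-*",
  "mappings": {
    "dynamic_templates": [
      {
        "strings_with_longer_keyword": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 300
              }
            }
          }
        }
      }
    ]
  }
}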
Nice, works as I intended!
Thanks!