I know how to set the total fields limit on an index (ES 5.x), i.e., increase or decrease the default, and also how to use a template to apply that setting to newly created indices. However, I'm trying to better understand ways to limit the number of fields so as not to hit the default limit at all. I've read some documentation, but I still feel I don't have a clear picture of how best to handle the situation. Any help is appreciated.
It looks like this is handled by the index setting index.mapping.total_fields.limit, so you should be able to bump that value. Just understand that more fields means more overhead and that sparse fields cause trouble, so raise it with caution. Bumping into the limit is likely a sign that you are doing something that isn't going to work well with Elasticsearch in the future.
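For reference, a minimal sketch of bumping the limit on an existing index; the index name is a placeholder and 2000 is just an illustrative value:

```
PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```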
Nik,
Thanks for the reply. I guess I wasn't clear in my question, so I edited it a bit to hopefully clarify it. I do know how to increase that value; however, I'd prefer not to, and was wondering about ways to handle things so that I don't run into the limit in the first place.
Don't make so many fields. Personally, I don't like dynamic mapping. I set "dynamic": false, which will keep new fields in the stored document but not index them. Then I can carefully decide which fields to add. Other folks with other use cases prefer "dynamic": "strict", which will reject documents that would add new fields.
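A minimal sketch of what that looks like in a 5.x mapping; the index, type, and field names here are placeholders:

```
PUT my-index
{
  "mappings": {
    "logs": {
      "dynamic": false,
      "properties": {
        "@timestamp": { "type": "date" },
        "message":    { "type": "text" }
      }
    }
  }
}
```

Swap "dynamic": false for "dynamic": "strict" if you'd rather have indexing fail loudly than have unmapped fields silently left out of the index.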
The usual strategies for making do with fewer fields are to combine similar ones or to use key/value objects with nested fields. Key/value objects with nested fields are much slower to query than regular fields, but they don't have the sparsity storage problems. I prefer to try to lay out the data so it doesn't need so many fields, but I don't know your use case, so I can't really give you any hints on how to do that.
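To make the key/value idea concrete, a sketch under the assumption that your extra attributes can be folded into one repeated object (the labels name is made up):

```
PUT my-index
{
  "mappings": {
    "logs": {
      "properties": {
        "labels": {
          "type": "nested",
          "properties": {
            "key":   { "type": "keyword" },
            "value": { "type": "keyword" }
          }
        }
      }
    }
  }
}
```

A document then carries something like "labels": [{"key": "env", "value": "prod"}], and you query it with a nested query that matches key and value together. That's two mapped fields total, no matter how many distinct keys show up in the data.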
Thanks for the info, Nik. I tried doing this on an existing index (I have daily logs going to a daily index) that was still receiving data, but the error about reaching the limit continues to appear in the elasticsearch log, which I'm guessing is because the limit has already been reached, so stopping the dynamic indexing now won't change much. I'll try setting it up as a template so that the index created tomorrow should have it. Would that stop the errors in the elasticsearch logs, or do I need to do anything else? I was also wondering whether I can clear the indexed fields on the existing index to test things out.
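Something like this is what I'm planning for tomorrow's index (a sketch; the template name is made up, and fluentd is the type my logs use):

```
PUT _template/logstash_dynamic_off
{
  "template": "logstash-*",
  "mappings": {
    "fluentd": {
      "dynamic": false
    }
  }
}
```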
The error I still receive when I try to update the index setting is the following:
failed to put mappings on indices [[[logstash-2017.05.31/MIUnLlV7Qsq_oe0GElLEfQ]]], type [fluentd]
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [logstash-2017.05.31] has been exceeded
After setting index.mapper.dynamic to false and applying it to a new index, I now get:
org.elasticsearch.index.query.QueryShardException: No mapping found for [@timestamp] in order to sort on
I tried adding @timestamp to the index, but that capability appears to have been removed in version 5.0, and looking around online, it looks like the recommendation is to configure an ingest pipeline. Is that the right route, or would it be better to simply re-enable dynamic mapping and create separate indices for each data source?
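In the meantime, I'm experimenting with mapping @timestamp explicitly in the same template so the sort has something to work with even with dynamic mapping off (again just a sketch):

```
PUT _template/logstash_dynamic_off
{
  "template": "logstash-*",
  "mappings": {
    "fluentd": {
      "dynamic": false,
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}
```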