The number of fields in my index is shooting above the limit (1000). I don't want to increase the limit any further (to avoid mapping explosion), but am instead planning to cut down on the number of fields. There are fields something like this (not exactly the same):
It seems like nested fields may be an option, but do visualisations work if I do something like this?

```
last_name: {
  jacob:
  susan:
  patrice:
  george:
}
```
If not, what's the way to reduce the number of fields without essentially deleting any fields, while still being able to query and visualise everything?
Why would you choose to structure your data that way? Why not just use `"last_name": "Smith", "first_name": "Susan"` or something similar? Having field names created dynamically like that will almost always cause a mapping explosion and be inefficient.
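As a sketch of what that restructuring looks like, with names stored as field *values* rather than field *names* (the index name `users` is illustrative):

```json
PUT users/_doc/1
{
  "first_name": "Susan",
  "last_name": "Smith"
}
```

With this shape, the mapping only ever contains two fields (`first_name` and `last_name`), no matter how many users you index.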
This will allow you to search on combinations of first and last name, although Kibana largely lacks support for handling nested documents. On the other hand having very large mappings is not handled well in Kibana either....
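For reference, a nested mapping along those lines might look like the following sketch (index and field names are illustrative, and `keyword` is just one possible choice of type):

```json
PUT users
{
  "mappings": {
    "properties": {
      "names": {
        "type": "nested",
        "properties": {
          "first_name": { "type": "keyword" },
          "last_name":  { "type": "keyword" }
        }
      }
    }
  }
}
```

A nested query can then match `first_name` and `last_name` within the same object, which is what makes searching on combinations possible.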
No, I do not think upgrading will solve this issue for you. One option might be to denormalise and store one user per document, but as I do not know your use case I can't tell whether that is an option or not. You can follow the discussion about support for nested documents in this GitHub issue (although there may also be others).
As mappings are loaded and processed in the browser, having very large mappings can slow things down considerably, so if possible I would recommend rethinking the data model.
I have a follow-up question. Until I rethink my data model and discuss it with my team, I want to increase the field limit dynamically.
Is there a way to dynamically change the field limit? For example, if the number of fields increases to 1400, the limit should adapt to that without raising an error.
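As far as I know there is no setting that makes the limit grow automatically, but the limit itself is a dynamic index setting, so it can be raised on a live index with a settings update (the index name `my-index` is illustrative):

```json
PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 1400
}
```

Note that this only postpones the problem: every increase makes the mapping-explosion and Kibana slowness issues discussed above worse, so it should be a stopgap while the data model is reworked.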