Is the maximum value for index.mapping.total_fields.limit Java's Long.MAX_VALUE (2^63-1)?
I'm wondering how many fields I can have in my documents -- I'd like tens of thousands.
I do not know if there is a fixed limit, but I suspect you will run into practical problems long before you reach any theoretical one. There is a good reason the default has been set to 1000, and I would recommend rethinking your data model if it requires a considerably larger number of fields. Each new field added (unless you specify them all up front) requires the mapping, and consequently the cluster state, to be updated and distributed across all nodes in the cluster. As the mappings grow, they use more heap and take longer to update and distribute, causing performance and potentially also stability problems.
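For context, the limit being discussed is the index.mapping.total_fields.limit index setting (default 1000). If you decide you need more fields despite the caveats above, it is a dynamic setting and can be raised per index -- the index name and value below are just placeholders:

```
PUT /my-index/_settings
{
  "index.mapping.total_fields.limit": 20000
}
```

Note that raising the limit does not remove the mapping and cluster-state overhead described above; it only moves the ceiling.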