When trying to index documents into an Elasticsearch 6.8.22 instance, I am seeing error messages like this:
{'index': {'_index': 'data_explorer', '_type': 'flywheel', '_id': '64ff3d667ed067064bf7bd7e', 'status': 400, 'error': {'type': 'illegal_argument_exception', 'reason': 'Limit of total fields [15000] in index [data_explorer] has been exceeded'}}, 'partition_id': 'jey3j3sDdrQV', 'shard_id': None, 'path': {'group': 'parker', 'project': '64ff3b877ed067064bf7bd1b'}}
However, if I hit the /data_explorer endpoint to get information about this index, I see that it has fewer than 11500 total fields. Why is it reporting that it has reached the 15000 field limit when it has fewer than 11500 fields? Is my process for checking the index information missing some fields?
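For reference, this is roughly how I'm counting fields from the mapping response (a simplified sketch; the cluster URL and the recursive walk over `properties` and `fields` are just how my script happens to do it):

```python
import requests

ES = "http://localhost:9200"   # assumption: local cluster URL
INDEX = "data_explorer"

def count_fields(properties):
    """Recursively count mapped fields, including object sub-properties
    and multi-fields declared under 'fields'."""
    total = 0
    for field_def in properties.values():
        total += 1
        # object/nested fields carry their own 'properties'
        total += count_fields(field_def.get("properties", {}))
        # multi-fields (e.g. a keyword sub-field) live under 'fields'
        total += count_fields(field_def.get("fields", {}))
    return total

mapping = requests.get(f"{ES}/{INDEX}/_mapping").json()
for index_name, body in mapping.items():
    for type_name, type_mapping in body["mappings"].items():
        print(index_name, type_name, count_fields(type_mapping.get("properties", {})))
```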
This is a very old version of Elasticsearch; my first recommendation is to upgrade it.
That said, mapping types are deprecated and have been removed as of Elasticsearch 8.x.
I have a suspicion that if you get the mappings from /data_explorer/_mapping/flywheel instead of just /data_explorer, you will see there are more mappings and thus more fields on the index. Could you please try that?
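Something along these lines would do as a quick check (just a sketch; substitute your own cluster URL):

```python
import json
import requests

ES = "http://localhost:9200"   # assumption: your cluster URL

whole_index = requests.get(f"{ES}/data_explorer/_mapping").json()
flywheel_only = requests.get(f"{ES}/data_explorer/_mapping/flywheel").json()

# quick sanity check: does one response contain mappings the other doesn't?
print(len(json.dumps(whole_index)), len(json.dumps(flywheel_only)))
```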
We are planning to upgrade to ES 7 in the next few months, and 8 at some point after that. Nonetheless, this behavior isn't a known issue with ES 6.8.22, is it? I tried the /data_explorer/_mapping/flywheel endpoint as you suggested. The resulting JSON is slightly different, but I still see the exact same number of fields.
Do multi-fields count as multiple fields toward the field limit?
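For example (a made-up field, not one from our actual mapping), would a definition like this count as one field or two toward the limit?

```python
# hypothetical multi-field: 'label' is analyzed text, with a 'label.raw'
# keyword sub-field declared under 'fields'
label_mapping = {
    "label": {
        "type": "text",
        "fields": {
            "raw": {"type": "keyword"},   # multi-field: label.raw
        },
    }
}
```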