I have a few questions related to the number of fields present in a mapping and the dynamic addition of new fields.
What can be the causes of mapping explosions?
Is it the high number of fields (in my case more than 1000 fields) present in the mapping file, or the huge number of documents present?
Are there any other reasons for a mapping explosion?
Your source of data.
An example: getting the keys and values of something like "customer_id": "N343242394638" the wrong way round in your application code would be a good way of generating a lot of unique field names (one per ID), all with the same "customer_id" value string.
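A minimal sketch of what that bug looks like on the wire (the index name my-index is just a placeholder):

```
# Intended shape: one mapped field name ("customer_id"), many values
POST my-index/_doc
{ "customer_id": "N343242394638" }

# Keys and values accidentally swapped: every distinct ID becomes a new field name
POST my-index/_doc
{ "N343242394638": "customer_id" }
```

The first form keeps the mapping at a single field; the second adds a brand-new field to the mapping for every distinct customer ID it sees.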
@Mark_Harwood Thanks for the response
From your reply I understand that we should avoid generating lots of unique field names unnecessarily.
But in situations where it is not possible to avoid any of the fields and the number is over 1000, how can we prevent a mapping explosion?
Are there any other reasons that can cause mapping explosions?
Also, what can be the consequences of mapping explosions?
By carefully controlling what JSON you pass or, if you can't, by declaring what your indexing policy is for any new fields: ignore, accept, or error.
If you're not interested in searching or aggregating certain fields that may appear in your docs you can simply choose to ignore them in your index mappings. They'll still exist in the stored JSON blob but won't be unpacked and added to any kind of index or doc-values storage.
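For illustration, here is roughly what those policies can look like in practice; the index and field names are made up for the sketch:

```
# Dynamic-field policy on an index:
#   "dynamic": true     -> accept: new fields are added to the mapping (the default)
#   "dynamic": false    -> ignore: new fields stay in _source but are not mapped
#   "dynamic": "strict" -> error: documents containing unmapped fields are rejected
PUT my-index
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "customer_id": { "type": "keyword" }
    }
  }
}

# An object field with "enabled": false is kept in the stored JSON (_source)
# but never unpacked into the inverted index or doc values
PUT my-other-index
{
  "mappings": {
    "properties": {
      "debug_payload": { "type": "object", "enabled": false }
    }
  }
}
```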
Anything that can introduce new fields into the provided JSON.
Elasticsearch rejections because you exceeded the permitted number of mapped fields. Each mapped field comes with overheads (disk + RAM) so it shouldn't become an unbounded collection.
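If you want to see how many fields have accumulated, one simple check (index name assumed) is to pull the current mapping:

```
# Inspect the full mapping to review which fields have been added so far
GET my-index/_mapping
```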
Thanks for the quick replies.
To add to my previous question:
Is there a fixed limit on the number of fields that can be stored? I see the default value is 1000 fields, but I have a situation where I have to store 1500 fields.
Is there any alternative to this mapping-explosion prevention?
In the "cluster state" which is shared with every node.
A small part of the overhead is fixed (the set of fields definitions in cluster state) and the larger part varies with the number of documents in the index. More fields = more entries in the search index data structures and RAM-based caches.
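If you genuinely need more than the default 1000 fields (for example the 1500 mentioned above), the per-index index.mapping.total_fields.limit setting can be raised; the index name here is only an example:

```
# Set a higher field limit when the index is created...
PUT my-index
{
  "settings": {
    "index.mapping.total_fields.limit": 1500
  }
}

# ...or update it on an existing index (it's a dynamic setting)
PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 1500
}
```

Raising the limit is a trade-off rather than a fix: each extra mapped field still carries the cluster-state and per-document overhead described above.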
Thanks for the response.
I have gone through the provided links, and I understand the default limit of 1000 fields.
I have 2500 static fields specified at the time of index creation. Is there a specific number of static fields that can cause a mapping explosion?
Okay, thanks for the reply.
To add on to my previous question:
If there are 2000 fields in my mapping file but the number of documents I am indexing is low (50,000), can a mapping explosion occur with such a small amount of data?
We may be talking at cross-purposes.
I don't think of "a mapping explosion" as a specific error or event.
I think of it as a general condition of having a lot of fields.
It's a condition that can lead to a number of problems (memory pressure, delays publishing the cluster state, etc.) and is the reason we introduced a soft limit on the number of fields in mappings.
> If there are 2000 fields in my mapping file but the number of documents I am indexing is low (50,000).
Sounds like a lot of fields for users to consider/search but shouldn't be too much of a problem.
Thanks @Mark_Harwood for the response. I want to understand a few more things. Can you please provide links that can help me understand the following?
How does mapping work internally within Elasticsearch?
Is it referred to for every search query?