There are many blogs stating that a high number of fields in the mapping leads to a mapping explosion, which may cause out-of-memory errors. However, I am unable to understand which exact data structure causes this memory bloat, and whether it is purely a memory concern that can be fixed with more RAM or a fundamental limitation in the system.
We have a use case for storing custom fields on many documents. Each document would have hundreds of custom fields, and that number can keep growing, so we planned to store them as nested documents like so:
{
  "custom_fields": [
    {
      "field_name": "priority",
      "keyword_field_value": "p1"
    },
    {
      "field_name": "ingested_at",
      "date_field_value": "2024..."
    }
  ]
}
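For context, the mapping we had in mind for this pattern is roughly the following sketch (the names custom_fields, field_name, keyword_field_value, and date_field_value are just our own naming, and the index name is a placeholder). The idea is that the mapping stays fixed at a handful of fields no matter how many distinct custom fields the documents carry:

PUT /our-index
{
  "mappings": {
    "properties": {
      "custom_fields": {
        "type": "nested",
        "properties": {
          "field_name": { "type": "keyword" },
          "keyword_field_value": { "type": "keyword" },
          "date_field_value": { "type": "date" }
        }
      }
    }
  }
}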
However, this leads to too many documents (each nested document is a separate Lucene document) and therefore hurts search latency.
Would it be feasible to instead increase the field limit to 10K, given that we only use static (explicit) mappings and every field addition is intentional? We would also increase the memory of the appropriate nodes (I believe it is the data nodes' memory that would be affected).
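Concretely, what I am asking about is raising index.mapping.total_fields.limit, i.e. something along these lines (the index name is just a placeholder):

PUT /our-index/_settings
{
  "index.mapping.total_fields.limit": 10000
}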