I understand that the default field limit (`index.mapping.total_fields.limit`) is 1000, and that increasing it can work in the short term, but it is not a viable long-term solution, especially if there is a spike in data ingestion.
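For reference, this is the short-term fix I mean; a minimal sketch, assuming a hypothetical index named `my-index` (the setting name is the standard one, the index name and new limit are placeholders):

```
# Raise the per-index field limit from the default of 1000 to 2000.
# This only buys headroom; it does not stop the mapping from growing.
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{ "index.mapping.total_fields.limit": 2000 }'
```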
I haven't seen any methods or workarounds for this issue. What is the best way to handle mapping explosion? How do I do this?
I did take a look at that link before, but it explains how Elasticsearch prevents mapping explosion. What I'm looking for is ways to work around this issue, apart from raising the limit.
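To make the question concrete, here is the kind of alternative I have in mind; a sketch assuming a hypothetical index whose documents carry arbitrary user-generated keys under a `labels` object, mapped with the `flattened` type so new keys never add fields to the mapping:

```
# Hypothetical index: "labels" can hold arbitrary keys, but the
# "flattened" type stores them all under one mapped field, so the
# field count stays constant no matter how many keys arrive.
curl -X PUT "localhost:9200/my-index" \
  -H 'Content-Type: application/json' \
  -d '{
    "mappings": {
      "properties": {
        "labels": { "type": "flattened" }
      }
    }
  }'
```

Is something along these lines (or disabling dynamic mapping with `"dynamic": false`) the sort of approach people use in practice?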