Thanks for the reply.
Here are some of my follow-up queries:
I tested with 25 million records and found that I can query my data with 1 GB of allocated heap.
But if I increase the records to 30 million and then query the data with the same 1 GB of allocated heap, I get the following exception:
[FIELDDATA] Data too large, data for [bad_score] would be larger than limit of
[623326003/594.4mb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for
[----] would be larger than limit of [623326003/594.4mb]]; }
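If I understand it correctly, the [623326003/594.4mb] figure is the fielddata circuit breaker limit, which I believe defaults to 60% of the JVM heap (0.6 x ~990 MB of committed heap comes to roughly 594 MB in my case), so the query trips the breaker once fielddata for that field would push past this limit. Please correct me if that reading is wrong.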
Please let me know in detail what the different strategies are to avoid this error in a production environment. Do we need to keep monitoring the Elasticsearch nodes and then scale horizontally as and when required? (I have put a rough sketch of the kind of monitoring I had in mind at the end of this message.)
Can I assume that if I increase my data to n times 25 million records, I will also need to increase my heap memory to n times 1 GB? For example, would 50 million records need roughly 2 GB of heap and 100 million roughly 4 GB?
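For reference, here is a rough sketch of the monitoring I had in mind, just to frame the question. It polls the node stats API and compares per-node fielddata memory against the breaker limit reported in my exception. The node address and the 80% alert threshold are placeholders I made up, so please treat this as an illustration only, and do tell me if there is a better or more standard way to watch for this.

# Rough monitoring sketch (not production code): poll node stats and
# report fielddata memory relative to the breaker limit from my exception.
import requests

ES_URL = "http://localhost:9200"        # placeholder node address
FIELDDATA_LIMIT_BYTES = 623326003       # the limit reported in my exception
ALERT_RATIO = 0.8                       # arbitrary threshold, just for this example

def check_fielddata():
    # Node stats, restricted to the fielddata section of the indices metrics.
    stats = requests.get(ES_URL + "/_nodes/stats/indices/fielddata").json()
    for node_id, node in stats["nodes"].items():
        used = node["indices"]["fielddata"]["memory_size_in_bytes"]
        ratio = used / FIELDDATA_LIMIT_BYTES
        print("%s: fielddata %d bytes (%.0f%% of breaker limit)"
              % (node.get("name", node_id), used, ratio * 100))
        if ratio > ALERT_RATIO:
            print("  -> getting close to the fielddata breaker limit")

if __name__ == "__main__":
    check_fielddata()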