I have a cluster with multiple Elasticsearch nodes and a Kibana server.
I have not made any modifications to Kibana recently, nor to the index settings, but all of a sudden one day I started getting "X of Y shards failed", and when I looked into it the reason was that `max_result_window` is 10,000, not 50,000.
Why is Kibana suddenly requesting a result window five times larger than the limit?
As far as I know, Kibana does not change that setting directly; it honors the value configured on the index (the default is 10k, yes).
For example, in this discussion the Maps application needs read access to that setting:
Anyway, that setting is associated with each index and can be changed at any time, as documented here. Maybe take a look at your indices to see whether this setting has changed on any of them?
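For instance, you can check and (if really needed) raise the setting from Kibana Dev Tools; `my-index` here is a placeholder for your own index name:

```
GET my-index/_settings/index.max_result_window

PUT my-index/_settings
{
  "index": { "max_result_window": 50000 }
}
```

Note that `index.max_result_window` is a dynamic, per-index setting, so the `PUT` takes effect immediately without reindexing. Raising it increases heap usage for deep pagination, so lowering whatever is requesting 50k results is usually the better fix.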
If anyone lands on this thread with the same issue as me:
In my case it seems that sometime in the past I had edited the Kibana advanced setting `discover:sampleSize`, and restoring it to a lower value fixed this.
Same as here:
In an ES 6.8.6 cluster, I created an index pattern in Kibana and when I went to the Discover tab, I got the error:
Discover: Result window is too large, from + size must be less than or equal to:  but was . See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting
[Screenshot 2020-11-30 at 11.33.12 PM]
The indices are huge, containing millions of docs. But how am I supposed t…
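The error quoted above points at the scroll API as the alternative to deep `from + size` pagination. A rough Dev Tools sketch (assuming an index named `my-index`; the scroll ID is whatever the first response returns):

```
POST my-index/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}

POST _search/scroll
{
  "scroll": "1m",
  "scroll_id": "<scroll_id from the previous response>"
}
```

Each scroll request returns the next batch of up to `size` hits, so millions of docs can be paged through without ever hitting `max_result_window`.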
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.