Thanks for the article. I'll look through it today and see how I can integrate it with the Kibana dashboard JSON.
Just to be clear for anyone else who reads this: setting "size" to something much bigger than the number of terms does not appear to cause a performance decrease. In my example, setting size to 20000 produced a query that took 22.6 seconds over a 15-minute window, and setting size to 1000000 produced a query that took 21.6 seconds over another 15-minute window. The query response sizes were identical, meaning a size of 20000 was already large enough to capture my full data set. Notes on methodology: the 15-minute windows were non-overlapping so that caching did not affect the results, but their document counts were within 0.2% of each other, so they were of comparable size.
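For reference, the comparison above was between terms aggregations along these lines (the index, field name, and timestamp field are illustrative, not from my actual dashboard; only the "size" value changed between the two runs):

```json
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": { "gte": "now-15m", "lt": "now" }
    }
  },
  "aggs": {
    "by_term": {
      "terms": {
        "field": "my_field.keyword",
        "size": 20000
      }
    }
  }
}
```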
The main problem appears to be that the 6.2 update to Kibana replaced the old, fast implementation of the table with a new, inefficient one, and fixing that does not seem to be a priority: see my question here.
I appreciate all the help in troubleshooting.