@timestamp term tables make dashboard very slow

You'll have to think about the number of aggregation buckets coming over the network and being parsed in the browser. The Elasticsearch APIs return JSON, which arrives as a single string and is parsed into JavaScript objects in the browser. The JavaScript runtime is single-threaded (true as of this writing, because Kibana doesn't use web workers), and parsing JSON is a CPU-intensive, blocking operation.
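To get a feel for how fast that string grows, here's a rough sketch (in Python rather than browser JavaScript, purely to show the scaling; the field names mimic the general shape of a terms-aggregation response and are assumptions, not an exact Kibana payload):

```python
import json

def fake_terms_response(n_buckets):
    # Mimic the rough shape of an Elasticsearch terms-aggregation
    # response (illustrative field names, not a real payload).
    return {
        "aggregations": {
            "by_timestamp": {
                "buckets": [
                    {"key": 1500000000000 + i * 60000, "doc_count": 1}
                    for i in range(n_buckets)
                ]
            }
        }
    }

small = json.dumps(fake_terms_response(100))
large = json.dumps(fake_terms_response(129_600))

# The string the browser must parse grows linearly with bucket count,
# and parse time grows with the string.
print(len(small), len(large))
```

The 129,600-bucket payload serializes to several megabytes, all of which the browser's single thread has to parse before it can render anything.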

So the more buckets in the Elasticsearch result, the larger the data string and the longer it takes to parse. If you're aggregating on timestamp as the term, and you have a document for every, say, minute, a 90-day search would come back with about 129,600 buckets. If you have sub-aggregations (split rows), the amount of data coming back is some multiple of that.
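For reference, 129,600 corresponds to one bucket per minute over 90 days (one per hour would only be 2,160); a quick sketch of the arithmetic, where the sub-aggregation size of 5 is just an assumed example:

```python
minutes_per_day = 24 * 60               # 1440
per_minute_buckets = 90 * minutes_per_day
print(per_minute_buckets)               # 129600

per_hour_buckets = 90 * 24
print(per_hour_buckets)                 # 2160

# A split-rows sub-aggregation multiplies the bucket count; e.g.
# splitting each time bucket into 5 terms (an assumed size):
print(per_minute_buckets * 5)           # 648000
```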

Along with that, Kibana proxies the entire response from Elasticsearch straight to the browser so you can inspect it. It doesn't filter or map the response to reduce the payload size.

It could be that you're trying to represent something like the actual raw documents in a table visualization. Have you considered adding Saved Searches to your dashboard instead?

If Kibana has no built-in solution that fits what you're trying to do, you could also think about building a custom visualization plugin that runs custom searches using Elasticsearch features like source filtering, field collapsing, and composite aggregations to reduce the result data down to a more manageable size. If you can get your data with a query rather than an aggregation, or from a composite aggregation, the JSON data string will be much smaller because the values in it have far less depth.
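As a rough sketch of what such reduced request bodies might look like (the index and field names here are assumptions, I'm just building the bodies as Python dicts for illustration; in a plugin you'd send the JSON from the browser):

```python
import json

# Source filtering: a plain query that returns only the fields you
# need from each hit, instead of an aggregation tree.
query_body = {
    "_source": ["@timestamp", "status"],   # assumed field names
    "size": 500,
    "query": {"range": {"@timestamp": {"gte": "now-90d"}}},
}

# Composite aggregation: flat, pageable buckets instead of a deeply
# nested tree; you page through results with the returned "after_key".
composite_body = {
    "size": 0,
    "aggs": {
        "my_buckets": {
            "composite": {
                "size": 1000,
                "sources": [
                    # The date_histogram interval parameter name varies
                    # by Elasticsearch version ("interval" vs
                    # "fixed_interval"/"calendar_interval").
                    {"ts": {"date_histogram": {"field": "@timestamp",
                                               "interval": "1h"}}},
                    {"status": {"terms": {"field": "status"}}},
                ],
            }
        }
    },
}

print(json.dumps(composite_body, indent=2))
```

Because the composite aggregation returns buckets in flat pages of a fixed size, the browser never has to parse one giant nested response in a single go.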