Timelion won't allow for splits greater than a certain size

I noticed that Timelion will give me this error:

Timelion: Error: in cell #1: [search_phase_execution_exception]

in all of my queries whenever I enter a value in the split field that is larger than some arbitrary threshold. If I lower the number, the query works. I'm not sure why this is happening or whether anyone else has encountered it, but identical queries in the Kibana Visualize tool for line graphs can handle larger splits than Timelion. Additionally (separate topic): is there a feature that lets the legend be placed outside the chart with its own scrollbar?

Hi @lhoang
I think you have entered a query/split that produces a high number of buckets, more than the 10,000 configured by default in Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket.html

If you take a look at your ES logs, you will probably see an error like the following:

path: /_all/_search, params: {ignore_throttled=true, index=_all, timeout=30000ms}
   │      org.elasticsearch.action.search.SearchPhaseExecutionException:
   │      	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:305) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.action.search.FetchSearchPhase$1.onFailure(FetchSearchPhase.java:91) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:757) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
   │      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
   │      	at java.lang.Thread.run(Thread.java:835) [?:?]
   │      Caused by: org.elasticsearch.search.aggregations.MultiBucketConsumerService$TooManyBucketsException: Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.
   │      	at org.elasticsearch.search.aggregations.MultiBucketConsumerService$MultiBucketConsumer.accept(MultiBucketConsumerService.java:110) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.InternalAggregation$ReduceContext.consumeBucketsAndMaybeBreak(InternalAggregation.java:83) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.bucket.histogram.InternalDateHistogram.addEmptyBuckets(InternalDateHistogram.java:407) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.bucket.histogram.InternalDateHistogram.doReduce(InternalDateHistogram.java:449) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.InternalAggregation.reduce(InternalAggregation.java:135) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:123) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.bucket.terms.InternalTerms$Bucket.reduce(InternalTerms.java:142) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.doReduce(InternalTerms.java:286) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.InternalAggregation.reduce(InternalAggregation.java:135) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:123) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.bucket.filter.InternalFilters$InternalBucket.reduce(InternalFilters.java:101) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.bucket.filter.InternalFilters.doReduce(InternalFilters.java:230) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.InternalAggregation.reduce(InternalAggregation.java:135) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:123) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:490) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.action.search.SearchPhaseController.reducedQueryPhase(SearchPhaseController.java:404) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.action.search.SearchPhaseController$1.reduce(SearchPhaseController.java:699) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:101) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:86) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
   │      	... 6 more

You can change the search.max_buckets setting in the elasticsearch.yml file or via the cluster settings API.
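For example, in the Kibana Dev Tools console you can raise it dynamically like this (20000 is just an illustrative value, pick one that fits your cluster):

PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000
  }
}

Keep in mind this limit exists as a safeguard: raising it allows aggregations to build more buckets and use more memory during the reduce phase.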

May I ask how many splits you are currently trying to compute? I only ask because the more series you add to the same chart, the harder it becomes to read.

And unfortunately no: in Timelion the legend can only be displayed inside the chart.

I am asking for 50+ splits, and it is strange that I can do it in the Visualize tool but not in Timelion.

The main difference is that Timelion uses extended_bounds on its date_histogram aggregation:

extended_bounds: {
  min: tlConfig.time.from,
  max: tlConfig.time.to
},

which extends/forces the buckets to cover the full requested time range:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-histogram-aggregation.html#search-aggregations-bucket-histogram-aggregation-extended-bounds

In Visualize, instead, we don't use that, because the visualization itself doesn't need all the buckets to represent the data.
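To illustrate, a date_histogram without the forced bounds would look roughly like this (a sketch of the relevant fragment only, not the exact request Visualize sends):

"date_histogram": {
  "field": "@timestamp",
  "interval": "12h",
  "time_zone": "Europe/Rome",
  "min_doc_count": 0
}

Without extended_bounds, buckets are only created across the time span actually covered by the matching documents, instead of the whole requested range.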

So, for example, this is a query that comes from Timelion:

GET kibana_sample_data_logs/_search
{
    "query": {
      "bool": {
        "must": [
          {
            "range": {
              "@timestamp": {
                "gte": "2019-03-30T17:32:20.691Z",
                "lte": "2019-06-28T16:32:20.691Z",
                "format": "strict_date_optional_time"
              }
            }
          }
        ],
        "filter": {
          "bool": {
            "must": [],
            "filter": [
              {
                "match_all": {}
              }
            ],
            "should": [],
            "must_not": []
          }
        }
      }
    },
    "aggs": {
      "q": {
        "meta": {
          "type": "split"
        },
        "filters": {
          "filters": {
            "*": {
              "query_string": {
                "query": "*"
              }
            }
          }
        },
        "aggs": {
          "geo.src": {
            "meta": {
              "type": "split"
            },
            "terms": {
              "field": "geo.src",
              "size": 60
            },
            "aggs": {
              "time_buckets": {
                "meta": {
                  "type": "time_buckets"
                },
                "date_histogram": {
                  "field": "@timestamp",
                  "interval": "12h",
                  "time_zone": "Europe/Rome",
                  "extended_bounds": {
                    "min": 1553967140691,
                    "max": 1561739540691
                  },
                  "min_doc_count": 0
                },
                "aggs": {
                  "sum(bytes)": {
                    "sum": {
                      "field": "bytes"
                    }
                  }
                }
              }
            }
          }
        }
      }
    },
    "size": 0
  }

This will throw the ES exception I listed in the previous post.
If you remove the extended_bounds part, it will return without any problem.
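Roughly why: the requested range (2019-03-30 to 2019-06-28) is about 90 days, which at a 12h interval is about 180 date buckets per series. With extended_bounds and min_doc_count: 0, each of the up-to-60 terms buckets gets all ~180 of them, so the reduce phase ends up with roughly 60 × 180 ≈ 10,800 buckets, just over the 10,000 default. Without the forced bounds, buckets only span the range actually covered by the data, which here is small enough to stay under the limit.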

If you want, you can open a feature request to remove that parameter from the query: https://github.com/elastic/kibana/issues/new?template=Feature_request.md

Is there a way to remove this extended_bounds parameter myself? And if so, will doing so make Timelion handle the query splits the same way Visualize does?

Unfortunately no, I don't think you can remove it right away; that parameter is currently required by Timelion to correctly visualize the data.
If you want to take a look, it's specified here: https://github.com/elastic/kibana/blob/3d380d199c6679484f30da198d35c59a8e5ef420/src/legacy/core_plugins/timelion/server/series_functions/es/lib/create_date_agg.js#L31-L34

You can open a feature request to change that behaviour so it aligns with standard line charts: https://github.com/elastic/kibana/issues/new?template=Feature_request.md
