Container is too small to render the visualization


(swati) #1

Hi,
I am using Kibana 4.5.5. I have created 14 visualizations and 3 dashboards; the dashboards contain 4, 5, and 5 visualizations. While loading I am getting "This container is too small to render the visualization" and loading takes time.

Please suggest a solution to load the dashboards quickly.


(Joe Fleming) #2

You've got a lot of high-cardinality data there (that is, records with a lot of unique values). That's why it's slow, and also why it can't draw the graphs. You probably have a terms aggregation on a field with many unique values (it seems that way from the legend values) and you've set the "size" on the aggregation very high.
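As an illustration of the problematic shape (the index pattern is an assumption; the field name is taken from later in this thread), a terms aggregation with its size raised far above the default forces Elasticsearch to build and return a bucket for every unique value it can find:

```json
POST /filebeat-*/_search
{
  "size": 0,
  "aggs": {
    "by_message": {
      "terms": {
        "field": "prd_ErrorMessage",
        "size": 10000
      }
    }
  }
}
```

With thousands of distinct values, every one of those buckets has to be computed, serialized, shipped to the browser, and drawn.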

Can you share what you have in the query builder for those visualizations? And can you also provide some information about what you are trying to visualize?


(swati) #3

Hi,
I am creating 3 dashboards and 14 visualizations. I am loading log files from 4 sources (K2P, K8P, K2Q and K8Q) and 7 servers. Filebeat is running separately on each source. The volume of data is very high.
I am reading three types of files (defaultTrace_.trc, applications_.log and security_00.*.log), but the format of the data is the same in all the log files.
While loading data from the log files, the prd_ErrorMessage field receives values that are longer than the maximum length.
I applied "ignore_above": 256, which resolves my timeout error, but I don't want to lose data.
"ignore_above": 256 will ignore the whole record.
I can see my CPU usage also reaches 100%.
And while loading a dashboard in Kibana, the dashboard fails to load and I get the attached error: "Google Chrome ran out of memory while trying to display this webpage."
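For context, `ignore_above` is applied in the index mapping on the string field; a minimal sketch of that setting (the index and type names are assumptions, in the Elasticsearch 2.x mapping syntax that matches Kibana 4.5):

```json
PUT /filebeat-template
{
  "mappings": {
    "log": {
      "properties": {
        "prd_ErrorMessage": {
          "type": "string",
          "index": "not_analyzed",
          "ignore_above": 256
        }
      }
    }
  }
}
```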


(Joe Fleming) #4

Swap your aggregations. Put the x-axis/date split first, then the Split Bars/application second.

Elasticsearch slices and dices your data in the order that you've specified the aggregations. So first it has to group all records by application, and then it breaks each group apart by date. It's cheaper to group by date first.

Also, in the Split Bars aggregation, in the advanced section, if you've moved the size too high, you're going to get back far too many buckets to render. The default is 5.
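Under the hood, the aggregation order is just nesting; a sketch of the cheaper shape described above (index and field names are assumptions), with the date histogram outermost and the terms size left at its default of 5:

```json
POST /filebeat-*/_search
{
  "size": 0,
  "aggs": {
    "per_day": {
      "date_histogram": { "field": "@timestamp", "interval": "1d" },
      "aggs": {
        "per_application": {
          "terms": { "field": "application", "size": 5 }
        }
      }
    }
  }
}
```

This is what Kibana builds when the date split comes first: one bucket per day, with at most 5 application buckets inside each, instead of one bucket per application each split by every day.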

You mention not losing data, but the reality is, you aren't going to be able to visualize every record you have. Elasticsearch can probably handle the query, but it's slow (as you've noted). Then all that data has to come across the wire, which takes more time. Then the JS interpreter has to convert that huge amount of JSON into JavaScript objects, which takes more time and consumes a fair amount of memory and a bit of CPU. Then all that data has to be turned into a chart somehow, which costs more memory and more CPU. This is why your browser hangs and crashes.

Generally speaking, you shouldn't need the "long tail" of your data, so you can ignore records that are common. For example, if you expect load times of 300ms or less, you don't need to see that all systems are operating within spec; you really only care about systems that are out of spec, so you can ignore the "correct" values. This also prevents information overload, where records you don't care about start to blend in with and hide records you need to know about. So what you need to figure out is how to show just the information that matters, which will make the visualizations (and the dashboards containing them) more useful, and have the added benefit of being much faster.
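One way to drop the long tail is to filter at query time before aggregating; a hedged sketch (the `load_time_ms` field and the 300ms threshold are hypothetical, borrowed from the example above, as is the index pattern):

```json
POST /filebeat-*/_search
{
  "size": 0,
  "query": {
    "range": { "load_time_ms": { "gt": 300 } }
  },
  "aggs": {
    "slow_by_day": {
      "date_histogram": { "field": "@timestamp", "interval": "1d" }
    }
  }
}
```

Only the out-of-spec records reach the aggregation, so far fewer buckets and documents travel to the browser. In Kibana, the equivalent is a filter or a query like `load_time_ms:>300` in the search bar on the dashboard.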


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.