Performance issue on Kibana

Hi Team,
Hi Team,
I have Dockerized my ELK implementation. My ELK node infrastructure is shown below:
[image: ELK node infrastructure]

I have created around 15 dashboards, each with around 10 to 15 visualizations, and some visualizations load more than 10K records. While accessing those dashboards, I am facing performance issues (sometimes timeouts occur). When I select a range of more than one day in Kibana, it goes down completely.

Could someone please help me fix this issue?

Regards,
Siva.P

Have you identified what is limiting performance? Is it CPU? Disk I/O and iowait? GC?

@Christian_Dahlqvist,

In total I have 2,332 shards, 1,074 indices and 492,080,781 documents, and I am using the X-Pack trial, so I monitored node performance through Kibana. While accessing the dashboard, the Elasticsearch search latency is 24.8/s, the index latency is 0.23 ms, and the total shards search rate is 28.2/s. The dashboard took almost 3 minutes to load completely.

I have also checked the CPU and memory usage of the individual Docker containers, as below.

The request status can also be seen in the browser's Network tab.

Could you please help me analyze the performance further and fix this issue?

Thanks
Siva.P

@Christian_Dahlqvist, Team,

Could someone please help me with this issue?

Thanks
Siva.P

Given that you only have two nodes in your cluster, your shard count sounds quite high, which could very well be contributing to your problems. Please read this blog post for some practical guidance on shards and sharding.
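
As a starting point, it can help to see how those shards are spread across your indices, e.g. via the _cat and cluster health APIs. Here is a minimal sketch using Python's requests library (an illustration only; it assumes Elasticsearch is reachable on localhost:9200 without security enabled):

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster reachable locally without security

# Per-index primary/replica shard counts and sizes, largest indices first.
resp = requests.get(
    f"{ES}/_cat/indices",
    params={"v": "true", "h": "index,pri,rep,docs.count,store.size", "s": "store.size:desc"},
)
print(resp.text)

# Cluster-wide shard totals, for comparison against the guidance in the blog post.
health = requests.get(f"{ES}/_cluster/health").json()
print("active primary shards:", health["active_primary_shards"])
print("active shards total:  ", health["active_shards"])
```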

It would also be interesting to know what types of visualisations and aggregations you have in your dashboards. You mentioned loading 10k documents in some dashboards. Are these saved searches embedded in the dashboards?

@Christian_Dahlqvist.

Thanks for your prompt response!

I will try to add one more node to handle my shards and, at the same time, try to reduce the shard count. I am not using saved searches; mostly I have used bar chart and data table visualizations. As for aggregations, I use Terms and Filters aggregations in the buckets and a Top Hits aggregation in the metric.
I am attaching one of the sample bar charts: on the x-axis I have some terms, and each column shows more than 1,000 counts in a stacked view. In the same way, the data table loads more than 10K records.
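
For reference, the request such a data table sends to Elasticsearch looks roughly like the following (a sketch only; the index and field names are made up, and the real request can be copied from the browser's Network tab):

```python
import requests

# Rough sketch of the kind of request the data table generates. A terms bucket
# with a top_hits metric has to fetch documents from every matching shard,
# which gets expensive once the table is showing 10K+ rows.
body = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-1d/d", "lte": "now"}}},
    "aggs": {
        "by_category": {
            "terms": {"field": "category.keyword", "size": 100},
            "aggs": {
                "latest_doc": {
                    "top_hits": {
                        "size": 1,
                        "sort": [{"@timestamp": {"order": "desc"}}],
                    }
                }
            },
        }
    },
}
resp = requests.post("http://localhost:9200/my-index-*/_search", json=body)
print("took:", resp.json()["took"], "ms")
```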

Thanks
Siva.P

What types of aggregations are you using? Are you using scripted fields? What does disk I/O look like while you are querying, e.g. through iostat?

@Christian_Dahlqvist,

Yes, I am using only one scripted field, to achieve the color indications in the data table.

The sample data table is shown below:
[image: sample data table]

Is there any other option to achieve the color indications in Kibana (so I can avoid the scripted field)?
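
For example, would pre-computing the indicator at index time with an ingest pipeline, and then using that plain field in the data table, be a reasonable alternative? A rough sketch of what I have in mind (pipeline, index and field names are hypothetical):

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster reachable locally without security

# Ingest pipeline that derives a simple colour indicator at index time, so the
# data table can show a plain field instead of relying on a scripted field.
pipeline = {
    "description": "Derive status_color from error_count (hypothetical fields)",
    "processors": [
        {
            "script": {
                "lang": "painless",
                "source": "ctx.status_color = ctx.error_count > 0 ? 'red' : 'green'",
            }
        }
    ],
}
requests.put(f"{ES}/_ingest/pipeline/add-status-color", json=pipeline)

# Index documents through the pipeline.
requests.post(
    f"{ES}/my-index/_doc?pipeline=add-status-color",
    json={"error_count": 3, "@timestamp": "2018-10-22T10:00:00Z"},
)
```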

I have mostly used Terms and Sum aggregations.

Please find the server infrastructure details below, and let me know whether that server's CPU/RAM is adequate to use as a third node.
[image: server infrastructure details]

Thanks
Siva.P

@Christian_Dahlqvist, Team,

On the Elasticsearch side, if I increase index.refresh_interval, threadpool.search.size, the size of the search queue, and indices.memory.index_buffer_size, will that improve my Kibana performance?
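
For example, something along these lines (a sketch only; index.refresh_interval is a dynamic index setting, while the thread pool and index buffer settings are static node settings that go in elasticsearch.yml and need a restart):

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster reachable locally without security

# index.refresh_interval can be changed dynamically per index (or index pattern):
requests.put(
    f"{ES}/my-index-*/_settings",
    json={"index": {"refresh_interval": "30s"}},
)

# The other settings are static node-level settings and would go in
# elasticsearch.yml on each node, e.g.:
#   thread_pool.search.queue_size: 2000
#   indices.memory.index_buffer_size: 20%
# followed by a restart of the node.
```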

Thanks in advance!

@Christian_Dahlqvist and Team,

Could someone please help me with this issue?

Thanks
Siva.P

Probably not.

What does disk I/O and iostat look like when querying? Is disk performance maybe the limiting factor?

@Christian_Dahlqvist

While accessing those dashboards, I got the following iostat output:

```
Linux 3.10.0-693.17.1.el7.x86_64   10/22/2018   x86_64   (16 CPU)

avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
           5.97    0.00     1.14     0.06    0.00   92.82

Device:     tps   kB_read/s   kB_wrtn/s      kB_read      kB_wrtn
dm-8      57.03       50.46     1090.49    616957805  17047994183
```

Thanks
Siva.P

@Christian_Dahlqvist,
Please find the request and query statistics for the respective charts below.
Data Table Visualization
[image: data table request and query statistics]

Bar chart Visualization
[image: bar chart request and query statistics]

Please suggest possible solutions.

Thanks in advance!

@Christian_Dahlqvist,

Could you please share your thoughts on this?

Thanks
Siva.P

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.