Request timeout after 30000ms - should I set a higher value?

I was reading through the following post and had some follow-up questions:

My issue:

When I run queries over long time ranges, or more complex queries, in Kibana, I receive the following error:

Fatal Error: Courier fetch: Request Timeout after 30000ms

My question:

I'd like to set the following value in kibana.yml to address my issue:

Before: elasticsearch.shardTimeout: 30000 (30 seconds)
After: elasticsearch.shardTimeout: 300000 (5 minutes)
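
In kibana.yml the change would look like this (the value is in milliseconds; from my read of the Kibana docs, setting it to 0 would disable the shard timeout entirely):

# kibana.yml -- how long Elasticsearch waits for responses from shards,
# in milliseconds (0 disables the timeout)
elasticsearch.shardTimeout: 300000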

Will I run into any performance issues that could make my system unresponsive or require a reboot? When I run these queries, my system's load jumps to a maximum of 80% CPU utilization and a load average of 4.

I don't mind if the queries take longer than 30 seconds.

A little background on my Elastic Stack setup:

I'm using Elasticsearch, Logstash, Kibana, and Winlogbeat, all at version 6.4.0.

I have around 10 million documents (about 7.5 GB) on my single-node Elastic Stack setup. The system specifications are 2 CPUs / 8 GB RAM. I've implemented some of the usual methods to improve Elasticsearch's performance, such as setting the JVM heap to 4 GB and preventing memory swapping.
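
For reference, those tweaks boil down to a few lines (paths assume a standard package install; adjust to your layout):

# /etc/elasticsearch/jvm.options -- fixed 4 GB heap (roughly half of the 8 GB RAM)
-Xms4g
-Xmx4g

# /etc/elasticsearch/elasticsearch.yml -- lock the heap in RAM so it can't be swapped
bootstrap.memory_lock: true

Note that bootstrap.memory_lock only takes effect if the memlock limit is raised as well (e.g. LimitMEMLOCK=infinity in a systemd unit override).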

However, as my indices grow past the 10 million document mark, my node's performance is beginning to falter when I run queries with multiple filters (no wildcards) and longer time ranges (over one week). I know this is pretty normal, especially given the system specifications I'm bound to. I'll upgrade to a cluster / add more CPU and memory later down the line.

Thanks for reading!

Just a quick update: I changed the setting from 30 seconds to 5 minutes (30000 --> 300000), but I still get the same error:

Fatal Error: Courier fetch: Request Timeout after 30000ms

Any clue why the change isn't taking effect? I've updated the setting and restarted Kibana, but the error persists. I'm starting to think it might be related to Nginx, since I'm proxying Kibana through it.
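
One thing I haven't ruled out yet: kibana.yml also has elasticsearch.requestTimeout, which defaults to 30000 ms, and the Courier error's wording matches that default, so it may be the setting I actually need to raise:

# kibana.yml -- how long Kibana waits for a response from the Elasticsearch
# back end, in milliseconds (defaults to 30000)
elasticsearch.requestTimeout: 300000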

I've created a file called /etc/nginx/conf.d/proxy-settings.conf with the following contents:

# timeouts are in seconds unless a unit is given (90m = 90 minutes)
proxy_connect_timeout       300;
proxy_send_timeout          300;
proxy_read_timeout          90m;
send_timeout                300;

# largest request body Nginx will accept (megabytes)
client_max_body_size        1000m;
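
On my install, /etc/nginx/nginx.conf pulls in conf.d/*.conf inside the http block, so these directives apply to every server. After saving the file I validated the config and reloaded:

# check the syntax, then apply without dropping active connections
sudo nginx -t
sudo systemctl reload nginx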

I based the values on this guide: https://wiki.ssdt-ohio.org/display/rtd/Adjusting%20nginx-proxy%20Timeout%20Configuration

This seems to have fixed the issue. I'm still a bit concerned, though, since the query pushed my system's load average to 7.5 before it completed successfully.

I guess this is a limitation that's best addressed by adding more / better hardware. I'm kind of running on fumes with 2 CPUs / 8 GB RAM for the amount of data I'm querying. :sweat_smile:
