Will I run into any performance issues that could make my system unresponsive or require a reboot? When I run these queries, my system's load peaks at around 80% CPU utilization, or a load average of 4.
I don't mind if the queries take longer than 30 seconds.
A little background on my Elastic Stack setup:
I'm using Elasticsearch, Logstash, Kibana, and Winlogbeat all version 6.4.0.
I have around 10 million documents (about 7.5 GB) on my single-node Elastic Stack setup. The system specifications are 2 CPUs / 8 GB RAM. I've applied some of the standard Elasticsearch performance tuning, such as setting the JVM heap to 4 GB and preventing memory swapping.
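For reference, the heap and swap tuning mentioned above map to the standard Elasticsearch settings; a sketch (the exact files on my node may differ slightly):

```
# jvm.options — pin the heap at 4 GB (set min and max equal)
-Xms4g
-Xmx4g

# elasticsearch.yml — lock process memory so the OS can't swap it out
bootstrap.memory_lock: true
```

Note that `bootstrap.memory_lock` also requires the service user to have an unlimited `memlock` ulimit, or the node will fail the bootstrap check.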
However, now that my indices are growing past the 10 million document mark, my node's performance is beginning to degrade when I run queries with multiple filters (no wildcards) and longer time ranges (over one week). I know this is pretty normal, especially given the system specifications I'm constrained by. I'll upgrade to a cluster / add more CPU and memory later down the line.
Just a quick update. I changed the setting from 30 seconds to 5 minutes (30000 --> 300000) but still have the same error:
Fatal Error: Courier fetch: Request Timeout after 30000ms
Any clue how to change the setting? I've updated it and restarted Kibana, but the error persists. I'm starting to think it might be related to Nginx.
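Assuming the setting being changed here is Kibana's request timeout, the Kibana-side knob lives in kibana.yml and is specified in milliseconds (default 30000):

```
# kibana.yml — how long Kibana waits on Elasticsearch responses
elasticsearch.requestTimeout: 300000
```

If this value has already been raised and the timeout still fires at 30000 ms, the cutoff is likely being enforced somewhere else in the request path, such as a reverse proxy.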
I've created a file called /etc/nginx/conf.d/proxy-settings.conf with the following contents:
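(The original file contents weren't preserved in this thread; a typical proxy-timeout configuration for an Nginx reverse proxy in front of Kibana, with illustrative values, would be:)

```
# /etc/nginx/conf.d/proxy-settings.conf
# Raise proxy timeouts so long-running Kibana queries
# aren't cut off by Nginx before Elasticsearch responds.
proxy_connect_timeout 300s;
proxy_send_timeout    300s;
proxy_read_timeout    300s;
send_timeout          300s;
```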
This seems to have fixed the issue. I'm still a bit concerned since I pushed my system's load average to 7.5 before the query executed successfully.
I guess this is a limitation that should be addressed by adding more / better hardware. I'm kind of running on fumes with 2 CPU / 8 GB RAM for the amount of data that I'm querying.