Below is the batch query Kibana sends when rendering a visualization for the search programname:"proxy-server" AND status_int:[200 TO 299] and request_method:GET.
The search itself works fine, even over a 7-day period, but the visualization built from it always returns an error.
{"batch":[{"request":{"params":{"index":"logstash-*","body":{"aggs":{"2":{"significant_terms":{"field":"object.keyword","size":100}}},"size":0,"fields":[{"field":"@timestamp","format":"date_time"},{"field":"rsyslog.timestamp","format":"date_time"}],"script_fields":{},"stored_fields":["*"],"_source":{"excludes":[]},"query":{"bool":{"must":[{"query_string":{"query":"programname:\"proxy-server\" AND status_int:[200 TO 299] and request_method:GET","analyze_wildcard":true,"time_zone":"UTC"}}],"filter":[{"match_all":{}},{"range":{"rsyslog.timestamp":{"gte":"2021-04-20T09:01:03.889Z","lte":"2021-04-20T10:01:03.889Z","format":"strict_date_optional_time"}}}],"should":[],"must_not":[]}}},"preference":1618905798661}},"options":{}}]
Here are the details of the Elasticsearch setup:
- The ELK stack is deployed via a Helm chart on Kubernetes (K8s).
- All nodes (pods) have the dlmr roles.
- Disk I/O looks fine; there are more than 80 NVMe drives.
- I'm trying to find the bottleneck, but no luck so far. It looks like a performance issue to me, but I'm not sure whether it comes from the data node role. If so, how can I prove that? (See the sketch after this list for what I'm checking at the moment.)
- Are there any useful logs for this situation?
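For reference, this is roughly what I'm checking while the visualization request is in flight. It's only a sketch: the localhost:9200 endpoint is an assumption, and the slow-log thresholds are example values, not settings currently applied on my cluster.

```
# Look for busy/queued search threads, hot threads, and long-running search
# tasks on the data nodes while the visualization request is running.
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,node.role,heap.percent,cpu,load_1m'
curl -s 'http://localhost:9200/_cat/thread_pool/search?v&h=node_name,name,active,queue,rejected'
curl -s 'http://localhost:9200/_nodes/hot_threads?threads=5'
curl -s 'http://localhost:9200/_tasks?actions=*search*&detailed=true&pretty'

# Enable the search slow log on the target indices so slow query/fetch phases
# show up in the Elasticsearch logs (thresholds here are just example values).
curl -s -X PUT -H 'Content-Type: application/json' \
  'http://localhost:9200/logstash-*/_settings' -d '
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.fetch.warn": "1s"
}'
```

The idea is that saturated search thread pools, hot threads, or slow-log entries on particular nodes would point at the data-node side, but please let me know if there is a better way to narrow this down.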
Thanks // Hugo