Aggregation shows 0 instead of actual value

Hey,

I'm having a mysterious issue with my Kibana visualizations. For some reason the CPU and memory graphs flatline to 0 after a specific date, even though the data is there: searching in Discover shows non-zero values for both metrics. The data is sent by Metricbeat, which creates one index per day. The memory graph flatlined for one day and then came back; the CPU graph has been at 0 for the last couple of days.

These are the graphs:

[screenshot: CPU graph]

[screenshot: Memory graph]

CPU data in Discover
[screenshot: Discover showing non-zero CPU values]

Query generated by the Kibana Visualization
{"index":["metrics-*"],"ignore_unavailable":true,"preference":1536218298985} {"size":0,"_source":{"excludes":[]},"aggs":{"2":{"date_histogram":{"field":"@timestamp","interval":"3h","time_zone":"Europe/Berlin","min_doc_count":1},"aggs":{"3":{"terms":{"field":"beat.hostname.keyword","size":5,"order":{"1":"desc"}},"aggs":{"1":{"max":{"field":"system.process.cpu.total.norm.pct"}}}}}}},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["@timestamp","system.process.cpu.start_time"],"query":{"bool":{"must":[{"query_string":{"query":"metricset.name:process AND system.process.cmdline:\"/usr/bin/dotnet /opt/upc/bin/Mg.Public.RequestResponse.dll\"","analyze_wildcard":true,"default_field":"*"}},{"range":{"@timestamp":{"gte":1535787966153,"lte":1536219966153,"format":"epoch_millis"}}}],"filter":[],"should":[],"must_not":[]}}}

Query response (truncated)
{ "key_as_string": "2018-09-03T21:00:00.000+02:00", "key": 1536001200000, "doc_count": 2160, "3": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ { "key": "ip-10-153-99-177.ec2.internal", "doc_count": 1080, "1": { "value": 0.0024999999441206455 } }, { "key": "ip-10-153-99-101.ec2.internal", "doc_count": 1080, "1": { "value": 0.0020000000949949026 } } ] } }, { "key_as_string": "2018-09-04T00:00:00.000+02:00", "key": 1536012000000, "doc_count": 2160, "3": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ { "key": "ip-10-153-99-177.ec2.internal", "doc_count": 1080, "1": { "value": 0.0024999999441206455 } }, { "key": "ip-10-153-99-101.ec2.internal", "doc_count": 1080, "1": { "value": 0.0020000000949949026 } } ] } }, { "key_as_string": "2018-09-04T03:00:00.000+02:00", "key": 1536022800000, "doc_count": 2160, "3": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ { "key": "ip-10-153-99-101.ec2.internal", "doc_count": 1080, "1": { "value": 0.0 } }, { "key": "ip-10-153-99-177.ec2.internal", "doc_count": 1080, "1": { "value": 0.0 } } ] } }, { "key_as_string": "2018-09-04T06:00:00.000+02:00", "key": 1536033600000, "doc_count": 2160, "3": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ { "key": "ip-10-153-99-101.ec2.internal", "doc_count": 1080, "1": { "value": 0.0 } }, { "key": "ip-10-153-99-177.ec2.internal", "doc_count": 1080, "1": { "value": 0.0 } } ] } },

Cluster status is green, and all indices report as green. It looks as if the aggregation fails on one or more of the indices, but I get no error details. Where do I start looking for the error? What could be the issue?
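
One thing I was planning to check is whether the metric field is mapped the same way in every daily index, since each day gets its own index. Something like the request below (field name taken from the query above, host and port from my local setup) should show the mapping per index, but I'm not sure whether that's the right direction:

# Compare how the CPU metric field is mapped in each daily index
curl -XGET 'localhost:9200/metrics-*/_mapping/field/system.process.cpu.total.norm.pct?pretty'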

Best Regards,
Mikael Selander

This issue was caused by not having proper index templates set up. Without a template, Elasticsearch applies a best-guess (dynamic) mapping to each newly created daily index; for some indices it evidently guessed an integer type for the metric fields, which truncates fractional values like 0.002 to 0 in aggregations, while Discover still shows the original values from _source.
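
For anyone hitting the same symptom, here is a minimal sketch of the kind of template that prevents the bad guess, assuming Elasticsearch 6.x, the metrics-* index pattern from the query above, and the default Beats mapping type name "doc" (the template name and the exact field list are illustrative, not the template that was actually deployed):

# Sketch only: explicitly map the CPU percentage field as float so dynamic mapping
# cannot pick an integer type when a new daily index is created. The memory
# percentage fields would be mapped the same way.
curl -XPUT 'localhost:9200/_template/metrics-float-fields' -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["metrics-*"],
  "mappings": {
    "doc": {
      "properties": {
        "system": { "properties": {
          "process": { "properties": {
            "cpu": { "properties": {
              "total": { "properties": {
                "norm": { "properties": {
                  "pct": { "type": "float" }
                } }
              } }
            } }
          } }
        } }
      }
    }
  }
}'

Note that a template only applies to indices created after it is installed, so daily indices that were already mis-mapped still need to be reindexed or left to age out. If possible, letting Metricbeat load its own full template is the more complete fix; since these indices are named metrics-* rather than the default metricbeat-* pattern, that would also mean adjusting setup.template.name and setup.template.pattern in the Metricbeat configuration.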
