Kibana Metric Aggregations showing different values

Hi,
I am applying a min metric aggregation to a numeric field that holds a Unix timestamp, and it is showing a different value from what it should based on the data. I have also tried querying Elasticsearch directly.
Here is the query:

curl -XGET 'ip:9200/filebeat-*/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "group_by_service-status": {
      "terms": {
        "field": "service-status.keyword"
      },
      "aggs": {
        "min_unixTime": {
          "min": {
            "field": "unixTime"
          }
        }
      }
    }
  }
}
'

Here is the response:

{
  "took" : 38,
  "timed_out" : false,
  "_shards" : {
    "total" : 15,
    "successful" : 15,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 7781,
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "group_by_service-status" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "DIFF",
          "doc_count" : 1246,
          "min_unixTime" : {
            "value" : 1.0013580322265625E-4
          }
        },
        {
          "key" : "START",
          "doc_count" : 1142,
          "min_unixTime" : {
            "value" : 1.522755712E9
          }
        },
        {
          "key" : "END",
          "doc_count" : 936,
          "min_unixTime" : {
            "value" : 1.522755712E9
          }
        }
      ]
    }
  }
}

I have attached a screenshot of my Kibana console, which shows what the min value should be.

I don't exactly understand which value you would expect. It sounds like you consider the Kibana value the "correct" one and think Elasticsearch is returning wrong data, but I highly doubt that. Could you do a metric aggregation in Kibana with the same terms aggregation split (on the service-status.keyword field) and the same min aggregation, and show the output of that? I don't see where the table you posted is actually coming from.

Thanks
Rashmi

Hi,

Thanks for responding. Here is the aggregation applied in Kibana.
[screenshot: kibana-error]

The value shown is 1,522,755,712, whereas the value I am expecting is 1,522,755,689, as shown in the screenshot in the previous post above.

Also, as you can see from the Elasticsearch query response, the value for START is 1,522,755,712, which matches the Kibana aggregation, but that value is wrong.
"key" : "START",
"doc_count" : 1142,
"min_unixTime" : {
"value" : 1.522755712E9

@timroes - can you please help here with this aggregation question?

Many thanks
Rashmi

Hi Rashmi, a little update: the aggregation is returning a value that does not exist.
In the Kibana screenshot above the value is 1,522,755,712, but there is no such value in any of the documents in the index. I also confirmed the same with a smaller data set, where the metric aggregation returned a value that did not exist in the documents.

One more observation: I am not getting correct metric aggregations on Unix timestamp values (which I have indexed as float), but when trying with smaller values like 1000, I get correct metric aggregations. I tried the different supported data types, i.e. integer, float, and long, for the Unix timestamp, but I am still not getting correct output from the metric aggregation.

Hi Askhay,

the actual issue could be the datatype here. You should rather index your timestamps as long or integer values. The way floats (and IEEE 754 floating-point values in general) work, they only store a limited precision. Your numbers (timestamps in general) seem to be large enough that the last digits get cut off because there is not enough precision.

That would also explain the mismatch you are seeing. Aggregations work on the indexed data, so you get whatever value was indexed (which may have lost precision), whereas Discover (or the _source of any document returned by a regular search) will show you the original values that were stored, not the indexed ones. So you would still see the correct values in Discover, but the indexed values (and thus aggregations) have lost precision due to the floating-point type. By the way, that mismatch would also affect search queries, so e.g. range queries on that field would behave in the same weird way.
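To illustrate with your numbers (this is a rough sketch of 32-bit float rounding, not an exact trace of what happens inside Lucene): the float type is a single-precision IEEE 754 value with a 24-bit significand, so for values between 2^30 and 2^31 the representable numbers are 128 apart. 1,522,755,689 = 11,896,528 × 128 + 105, and since 105 is more than half of 128 it rounds up to 11,896,529 × 128 = 1,522,755,712 - which is exactly the value your min aggregation returns.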

Could you please reindex your data using long instead of float? In general you should never use a floating-point type if you know you won't have decimal values, because you can always run into this precision issue (and, in coding in general, into some nasty rounding issues, etc.) :slight_smile:
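In case it helps, here is a minimal sketch of what that could look like (the target index name filebeat-reindexed and the mapping type doc are just placeholders - adjust them to your setup and Elasticsearch version, and to however many source indices you have):

curl -XPUT 'ip:9200/filebeat-reindexed' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "doc": {
      "properties": {
        "unixTime": { "type": "long" }
      }
    }
  }
}
'

curl -XPOST 'ip:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": { "index": "filebeat-*" },
  "dest": { "index": "filebeat-reindexed" }
}
'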

That would also explain why you get correct results with smaller numbers. Since you said you already tried indexing with long: are you sure that indexing process went correctly? Could you paste the mapping for that index with long, and also the request/response for an aggregation on it?
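You can check what the mapping actually looks like with something like:

curl -XGET 'ip:9200/filebeat-*/_mapping?pretty'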

Cheers,
Tim

Hi Tim,
I reindexed the data with integer type and the issue was solved.
Could you please help me with this question?

Thanks
Akshay

Glad it solved your issue. Sorry btw for the typo I made in your name in my last response :frowning:

I will give your other question a look.

Cheers,
Tim

Hey Tim,

Thanks for the solution!

Thanks,
Akshay
