We are using Elasticsearch to store time series data with nanosecond-precision timestamps. Through the date_nanos
datatype we are able to ingest the data into Elasticsearch and plot it in Kibana. However, aggregations in Kibana are only at millisecond resolution: zooming into the data shows that everything is bucketed at millisecond granularity at best. Is there some way around this limitation? I understand that the documentation states aggregations operate at millisecond resolution, so I wanted to know whether workarounds exist that would achieve bucket sizes finer than a millisecond.
Would appreciate any help with this issue.
There is no out-of-the-box support for aggregating date_nanos at nanosecond resolution, but you can achieve it using runtime fields.
For example, you can emit longs from your date_nanos field and run a histogram aggregation over them:
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "date": {
        "type": "date_nanos"
      }
    }
  }
}
PUT my-index-000001/_bulk?refresh
{ "index" : { "_id" : "1" } }
{ "date": "2024-07-08T00:00:00.000000000Z" }
{ "index" : { "_id" : "2" } }
{ "date": "2024-07-08T00:00:00.000000001Z" }
{ "index" : { "_id" : "3" } }
{ "date": "2024-07-08T00:00:00.000000002Z" }
GET my-index-000001/_search
{
  "size": 0,
  "runtime_mappings": {
    "nanos_as_long": {
      "type": "long",
      "script": "emit(doc['date'].value.nano)"
    }
  },
  "aggs": {
    "nano_hist": {
      "histogram": {
        "field": "nanos_as_long",
        "interval": 1
      }
    }
  }
}
You will need to be careful not to generate too many buckets. Hope this helps.
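One caveat with bucketing on .nano: in Painless it returns only the nanosecond-of-second component (0 to 999,999,999), so documents from different seconds can land in the same bucket. If your data spans more than one second, you can instead emit full epoch nanoseconds by combining the epoch milliseconds with the sub-millisecond part. A sketch of that variant (the field name epoch_nanos and the 100 ns interval are just illustrative choices):

GET my-index-000001/_search
{
  "size": 0,
  "runtime_mappings": {
    "epoch_nanos": {
      "type": "long",
      "script": "emit(doc['date'].value.toInstant().toEpochMilli() * 1000000L + doc['date'].value.getNano() % 1000000)"
    }
  },
  "aggs": {
    "nano_hist": {
      "histogram": {
        "field": "epoch_nanos",
        "interval": 100
      }
    }
  }
}

Note that epoch nanoseconds in a signed long overflow around the year 2262, which is the same range limit the date_nanos type itself has.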