Force range query to use epoch_second instead of epoch_millis?


(Sjaak) #1

Hi,

I noticed epoch_second can be as much as 200x faster than epoch_millis in Kibana. As millisecond precision is not important to me, I want all visualizations to use epoch_second instead of epoch_millis, but I can't find any way to get this done.

I already created a new index and used this as the mapping for my @datetime field. The original value of that field is in yyyy-MM-ddTHH:mm:ss format.

"properties": {
  "@datetime": {
    "type": "date",
    "format": "yyyy-MM-dd'T'HH:mm:ss||epoch_second"
  }
}
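
For completeness, the full call I used to create the index was along these lines (index name is just an example; depending on your Elasticsearch version you may need a mapping type level in between):

```
PUT my-index
{
  "mappings": {
    "properties": {
      "@datetime": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss||epoch_second"
      }
    }
  }
}
```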

However, when I create my index, Elastic/Kibana seems to add .000 and the query still shows epoch_millis when looking at the visualization request.

  "range": {
    "@datetime": {
      "gte": 1534396763826,
      "lte": 1542172763826,
      "format": "epoch_millis"
    }
  }

How can I force Elastic to use epoch_second? If possible, is there any way to apply this to all existing indices as well? Since I can change the query in the profiler, it appears this is done at the query level and doesn't really depend on what the actual data in the index looks like.
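
For reference, what I'd want Kibana to generate is something like this (hand-written sketch, with the values converted to seconds):

```
"range": {
  "@datetime": {
    "gte": 1534396763,
    "lte": 1542172763,
    "format": "epoch_second"
  }
}
```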


(Sjaak) #2

Added some ruby code to my logstash to convert @timestamp to epoch in seconds.
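
Roughly like this (a sketch; my actual filter has a bit more going on, and @epoc is just the field name I picked):

```
filter {
  ruby {
    code => "event.set('@epoc', event.get('@timestamp').to_i)"
  }
}
```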

Template

"@epoc": {
  "type": "date",
  "format": "epoch_second"
}

and STILL the range query in Kibana is showing as epoch_millis. Elastic is just ignoring whatever I set.


(Sjaak) #3

Right, so it appears changing the query to epoch_second actually made it fail, hence the very quick query times.
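
Presumably it failed because the gte/lte values were still in milliseconds; interpreted as epoch_second they'd parse as dates tens of thousands of years in the future, so the range matched nothing and returned instantly. The values would need dividing by 1000 first, e.g. in ruby (same language as my logstash filter):

```ruby
millis_gte = 1534396763826       # value Kibana generated (milliseconds)
seconds_gte = millis_gte / 1000  # integer division drops the millisecond part
puts seconds_gte                 # 1534396763
```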

But reading this I still think going with seconds might be faster as I have a lot of very small documents.

But for the life of me I can't get Elastic to not convert everything to epoch_millis...
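
For anyone who wants to reproduce, this is the kind of hand-edited request I'm testing in the profiler (index name is just an example):

```
GET my-index/_search
{
  "query": {
    "range": {
      "@epoc": {
        "gte": 1534396763,
        "lte": 1542172763,
        "format": "epoch_second"
      }
    }
  }
}
```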