Date range query returns no hits

I am trying to get all the hits from a specific index over the past 24 hours. I entered the following query into the Kibana dev console, but no hits are being returned.

GET /my-index/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-1d/d"
      }
    }
  }
}
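(For what it's worth, I don't think the now-1d/d date math itself is the problem; as far as I understand it, it rounds down to the start of yesterday, so it should match even more documents than a strict rolling 24-hour window like the one below, and I'd expect either form to return today's hits.)

GET /my-index/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-24h"
      }
    }
  }
}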

I know the index is healthy, because its "Docs count" in the Index Management tab is increasing. Other GET requests that don't specify a date range do return hits, but I suspect those are older documents. This date range issue only started a few days ago; the index was behaving completely normally before then. I'd also note that no recent (past 48 hours) hits are showing up for the index pattern in the Kibana Discover tab either. Does anyone know how I can fix this issue? I would appreciate any and all help.

For troubleshooting purposes, I would run the query below. It returns the latest record's timestamp, so you can check whether it falls within your range.

GET /my-index/_search
{
  "size": 1,
  "query": {
    "match_all": {}
  },
  "sort": [
    {
      "@timestamp": {
        "order": "desc"
      }
    }
  ],
  "_source": ["@timestamp"]
}
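You could also run a quick count against the same range (assuming the field really is @timestamp in your mapping), just to confirm the window genuinely contains zero documents:

GET /my-index/_count
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-1d/d"
      }
    }
  }
}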

Hi Aaron,

Hmm, I ran the query and the latest record's timestamp is 2021-12-27T14:14:17.292Z (UTC, I believe, given the Z suffix). So I guess my index isn't actually getting any more hits, which is strange, because the index health shows as "green" and the "Docs count" and "Storage size" are increasing at a steady pace, which I took to mean that records are still being added to the index. Would anyone happen to know the reason behind this disparity?

Thanks

Guess it's possible that the index is receiving new records that don't have an @timestamp field for whatever reason.

I would just check the source of whatever is feeding you the data. If it's Logstash or Beats, then check those logs to see if there are any warnings.

I've taken a look at our Logstash logs, and I'm seeing a bunch of these warnings for my index:
elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"my-index", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0xcfb514b>], :response=>{"index"=>{"_index"=>"my-index", "_type"=>"_doc", "_id"=>"6s4GB34B21stRv-of0ss", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Limit of total fields [1000] has been exceeded"}}}}

So it appears that indexing the record that triggered this warning would push the index past the limit of 1000 total fields. My index should really only have ~300 fields, so I don't know where these extra fields are coming from. Is it a known problem for random fields to get added to an index like this?
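From a bit of searching, I believe the limit in that error corresponds to the index.mapping.total_fields.limit index setting, so I assume it could be raised as a stopgap with something like the request below, though that wouldn't explain where the extra fields are coming from.

PUT /my-index/_settings
{
  "index.mapping.total_fields.limit": 2000
}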

To see how many fields my index has, I ran GET /my-index/_mapping and did a Ctrl-F for the word "type", which returned 984 results. There seems to have been a dynamic field mapping blowup for whatever reason. I have now turned off dynamic mapping for my index, but my date range queries still aren't returning new records. Should I have done something else after turning off dynamic mapping?
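For reference, this is roughly the request I used to turn off dynamic mapping on the existing index:

PUT /my-index/_mapping
{
  "dynamic": false
}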
