Just throwing this out there @siiman and @Ameer_Mukadam: here is at least one reason why tweaking the scheduling can make things start working when before they looked like they didn't. This might or might not explain what you're seeing, but it's good to know for the future.
When you create a rule, set its schedule, and run it, the rule by default queries against `@timestamp` on each scheduled run to detect things. If your agents (Beats, Endgame, etc.) are sending events where `@timestamp` is off because the computer sending the message has clock skew, you're going to see this type of behavior where events are sometimes not detected between rule runs.
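To make that concrete, here's roughly the kind of windowed query a rule run performs (a sketch only: the `logs-*` index pattern and the 5 minute interval are made-up values for illustration):

```
POST logs-*/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gt": "now-5m",
        "lte": "now"
      }
    }
  }
}
```

If a source host's clock is a few minutes behind, its events carry `@timestamp` values that fall before the window's lower bound by the time they arrive, so the run that should have caught them never sees them.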
The additional look-back time is there to account for some clock skew: it intentionally creates a small overlap between runs, with signal de-duplication handling anything detected twice in the overlap. We have seen people increase this value and then start seeing signals that previously weren't showing up.
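Continuing the made-up numbers above, a 1 minute additional look-back simply widens each run's lower bound, so consecutive runs overlap and late-stamped events get a second chance to be seen:

```
POST logs-*/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gt": "now-6m",
        "lte": "now"
      }
    }
  }
}
```

Anything matched by both overlapping runs is de-duplicated, so you don't end up with duplicate signals.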
The worst offenders for clock skew are systems that run log collectors/log proxies: the proxy receives a batch of logs whose timestamps are already set to a past time, then forwards them to Elasticsearch in bursts or on a periodic schedule, which deepens the skew between `@timestamp` and when the events actually arrive.
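As a hypothetical example with made-up timestamps: say a proxy holds this event in a batch and doesn't flush it to Elasticsearch until 10:00:00Z:

```
{
  "@timestamp": "2021-03-01T08:30:00.000Z",
  "message": "event stamped at the source, delayed in a proxy batch"
}
```

A rule running at 10:00 with a 5 minute interval (plus a small look-back) queries `@timestamp` from roughly 09:54 onward, so it never matches this document even though it only just arrived in the index.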
If this is the case, a feature we have is the timestamp override: you choose a field that you set up server side in a pipeline processor[1], which adds a date-time field recording when the event arrived in Elasticsearch, such as `event.ingested`.
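A minimal sketch of such a pipeline (the pipeline name here is made up; the `set` processor and the `{{_ingest.timestamp}}` metadata field are standard ingest features):

```
PUT _ingest/pipeline/add_event_ingested
{
  "description": "Stamp each event with the time it arrived in Elasticsearch",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
```

You can then attach the pipeline to your indices (for example via the `index.default_pipeline` index setting) and pick `event.ingested` as the timestamp override in the rule's settings, so the rule windows on arrival time rather than the possibly skewed source time.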
Refs:
[1] https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html