Not a big deal, just an observation
I have a 7.3 stack on a Windows laptop and another 7.3 stack on Ubuntu
Both stacks are ingesting data from a common drive share, so it's the same data
Both are basically configured the same (logstash filters, etc..)
I design dashboards on the laptop and export them to Ubuntu for people to use.
I am using a TSVB visualization and in the panel options I have "Drop last bucket" set to Yes
On Windows the last data point sits a full hour behind the current hour, e.g. at 16:30 the last data point is 15:00:00.000. On Ubuntu the same graph has its last data point at 16:00:00.000
Both are advertising 'per 60 min' on the graphs
Did I miss some configuration on one of the platforms that causes this, or do they just work slightly differently? Like I said, it's not a big deal, just an observation.
The difference in results should be explainable by looking at the queries the visualizations send out. Many visualizations have an Inspect feature that lets you see the search query that was generated and sent to Elasticsearch. For bucketing and partial-bucket dropping, it's not totally surprising that different client machines would show different results.
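For reference, the query Inspect shows for an hourly TSVB chart looks roughly like this (a hedged sketch, not the exact query your panel generates — field names like `@timestamp` and the 24h window are assumptions):

```json
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-24h", "lte": "now" }
    }
  },
  "aggs": {
    "per_hour": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "60m"
      }
    }
  }
}
```

The important part is that `now` is resolved from a clock, so two machines inspecting "the same" panel can end up with slightly different from/to bounds.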
- The way Elasticsearch groups metrics into time buckets depends heavily on the time filter that is in the query
- Differences in the client machines' clocks will cause slight differences in the from/to times, which can change the set of buckets in the result
- Differences in the client machines' clocks can also mean that one client detects the last result bucket as partial data (the reason for wanting to drop the last bucket), while a different client does not detect it as partial.
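The last two points can be sketched in a few lines. This is a hedged illustration of the general idea, not Kibana's actual implementation: a bucket whose end lies in the future is "partial", and "Drop last bucket" removes it, so the newest point you see depends on what the client thinks `now` is.

```python
from datetime import datetime, timedelta

BUCKET = timedelta(hours=1)  # matches the 'per 60 min' interval on the graphs

def floor_to_bucket(ts: datetime) -> datetime:
    # Align a timestamp to the start of its hourly bucket
    return ts.replace(minute=0, second=0, microsecond=0)

def last_displayed_bucket(now: datetime, drop_last: bool = True) -> datetime:
    """Start time of the newest bucket a chart would display.

    The newest bucket covers [floor(now), floor(now) + 1h); it is
    partial whenever its end is still in the future, so the
    drop-last-bucket option removes it from the chart.
    """
    newest = floor_to_bucket(now)
    if drop_last and newest + BUCKET > now:  # bucket still filling -> drop it
        return newest - BUCKET
    return newest

# A client whose clock reads 16:30 drops the partial 16:00 bucket,
# so the last visible point is 15:00 ...
print(last_displayed_bucket(datetime(2019, 9, 2, 16, 30)))  # 15:00
# ... while a client at exactly 17:00 sees 16:00 as a complete bucket.
print(last_displayed_bucket(datetime(2019, 9, 2, 17, 0)))   # 16:00
```

So a small clock difference (or a request landing just before vs. just after the top of the hour) is enough to shift the last point by a whole bucket, which matches the 15:00 vs. 16:00 observation.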
Elasticsearch should return the same results on the two machines only if you can assume both machines craft the same search query with the same time-range filters.