KQL to filter for documents where a field exists and its value is less than a fixed number?

In Kibana 7.7.1, is it possible to specify a KQL filter that selects only the documents where both of the following conditions are true:

  • Field XYZ exists
  • The value of field XYZ (the same field) is less than some fixed numeric value (say, 14400)

The following KQL filter:

CPCHRMSU:* and CPCHRMSU<14400

seems not to work. What am I missing? Do I somehow specify the field name only once in the filter, with those two conditions?

Hi @GrahamHannington

That syntax looks correct and should give you the results you want, so I'm not sure where the problem might be.

Does each of those clauses work as expected by itself? (I mean, does CPCHRMSU:* only give you docs where the field has a value, and CPCHRMSU < 14400 only give you docs below that value?)

Yes.

I should offer some more context here, because I suspect it might be significant, and I think it's me that's missing something: I'm looking at TSVB gauge visualizations developed by a colleague. I want to omit the gauges displayed with the values 0 and 14400. Currently, I'm applying a filter on the Data > Options tab rather than the Panel filter under Panel options.

This by itself:

CPCHRMSU<14400

works as expected and desired: the gauges with the value 14400 are no longer displayed. However, some of the gauges show the value 0 (zero). I can find no documents that contain the CPCHRMSU field with the value 0. I can find documents where the CPCHRMSU field does not exist.

This by itself:

CPCHRMSU:*

works as expected and desired: gauges with the value 0 are no longer displayed.

However, the combination that I originally cited:

CPCHRMSU:* and CPCHRMSU<14400

omits gauges with the value 14400, but displays gauges with the value 0. Using CPCHRMSU>0 and CPCHRMSU<14400 (that is, >0 instead of :*) displays identical results: the unwanted 0-value gauges remain.

Some more context: the viz uses the Top Hit aggregation (Size: 1, Aggregate with: Max). I've read the doc topic for top_hits, including the tip "We do not recommend using top_hits as a top-level aggregation". I didn't develop this viz: I'm posting to this topic primarily to seek advice and educate myself so that I can talk to the developer about it. I wonder whether the Top Hit aggregation is somehow responsible for what I've interpreted here so far as a problem with the filter. But my thinking is not clear enough on this point to understand what's going on.

The following Lucene (as opposed to KQL) filter:

_exists_:CPCHRMSU AND CPCHRMSU:<14400

still displays 0-value gauges :frowning_face:.

I understand that I probably need to start performing some searches directly myself; say, using curl, so I can better understand what's really going on here in terms of the search and the results returned.
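
For example, my (possibly naive) assumption is that the KQL filter should translate to a bool query combining an exists clause and a range clause; something like this (the index name my-index is just a placeholder):

curl -s -H 'Content-Type: application/json' 'localhost:9200/my-index/_search?pretty' -d '
{
  "query": {
    "bool": {
      "filter": [
        { "exists": { "field": "CPCHRMSU" } },
        { "range": { "CPCHRMSU": { "lt": 14400 } } }
      ]
    }
  }
}'

If that query returns only the documents I expect, then presumably the problem lies somewhere between the filter and the TSVB viz, rather than in the data.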

Is it possible that the display value of your field is not the same as the stored value? For example, if the display value is a duration, then there is a difference between the stored value in nanoseconds vs the display value in milliseconds.

I appreciate the thought, but both values are in seconds.

Displayed value in the gauge viz (with a trailing s for seconds, as specified by the Template field value {{value}}s):

14,400s

Stored value (in _source):

"CPCHRMSU": 14400

The corresponding index pattern identifies this field as type number.
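
A quick way to double-check the stored mapping, I think, is the field mapping API (the index name here is a placeholder):

GET /my-index/_mapping/field/CPCHRMSU

That should report the field's data type (for example, long or integer), independently of how the index pattern or the viz formats the value.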

Trying to understand this (I'm still familiarizing myself with both the underlying data and the already-developed viz) is starting to do my head in.

I've just noticed that the "Data timerange mode" is set to "Last value".

More disclosure: the metric for the gauges specifies a Group by Terms, By a (text) field (let's call it xyzName). For some xyzName field values, there are documents with CPCHRMSU field values in other time intervals in the time range, but there is no document for that value in the last time interval.

I could still be wrong about this, but such term values seem to be the ones that are displayed with bogus 0-value gauges. I have no idea what to do about this.

You probably want to change the data timerange mode. I recently added a recommendation to the tsvb docs that identifies this as a common source of confusion. If you see data in the Time Series chart but not the Gauge, then that's the cause.

I really appreciate you sticking with me here.

No, that's not the case. Or at least, not quite.

With the following data filter:

CPCHRMSU:* and CPCHRMSU < 14400

in the TSVB gauges, I'm seeing these figures:

xyzName    Gauge value
ss27                 0
ss25                 0
ss23                 0
ss22             8,993
ss21             8,583

Interestingly, by contrast, the legend of the TSVB Time Series viz shows no value (a blank) for ss27, ss25, and ss23, not zero.

To recap, and also to add further detail:

  • Aggregation: Max (was: Top Hit, but I wanted to rule out possible issues with that agg)
  • Data timerange mode: Last value
  • Interval: 100s
  • Drop last bucket? Yes

The time range ends at 2021-01-16T09:55:00.000Z

Based on the "last value" timerange mode, an interval of 100s, and dropping the last bucket, the time value "in play" here is 2021-01-16T09:53:20.000Z (end of time range, minus 100s: that is, 1 minute 40 seconds).

I've done some more research using Discover in Kibana, outside of the TSVB viz editor.

Using the following filters:

  • xyzName is one of: ss27, ss25, ss23, ss22, ss21
  • utcTimestamp (the time filter field for this index pattern) is: 2021-01-16T09:53:20.000Z

I get the following results in Discover:

xyzName    CPCHRMSU
ss27         14,400
ss25         14,400
ss23         14,400
ss22          8,993
ss21          8,583

If I add the filter:

  • CPCHRMSU is not 14,400

I get the results:

xyzName    CPCHRMSU
ss22          8,993
ss21          8,583

which is what I'd expected.
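
(For reference, my understanding is that those Discover filters boil down to a query along these lines; this is my own sketch, and xyzName might need to be the keyword sub-field, depending on the mapping:

{
  "query": {
    "bool": {
      "filter": [
        { "terms": { "xyzName": ["ss27", "ss25", "ss23", "ss22", "ss21"] } },
        { "term": { "utcTimestamp": "2021-01-16T09:53:20.000Z" } }
      ],
      "must_not": [
        { "term": { "CPCHRMSU": 14400 } }
      ]
    }
  }
}

I mention this only to show that, at the document level, the filtering behaves exactly as I'd expect.)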

What I don't understand is why, with a data filter that includes CPCHRMSU < 14400, ss27, ss25, and ss23 would appear at all in the TSVB visualizations; especially as 0-value gauges.

While this is starting to smell like a TSVB-related bug to me, I acknowledge that smell might just be the egg on my face that I don't realize is there.

What am I missing?

I'll admit, I'm in way over my head. Having said that, based on what I've seen in the HTTP API responses for the TSVB (in my browser's Developer Tools), there's a bug here.

Without the data filter < 14400, the data response for label ss27 includes the following penultimate item (corresponding, I think, to the time value 2021-01-16T09:53:20.000Z):

[1610790700000, 14400]

With the data filter < 14400:

[1610790700000, null]

I should add here that, with the data filter < 14400, the response contains label values only for those 5 values of xyzName that I've cited. Without the data filter < 14400, I get a heap more.

Based on a closer inspection of the data response, I think I can see what's going on here.

For those additional xyzName values that appear in the response without the data filter < 14400, every item in the response data array has the (CPCHRMSU) value 14400. With the data filter < 14400 applied, those additional xyzName values do not appear in the response at all (no matching "label" value for them).

However, ss23, ss25, and ss27 have various values of CPCHRMSU. With the data filter < 14400, results for those xyzName label values are returned; furthermore, for each data item where the CPCHRMSU value is 14400, rather than returning the value 14400, the response contains the value null (as shown above).

I'm not saying that the null value is itself a bug. The bug is what the TSVB visualization does with that value. In particular, the way that the TSVB gauge viz displays that null value as a 0-value gauge. The correct behavior is for there to be no gauge for that label.

To summarize: I think I've identified a bug. Am I correct?

For the sake of clarity—while hardly believing my own temerity :slightly_smiling_face:, having begun this topic with a question—I make the following statement:

If you use the following combination of Kibana features:

  • TSVB
  • Data filter
  • Data timerange mode: Last value

then you should treat with suspicion any zero (0) values in the resulting visualizations.

Why? Because those displayed zero values might be caused by null values in the HTTP API response data, where null represents a field value that has been excluded by the data filter.

(I'm not sure the aggregation is significant here: I've used the Top Hit and Max aggregations, with similar results.)

There's a lot to respond to here, and I may have missed something in all the text. It is totally possible that you are running into a bug, but I am not convinced that you are seeing a bug based on what you've shared.

Elasticsearch in general does some unintuitive things with dates, and TSVB is also a bit unintuitive, so I will try to explain what I think could be happening.

  1. Filters are applied to documents in Elasticsearch, not aggregated values. You said you have "documents with various values" and it sounds like your query is matching those values.

  2. The Terms aggregation is the top-level aggregation in the query that TSVB runs, so it matches any value in your time range, including values before and after the current window. This is why you might see null values in the specific range you're looking at.

  3. Your time interval is set to 100 seconds, which is rounded down per the ES docs. This means that your attempt to create an equivalent query in Discover might not have been exactly correct.

  4. You've chosen to "drop last bucket", which means that the ending time range is not going to be included in the query. This is most useful when dealing with ranges like "last 7 days" where you don't have complete data for the most recent bucket, but you probably don't need this.

In conclusion, I think your configuration is the problem, not TSVB. I think that you should:

a. Change the data timerange mode to "entire timerange"
b. Stop dropping the last bucket

Yes, understood.

I debated whether or not to post that report of my observations before putting together a test case, but, at the risk of too much info, I thought it was worthwhile documenting the evolution of this topic from question to bug description; and then, possibly, to smackdown by someone on the Elastic dev team :slightly_smiling_face:.

Here's a small test case.

Data:

{"timestamp":"2021-02-02T09:55:00.000Z","name":"Alpha","field1":14400}
{"timestamp":"2021-02-02T09:53:20.000Z","name":"Alpha","field1":14400}
{"timestamp":"2021-02-02T09:51:40.000Z","name":"Alpha","field1":14400}
{"timestamp":"2021-02-02T09:55:00.000Z","name":"Bravo","field1":100}
{"timestamp":"2021-02-02T09:53:20.000Z","name":"Bravo","field1":14400}
{"timestamp":"2021-02-02T09:51:40.000Z","name":"Bravo","field1":300}
{"timestamp":"2021-02-02T09:55:00.000Z","name":"Charlie","field1":100}
{"timestamp":"2021-02-02T09:53:20.000Z","name":"Charlie","field1":200}
{"timestamp":"2021-02-02T09:51:40.000Z","name":"Charlie","field1":300}

About the data:

  • 9 documents spanning the time range 2021-02-02T09:51:40.000Z to 2021-02-02T09:55:00.000Z
  • 3 timestamp values, 100 seconds apart
  • 3 values of the name field: Alpha, Bravo, Charlie
  • 1 field, field1, to visualize
  • All 3 documents for Alpha have the field1 value 14400
  • The penultimate document for Bravo has the field1 value 14400
  • The penultimate document for Charlie has the field1 value 200

Discover in Kibana confirms that these documents have been successfully ingested. Discover with that time range shows all 9 documents.
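
(In case anyone wants to reproduce this: the documents can be loaded with the bulk API, along the following lines. This is just my sketch; the index name tsvb-test is arbitrary, and it relies on dynamic mapping to detect timestamp as a date.)

curl -s -H 'Content-Type: application/x-ndjson' 'localhost:9200/_bulk?pretty' --data-binary '
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:55:00.000Z","name":"Alpha","field1":14400}
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:53:20.000Z","name":"Alpha","field1":14400}
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:51:40.000Z","name":"Alpha","field1":14400}
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:55:00.000Z","name":"Bravo","field1":100}
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:53:20.000Z","name":"Bravo","field1":14400}
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:51:40.000Z","name":"Bravo","field1":300}
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:55:00.000Z","name":"Charlie","field1":100}
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:53:20.000Z","name":"Charlie","field1":200}
{"index":{"_index":"tsvb-test"}}
{"timestamp":"2021-02-02T09:51:40.000Z","name":"Charlie","field1":300}
'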

TSVB viz details:

  • Aggregation: Max
  • Group by: Terms, By: name
  • Data timerange mode: Last value
  • Interval: 100s
  • Drop last bucket? Yes

Gauges displayed with no data filter:

Alpha 14,400
Bravo 14,400
Charlie 200

Gauges displayed with data filter field1 < 14400:

Bravo 0 :warning: this is the bug
Charlie 200

API response data, with data filter field1 < 14400, for Bravo:

[1612259500000, 300],
[1612259600000, null]

for Charlie:

[1612259500000, 300],
[1612259600000, 200]

where:

  • Unix timestamp value 1612259500000 corresponds to 2021-02-02T09:51:40.000Z
  • Unix timestamp value 1612259600000 corresponds to 2021-02-02T09:53:20.000Z

The absence of the third (latest) timestamp matches the "Drop last bucket" option.

The TSVB gauge displays the null value for Bravo as the value 0 (zero).

This is a bug.

With that data filter, there should be no gauge for Bravo.

Please let me know if you agree or disagree. If you disagree, I'd be grateful to know why.

No, I'd rather not change the data timerange mode.

While I don't understand why the original developer of this viz chose to drop the last bucket (I'll ask), I do understand why they chose "Last value". The value of the field being visualized changes over time based on a rolling 4-hour period. For a given time range, the most recent value of that field is of particular interest. An aggregation over the entire time range might be of some interest, but, in this case, users are most likely to be interested in the last value.

This is primarily caused by the order of aggregations. The query TSVB runs is basically this:

must: [{
  range: {
    @timestamp: { gte, lt }
  }
}, {
  range: {
    field1: { lt: 14400 }
  }
}],
aggs: {
  group_by: {
    terms: { field: "name", size: 5 },
    aggs: {
      dates: {
        date_histogram: { field: "@timestamp",  fixed_interval: "100s" }
      }
    }
  }
}

To put this in plain language, what this query does is:

  1. Matches documents in Elasticsearch that match both the time range and other filters. From your data, this matches:
{"timestamp":"2021-02-02T09:55:00.000Z","name":"Bravo","field1":100}
{"timestamp":"2021-02-02T09:51:40.000Z","name":"Bravo","field1":300}
{"timestamp":"2021-02-02T09:55:00.000Z","name":"Charlie","field1":100}
{"timestamp":"2021-02-02T09:53:20.000Z","name":"Charlie","field1":200}
{"timestamp":"2021-02-02T09:51:40.000Z","name":"Charlie","field1":300}
  2. Finds all the values of name, given the documents above. From those documents, that's Bravo and Charlie.

  3. For each value of name, splits the documents into date buckets. This results in something like this:

Bravo:
  bucket 1: {"timestamp":"2021-02-02T09:55:00.000Z","name":"Bravo","field1":100}
  bucket 2: {"timestamp":"2021-02-02T09:51:40.000Z","name":"Bravo","field1":300}
Charlie:
  bucket 1: {"timestamp":"2021-02-02T09:55:00.000Z","name":"Charlie","field1":100}
  bucket 2: {"timestamp":"2021-02-02T09:53:20.000Z","name":"Charlie","field1":200}
  bucket 3: {"timestamp":"2021-02-02T09:51:40.000Z","name":"Charlie","field1":300}

  4. Applies the TSVB settings to the buckets above. With "last value" and "drop last bucket", the bucket that gets displayed is the 09:53:20 one. Even though that bucket only contains values from Charlie, Bravo is still a top-level key that exists in your dataset, so we show it.

The reason step 4 is not a bug is that we have found that most users want to see all of the named entities in their data, and I believe that we have the right default already. We do sometimes add extra settings to TSVB, and maybe what you are looking for would fall into that category.
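
If you want to see this for yourself outside of TSVB, you can run a roughly equivalent search directly. This is my approximation using the fields from your test data, not the exact request TSVB sends; it assumes the index is called tsvb-test and that dynamic mapping created a name.keyword sub-field:

POST tsvb-test/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "range": { "timestamp": { "gte": "2021-02-02T09:51:40.000Z", "lte": "2021-02-02T09:55:00.000Z" } } },
        { "range": { "field1": { "lt": 14400 } } }
      ]
    }
  },
  "aggs": {
    "group_by": {
      "terms": { "field": "name.keyword", "size": 10 },
      "aggs": {
        "dates": {
          "date_histogram": { "field": "timestamp", "fixed_interval": "100s" },
          "aggs": {
            "max_field1": { "max": { "field": "field1" } }
          }
        }
      }
    }
  }
}

In the response, Bravo still appears as a terms bucket, but its 09:53:20 date bucket contains no matching documents, so its max value is null there; that lines up with the null you saw in the TSVB response data.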

Thanks for your detailed explanation. Sincerely appreciated.

However, "This is how it works" is never a good reason for doing something wrong.

Displaying a numeric value without underlying numeric data for that value is wrong.

That is what TSVB is doing here. It's displaying the numeric value 0 without underlying numeric data to support that value.

Furthermore, because of the order of processing, it's not doing that consistently. Here, I'm referring to consistency from the user's perspective—what the user sees: the displayed visualizations—as opposed to the consistency of the underlying processing.

With reference to the example data I provided:

  • Alpha does not appear in the results of the initial query (that you cited) because all of its field1 values are 14400
  • Bravo appears because some of its field1 values are not 14400

I deliberately specified the data for Alpha and Bravo with this difference, to highlight the inconsistency in the visualizations displayed by TSVB.

Alpha and Bravo have the same field1 value, 14400, for the timestamp in question. However, TSVB displays a gauge for Bravo, but not for Alpha. This is the inconsistency to which I am referring.

My position:

  • Displaying a gauge for Bravo, with the bogus value 0, is a bug
  • Not displaying a gauge for Alpha is correct behavior

You might argue: "No, there is no inconsistency here. The underlying processing is consistent—the same steps are performed in both cases—as I have already explained to you". You would be missing my point: from the user's perspective, for the timestamp in question, both Alpha and Bravo have the same field1 value, 14400. However, TSVB treats them differently. Why? Because all of Alpha's field1 values are 14400, whereas only some of Bravo's field1 values are 14400. That's a poor reason; especially in the context of the "last value" option, where the user is interested only in the value for a particular time interval.

Do you still maintain that there is no bug here, even after I have pointed out to you the difference in behavior—the visualizations displayed—for Alpha and Bravo?

In that case: do you consider the omission of a gauge for the "named entity" Alpha to be a bug? Do you think that it, too, should appear with a (bogus!) 0-value gauge, as TSVB displays for Bravo?

What I am looking for is a solution that displays a numeric value only where there is underlying numeric data to support that value.

(How) do you expect users to distinguish between:

  • A gauge that displays the value 0 without underlying numeric data (that is, based on a null field value in the API response)
  • A gauge that displays the value 0 based on a calculation (aggregation) of actual numeric field values

Or do you think such a distinction is insignificant?

Your responses so far confirm the validity of the statement that I made earlier: that, with this combination of TSVB features, you should treat any zero values in the resulting visualizations with suspicion.

I invite your rebuttal of that statement. I don't want it to be true.

I ran this example data in TSVB and I think you're right that there are some unexpected or buggy behaviors with the "last value" mode. It is a bug to show zero instead of null. The idea of showing all series names—even ones with null values—seems to be working as intended but I think we could open an issue to track requests for this.

To clarify the second point: every TSVB panel type is based on the first view, the "Time Series". Compare the time series to the other types and you'll see that we are showing the same series, but different values. So that's why I say the zero values are a bug, but the series display is not a bug.

Since TSVB does have this bug, your only workaround is to use the Gauge type that's not part of TSVB. You can take advantage of the per-panel time range in your dashboard, which will let you compare multiple time periods side by side.

The per-panel time range feature is also why I don't ever recommend the "last value" mode of TSVB. It causes problems for our users.

Thank you for that acknowledgment, and for taking the time to discuss this topic.

Yes, I agree that would be a good idea. I am happy to open that issue myself.

Unless you object (I will wait until Friday 10 AM UTC+8 for your reply), I will open an issue in the elastic/kibana repository on GitHub. In that issue, I plan to copy selected excerpts from this forum topic, and also link to this topic.

I am hoping that opening an issue will do more than "track requests"; I am hoping it will prompt code changes by the Elastic dev team that correct this behavior; or at least, offer an option to correct this behavior.

Yes. Earlier in this topic, I reported a related observation: the legend of the TSVB Time Series viz showed no value (a blank) for those terms, rather than zero.

Just for completeness, I thought it was worth mentioning that, understandably (given the observations in this thread), all of the TSVB visualizations, including Time Series, exhibit the same inconsistency that I reported for the gauges: with the data filter field1 < 14400, they show Bravo, but not Alpha. A consistent inconsistency :wink:.

I accept that the series display is your (the Elastic dev team's) choice: the result of a deliberate decision.

Yes, I've started looking at that. I'm having trouble reproducing some of the features (such as Numeral.js '00:00:00' formatting) that are available in TSVB (I need to refamiliarize myself with the JSON input field). But I'll leave that to other topics, and perhaps to the original developer.

Thanks again for your time on this topic. I was initially mystified by the behavior that I was observing. You've been a big help in clarifying what's going on here, and I sincerely appreciate your acknowledgment that there is, indeed, a bug.

Since yesterday I have opened some issues and a pull request for the clear bug; it turns out it's a small code change.

Bug issue: [TSVB] When using "last value" mode with terms aggregation, null data is replaced by zeroes · Issue #90220 · elastic/kibana · GitHub
Bug fix PR: [TSVB] Stop inserting 0 for series that should be null by wylieconlon · Pull Request #90224 · elastic/kibana · GitHub

Improvement of last value mode and series handling: [TSVB] Show an indicator when using Last Value mode · Issue #90248 · elastic/kibana · GitHub

Thanks for opening those issues.

Before moving on from this topic, I want to revisit the use case that prompted it; in particular, regarding this:

I deliberately deferred responding in detail to this until now, because I did not want the specific—perhaps, idiosyncratic—details of the use case to deflect attention from the identified bug.

Returning to the original use case...

The description of the field CPCHRMSU is "Time until capping"; or, depending on which doc you read, "Remaining time until capping".

What's "capping"? Here, it's sufficient to know that capping is system behavior that is typically undesirable. It's useful to know when systems are at risk of capping.

However, the description "[Remaining] time until capping" is simplistic. That description is accurate for most, but not all, values of CPCHRMSU.

Capping is based on a rolling 4-hour period. The maximum value of CPCHRMSU, 14400 (in seconds; meaning, 4 hours), does not mean "capping will start in 4 hours". Rather, it means "there is currently no risk of capping". It might also mean, although I'm less certain of this, "an actual value cannot be determined".

The value 0 means "already capped"; although—again, I'm less certain of this—under some specific circumstances, depending on the value of other fields, it might also have another meaning. As you can tell, I'm not a subject-matter expert here.

The intent of the original visualization is to show systems that are at risk of capping.

The original visualization had no data filter. It showed gauges with the label/value combination "Time until capping 4:00:00" (I'd used the Numeral.js syntax '00:00:00' to format the displayed value).

As already discussed, that label/value combination is bogus, misleading. Furthermore, in the context of this "at risk" viz, those gauges are noise, because, with that value, those systems are not at risk of capping. Systems with that value should not appear in the viz.

That is why I applied the data filter CPCHRMSU < 14400, triggering the bug described earlier in this topic. At least for the data I was visualizing (which was much richer than the "3-item" example data I posted to this topic), most systems were not at risk of capping. The "at risk" gauges were swamped by all the "Time until capping 4:00:00" noise.

Minor point: I acknowledge that I can use color to distinguish between "at risk" and "not at risk" systems. For example, I can display gauges with the value 14400 (4 hours) in green, and other values in yellow, or red. Still, the user must ignore a sea of green gauges.

I accept that you (the Elastic dev team) have deliberately decided to show all series names, even those with no underlying value.

However, I want to point out that, for this use case, that decision has two undesirable effects:

  • Noise: in a visualization intended to show only "at risk" systems, that decision means also showing systems that are not at risk
  • A false statement: "Time until capping 4:00:00"

One might argue, "Okay, so there's noise. But those gauges are green, and informed users—subject-matter experts who understand the details of the underlying calculation—will understand that 'Time until capping 4:00:00' doesn't really mean that."

I make the following general statement, which I think has broad applicability beyond this topic: ironic humor aside, making a statement that is literally false—with the excuse that readers or listeners will understand that statement to be literally false, and instead understand it to mean something else—is perilous.

I'd appreciate your thoughts and any new suggestions on this.

Meanwhile, following your existing suggestion, I've started using non-TSVB gauges, with a "Top Hit" aggregation (which, counterintuitively, and please correct me if I'm wrong, appears to correspond to the TSVB "Last value" option) with a CPCHRMSU < 14400 filter "baked-in" to the visualization. That filter omits the noise.

(I've contacted the original viz developer: they don't need to drop the last bucket.)

One unfortunate consequence of "omitting the noise", though (and here, you might accuse me of wanting to have my cake and eat it, too): in TSVB, the extent of (angle covered by) each gauge arc was relative to 14400, whereas in the non-TSVB gauges the arcs are relative to the largest value in the displayed results. I'd like to make them relative to the maximum value of 14400, but I don't know how to do that without re-introducing the "noise".