Sounds nonsensical, but I can give an example and explain why it happens.
Suppose that both of the following conditions are true:
- On the Kibana Discover page, a table of search results shows the value `3.142` in the column for a field named `myfield`.
- On the Kibana Management / Kibana / Indices / Index Pattern page, the Fields tab shows that the field `myfield` has the type `number` (and is searchable and aggregatable).
Under these conditions, you might expect that entering `myfield:>3` in the Discover search bar would find the document with the `myfield` value shown as `3.142`.
But it doesn’t.
If you do not define a mapping for a floating-point field, and the first value for that field that you forward to an index is an integer (a value without a decimal point, such as `0`), then, by default, Elasticsearch dynamically maps that field to the `long` (long integer) type.
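A minimal sketch of that behavior, in Kibana Console (Dev Tools) request format; the index name `test`, the type name `doc`, and the document IDs are placeholders of my own:

```
# No mapping exists for "myfield" yet, and the first value is an integer,
# so Elasticsearch creates a dynamic mapping for the field.
PUT test/doc/1
{
  "myfield": 0
}

# The dynamically created mapping gives "myfield" the long type:
GET test/_mapping
```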
If you subsequently forward a floating-point value for that field (such as `3.14159`), Elasticsearch sets the doc value to the truncated integer value (`3`). And it’s the doc value that Elasticsearch (and, hence, Kibana) uses for searching.
However, Kibana search results show field values from the `_source` field, not the doc value.
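If I’ve understood this correctly, you can see the two values side by side by asking a search to return doc values explicitly, via `docvalue_fields` (continuing the sketch with the same placeholder index):

```
# Forward a floating-point value to the field, which is now mapped as long.
PUT test/doc/2
{
  "myfield": 3.14159
}

# Request the doc value alongside _source:
GET test/_search
{
  "docvalue_fields": ["myfield"]
}

# For document 2, the hit contains "_source": { "myfield": 3.14159 },
# but the doc value in "fields" is the truncated [3].
```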
That’s why the results of `myfield:>3` don’t include the document that Kibana showed with the `myfield` column value of `3.142` (that is, the `_source` value, rounded to the precision specified by the default pattern for the number format). The truncated value `3` is not greater than `3`.
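In query form, that Discover search corresponds to a range query, which is compared against the truncated value (same placeholder index):

```
# The Discover query myfield:>3 corresponds to:
GET test/_search
{
  "query": {
    "range": {
      "myfield": { "gt": 3 }
    }
  }
}

# The value Elasticsearch compares is the truncated 3,
# and 3 is not greater than 3, so there are no hits.
```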
What’s my point?
This happened to me. Don’t let it happen to you.
The bleeding obvious: one way to stop this happening is to define index templates with field mappings before you forward data. That way, you don’t rely on Elasticsearch to detect whether a numeric field is a `float` or a `long` based on the presence or absence of a decimal point in the first field value.
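For example, here is a minimal template that pins the field to `float` before any data arrives. The template name, index pattern, and mapping type are placeholders, and this is the legacy template syntax of the Elasticsearch 5.x era (later versions use `index_patterns`, and eventually composable templates):

```
PUT _template/floating_point_fields
{
  "template": "logstash-*",
  "mappings": {
    "doc": {
      "properties": {
        "myfield": { "type": "float" }
      }
    }
  }
}
```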
Another way, if you use Logstash, and if your logs contain some integer values (numbers without decimal points) for floating-point fields, is to use the `convert` setting (for example, in the `csv` filter) to convert the field to a `float`. That appends a trailing `.0` to any such values, ensuring that Elasticsearch correctly maps the field to `float` (or `double`, if you’re still using Elasticsearch 2).
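For example, in a `csv` filter (the column names are placeholders):

```
filter {
  csv {
    columns => ["timestamp", "myfield"]
    # Convert myfield to a float, so that an integer value such as 0
    # is forwarded as 0.0, and the field gets a floating-point mapping.
    convert => { "myfield" => "float" }
  }
}
```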
Otherwise, if you are responsible for creating the original log data: think twice before writing integer values for floating-point fields as integers, just because it’s more concise.
Experienced Elastic users: one reason I’m bothering to write this topic is to confirm that I’ve understood what’s happening here. Please correct me if I’m wrong about any of this.
For background information on this topic, see the Elasticsearch discussion topic “Storing a floating-point value in a long field: what, no mapping error?”, with thanks to @cbuescher for the very helpful replies.