Search results show field values that search does not find

That might sound nonsensical, but here's an example, and an explanation of why it happens.

Example

Suppose that both of the following conditions are true:

  • On the Kibana Discover page, a table of search results shows the value 3.142 in the column for a field named myfield.
  • On the Kibana Management / Kibana / Indices / Index Pattern page, the Fields tab shows that the field myfield has the type number (and is searchable and aggregatable).

Under these conditions, you might expect that entering myfield:>3 in the Discover search bar would find the document with the myfield value 3.142.

But it doesn’t.

Why?

If you do not define a mapping for a floating-point field, and the first value for that field that you forward to an index is an integer—a value without a decimal point, such as 0—then, by default, Elasticsearch dynamically maps that field to the long (long integer) type.

If you subsequently forward a floating-point value for that field (such as 3.14159), Elasticsearch coerces it and stores the truncated integer value (3) as the doc value. And it's the doc value that Elasticsearch (and, hence, Kibana) uses for searching.

However, Kibana search results show field values from the _source field, not the doc value.

That's why the results of myfield:>3 don't include the document whose myfield column showed 3.142. The column shows the _source value (rounded to the precision specified by the default pattern for the number format), but the search compares against the doc value, and 3 is not greater than 3.

What’s my point?

This happened to me. Don’t let it happen to you :slight_smile: .

The bleeding obvious: one way to stop this from happening is to define index templates with field mappings before you forward data. That way, you don't rely on Elasticsearch to detect whether a numeric field is a float or a long based on the presence or absence of a decimal point in the first field value.
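As a sketch, an index template along these lines maps myfield explicitly to float before any data arrives. The template and index names here are placeholders, and the syntax shown is the legacy _template API with mapping types, matching the Elasticsearch versions discussed in this thread; newer Elasticsearch versions use _index_template and drop mapping types:

```
PUT _template/my-template
{
  "template": "my-index-*",
  "mappings": {
    "mytype": {
      "properties": {
        "myfield": {
          "type": "float"
        }
      }
    }
  }
}
```

With a template like this in place, an integer first value such as 0 is still indexed as a float, and myfield:>3 matches 3.142 as expected.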

Another way, if you use Logstash, and if your logs contain some integer values (numbers without decimal points) for floating-point fields, is to use the convert setting (for example, in the csv filter) to convert the field to a float. That appends a trailing .0 to any such values, ensuring that Elasticsearch correctly maps the field to float (or double, if you’re still using Elasticsearch 2).
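As a hedged sketch of that Logstash option (the column names here are made up for illustration):

```
filter {
  csv {
    # Hypothetical column names for an example CSV log
    columns => ["timestamp", "myfield"]
    # Parse myfield as a float, so "3" becomes 3.0
    # before the event reaches Elasticsearch
    convert => {
      "myfield" => "float"
    }
  }
}
```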

Otherwise, if you are responsible for creating the original log data: think twice before writing integer values of floating-point fields as integers, just because it’s more concise.

Experienced Elastic users: one reason I’m bothering writing this topic is to confirm that I’ve understood what’s happening here. Please correct me if I’m wrong about any of this.

Related topics

For background information on this topic, see the Elasticsearch discussion topic “Storing a floating-point value in a long field: what, no mapping error?”, with thanks to @cbuescher for the very helpful replies.

Thanks GrahamHannington for the helpful posts!

For a simple example, in Kibana Console I can create a new index and check the mapping:

POST new-index/mytype
{
  "field1": 100
}

GET new-index/mytype/_mapping

And the result of the _mapping is:

{
  "new-index": {
    "mappings": {
      "mytype": {
        "properties": {
          "field1": {
            "type": "long"
          }
        }
      }
    }
  }
}

If I post a new doc with a floating-point value for that field, I get the same results as Graham describes. I can see the _source value in the Kibana Discover tab, but if I search field1:>100 I get no results:

POST new-index/mytype
{
  "field1": 100.1
}

But if the first doc I post has the value 100.1, then Elasticsearch dynamically sets the mapping for field1 to float, and future docs whose values look like integers/longs are coerced to float.
(docs on coerce settings)
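For example, with a fresh index (the index name here is hypothetical), posting the floating-point value first:

```
POST another-index/mytype
{
  "field1": 100.1
}

GET another-index/mytype/_mapping
```

The _mapping response then shows field1 with "type": "float", and a later doc with "field1": 100 is coerced to a float value rather than changing the mapping.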

Regards,
Lee

