Kibana suddenly rounding number field to single digit

Hi All,

Apologies in advance, as this is likely to be a stupid error on my part, but it's got me stumped at the moment.

I'm running Elasticsearch and Kibana 7.12 and for some months have been using Logstash to ingest logging data. I use the Elapsed plugin in Logstash to populate a field on certain records with the processing latency across a component; basically it calculates:
(end processing time - start processing time) = elapsedtime
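
For context, the relevant part of my Logstash filter looks roughly like this (simplified; the tag names and pairing field here are illustrative, not my exact values):

filter {
	elapsed {
		# Tags that mark the start and end of processing for a component
		start_tag       => "component_start"
		end_tag         => "component_end"
		# Field whose value pairs up matching start and end events
		unique_id_field => "correlation_id"
	}
	# The elapsed filter writes the duration in seconds (with fractional
	# milliseconds) to 'elapsed_time' on the end event; I rename it to
	# my elapsedtime field.
	mutate {
		rename => { "elapsed_time" => "elapsedtime" }
	}
}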

This field (elapsedtime) is defined to 3 decimal places in order to show milliseconds. The field definition is set as:
0,0.[000]

This had all been working fine, and then, for some reason, from 00:00:00 on 7th June 2021 (01:00:00 BST) my data in Kibana started displaying the elapsedtime field rounded/shortened to just a single digit.
This is shown in the screenshot below. Notice the two records immediately prior to 01:00:00 display the value for elapsedtime correctly (as I'd expect) whilst those after it do not.

What is confusing me is that, if I look at the underlying Elasticsearch documents for these IDs, they contain the full-precision values for elapsedtime. For example, Kibana displays document ID 5HZkD3oBCcqIWceCDTch as having an elapsedtime value of 3 (see screenshot above), whilst Elasticsearch shows that the underlying document contains the correct value of 3.039, as per the screenshot below:

Previously, Kibana would have been displaying the value 3.039, not 3.

Can anyone explain what might be happening here? As far as I can recall, I have not changed the index or field definitions recently, nor altered the Elasticsearch, Kibana or Logstash processing.

Thanks in advance,
Steve

This is very strange because you are seeing decimal places on other values for the same field. I have some questions:

  1. What is the value of your Kibana advanced setting discover:searchFieldsFromSource?

  2. For the sample document that you just shared, can you please expand the document from Discover? It looks like your screenshot is from the single-doc view, not from the expanded view, and there might be a difference.

  3. Would you mind showing what your index pattern field formatter is for the elapsedtime field?

  4. What is the value of your Kibana advanced setting format:number:defaultPattern?

Hi Wylie,

Glad it's not just me confused...! Thanks for the quick follow up too...!

Below is the setting for discover:searchFieldsFromSource:

Below is a portion of the expanded document in Kibana, showing elapsedtime as '3' rather than the expected '3.039'. It shows as '3' in the expanded view too, so this isn't an artifact of the line view.

The index pattern field formatter for my elapsedtime field is as follows:

and finally the setting for format:number:defaultPattern is:

I'm not sure I see anything obviously wrong with any of these... and this all seems to be working fine on records prior to that timestamp... :confused:

All help appreciated!

Regards,
Steve

The issue is not in Kibana; it is happening in Elasticsearch. I believe your mapping for the elapsedtime field in Elasticsearch treats this as an integer field, not a floating-point field. Starting in 7.12, the default behavior is to show what Elasticsearch indexed, not the value of _source. These values can be different, and I will demonstrate how.

First, create an index where the field is mapped as an integer, with a double multi-field:

PUT test-index
{
  "mappings": {
    "properties": {
      "elapsed": {
        "type": "integer",
        "fields": {
          "double": {
            "type": "double"
          }
        }
      }
    }
  }
}

Then, add a new document with a double value:

PUT test-index/_doc/1?refresh=true
{
  "elapsed": 3.032
}

Then search both the unmodified _source and the values indexed by Elasticsearch:

POST test-index/_search
{
  "fields": ["*"]
}

This produces:

    "hits" : [
      {
        "_index" : "test-index",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "elapsed" : 3.032
        },
        "fields" : {
          "elapsed" : [
            3
          ],
          "elapsed.double" : [
            3.032
          ]
        }
      }
    ]

Notice that elapsed.double contains the full-precision value, but elapsed contains only the truncated integer. This appears to be what's happening to you.


Hi Wylie,

That looks like it could be the problem. I had thought I'd moved to 7.12 a little while ago (and processed log events into Elastic without seeing this issue) but I guess it's possible that this is the first time I've imported new log records since upgrading from 7.11 to 7.12.

Could you recommend the most straightforward solution for me to get this sorted, or point me at the documentation that will help me figure it out?

As usual, thanks very much for the really quick and informative response.

Regards,
Steve

I think the solution depends on what you want, which I think are two possible things:

Do you need the accurate value of elapsed for any of your aggregations or filters? If so, you would need to reindex your Logstash data into a duplicate index where you've fixed the mappings, as there is no way to change an existing field's mapping in place. There is also a much less performant option: add a runtime mapping that shadows your existing elapsed field with a new data type. Reindexing gives much better query performance, though.
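
For illustration, here is roughly what the runtime-field approach looks like against the test index from my earlier example. A runtime field with the same name shadows the indexed field and, having no script, reads the full value from _source at search time:

POST test-index/_search
{
  "runtime_mappings": {
    "elapsed": {
      "type": "double"
    }
  },
  "fields": ["elapsed"]
}

With the shadowing runtime field in place, the fields response returns 3.032 instead of 3, but every query touching elapsed now pays the cost of reading _source, which is why reindexing performs better.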

If you don't need the value of elapsed for any aggregations or filtering, and you just want to see it in Discover, you can enable discover:searchFieldsFromSource to see the raw _source value.

Thanks Wylie,

I'll take a look into this tomorrow. As I do use this field for filtering etc., it sounds like reindexing is the way to go.

Thanks,
Steve

Hi Wylie,

Another quick question on this, as I've not reindexed before.

I had a look at the 7.13 Reindex docs, in particular this section: Reindex API | Elasticsearch Guide [7.13] | Elastic

Is this the correct approach to take in my case: modify my current template to make elapsedtime a double rather than an integer, and then run the reindex to convert all existing documents?

This seems to create indexes with different names to the originals, which I'm pretty sure will break my existing Kibana visualisations and searches. Would I need to perform a second reindex run 'back again' to restore the index names to their original values, or is there a better solution? (I suspect there is.)

Thanks,
Steve

Yes, please be careful and take a snapshot before doing anything like a reindex. The general strategy shown there is accurate, which is:

  1. Update the template
  2. Identify all the indices that you need to migrate
  3. Reindex from an index like logstash-1 to logstash-1-reindexed (a sketch follows this list). If you are doing an async reindex, make sure to check the task output to verify there were no errors.
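
As a rough sketch of step 3, using the example index names above (this assumes an async reindex; substitute the task ID from the reindex response):

POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "logstash-1"
  },
  "dest": {
    "index": "logstash-1-reindexed"
  }
}

GET _tasks/<task_id from the reindex response>

Only once the task has completed without failures:

DELETE logstash-1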

Once the reindex completes successfully, you can delete the previous index as shown above; because of the wildcard, your existing index patterns will match the new data.

There is a slightly different strategy that you could consider if the field name isn't important: you can add a new field to the mapping of your existing index; you just can't modify a field's mapping once it has been created.
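
As a sketch of that approach: if elapsed had been mapped as a bare integer, you could add a double sub-field to it afterwards (adding new multi-fields to an existing field is an allowed mapping change, but the parent type must be repeated exactly as it already is):

PUT test-index/_mapping
{
  "properties": {
    "elapsed": {
      "type": "integer",
      "fields": {
        "double": {
          "type": "double"
        }
      }
    }
  }
}

Existing documents won't have indexed values for elapsed.double until they are reindexed or updated, so this mainly helps data arriving after the change.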

Hi Wylie,

Apologies for this but I'm a little lost in the process of updating the 'template' as I'm not entirely sure where or what the template is...

I have a Kibana Index Pattern for the log events I ingest into Elasticsearch and then query and visualise in Kibana. This index pattern contains entries for each of the fields, including the pesky 'elapsedtime' number field. See screenshot below:

However, I'm not able to update this Index Pattern, and from your help yesterday I think what I actually need to update (somehow) lives within Elasticsearch rather than being this Index Pattern in Kibana?

Looking within Data/Index Management/Index Templates I can see a 'Legacy index template' called "logstash" with an index pattern of "logstash-*". This would match against my created indexes which are named according to the pattern:
logstash-clef-prd-<year>.<week#>

I can edit this template; however, when I get to the 'Mapped fields' section (where I was hoping I might see all the data fields that I can search and filter on in Kibana), all I see is three fields:

There is nothing under 'Runtime fields' or 'Dynamic templates'.

Could you point me in the right direction as to which 'index template' I need to update, and how to actually update it? I've checked through the documentation and haven't been able to figure it out so far.

For information, my data is being ingested into Elasticsearch via Logstash using the JDBC input. I'm not sure if that has any bearing on how Elasticsearch creates the index/record structure.

On a positive note, I've managed to create a snapshot so it's not all bad news.... :slight_smile:

Thanks,
Steve

It looks like your legacy template should be updated, as you are having mapping problems with logstash-*. Since you don't have a mapping or dynamic template for the elapsedtime field, Elasticsearch appears to be picking the mapping for you. What does the mapping look like from GET logstash-clef-prd-2021.23/_mapping? Is your Logstash creating mappings by configuration, instead of by template?

Hi Wylie,

Sounds like this is a bit more complicated... :frowning:

Below is the result of:
GET logstash-clef-prd-2021.22/_mapping

I've shortened this quite a bit (there are normally 30 or so more entries under the 'properties' section) but hopefully this still shows you what you need to see.

{
  "logstash-clef-prd-2021.22" : {
    "mappings" : {
      "dynamic_templates" : [
        {
          "message_field" : {
            "path_match" : "message",
            "match_mapping_type" : "string",
            "mapping" : {
              "norms" : false,
              "type" : "text"
            }
          }
        },
        {
          "string_fields" : {
            "match" : "*",
            "match_mapping_type" : "string",
            "mapping" : {
              "fields" : {
                "keyword" : {
                  "ignore_above" : 256,
                  "type" : "keyword"
                }
              },
              "norms" : false,
              "type" : "text"
            }
          }
        }
      ],
      "properties" : {
        "@timestamp" : {
          "type" : "date"
        },
        "@version" : {
          "type" : "keyword"
        },
        "applicationname" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          },
          "norms" : false
        },
        "data" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          },
          "norms" : false
        },
        "elapsedtime" : {
          "type" : "float"
        },
        "geoip" : {
          "dynamic" : "true",
          "properties" : {
            "ip" : {
              "type" : "ip"
            },
            "latitude" : {
              "type" : "half_float"
            },
            "location" : {
              "type" : "geo_point"
            },
            "longitude" : {
              "type" : "half_float"
            }
          }
        },
        "id" : {
          "type" : "long"
        },
        "msg" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          },
          "norms" : false
        },
        "targetsystem" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          },
          "norms" : false
        }
      }
    }
  }
}

Apologies, but with regards to the Logstash question and whether it's creating mappings by configuration or by template, I'm afraid I'm not clear on that either.

I can post my Logstash configuration file if that helps, but basically it uses the JDBC input to read from a DB, has some filters to set a few tags on events, runs the 'elapsed' plugin, and then uses the elasticsearch output to send the data to Elasticsearch. I don't reference any Elasticsearch templates anywhere within the Logstash config, if that's what you mean. The 'output' section of my Logstash config simply looks as follows:

output {
	elasticsearch {
		# Create indexes named by Year and ISO week number.
		# Avoids the cost of large numbers of daily indexes.
		index => "logstash-clef-prd-%{+xxxx.ww}"
		hosts => ["http://lt8649:9200"]
	}
}

Thanks for all the assistance so far; hopefully the above helps to clarify my setup.

Regards,
Steve

Hi Wylie,

Just to update: you were correct, my legacy template did not provide an explicit mapping for this field, and this resulted in an issue when we upgraded to Elasticsearch 7.12.

I've now added the mapping to the legacy template, reindexed all my existing data and everything appears to be working fine again.
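
For anyone finding this later, the equivalent API change looks roughly like this, trimmed to just the relevant part (note that PUT _template replaces the whole legacy template, so GET the existing body first and re-submit it in full with the new mapping added):

GET _template/logstash

PUT _template/logstash
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "elapsedtime": {
        "type": "double"
      }
    }
  }
}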

Thanks for your help.
Steve

