Kibana 6.2.3: unexpected data in line chart

Hi,

I want to display some values in a line chart; they are in the range 0.00 to 1.00 (percentages).

When filtering in the Discover panel, I see the following values:
[screenshot: field values as shown in Discover]

Here is the relevant part of the JSON as an example:

...
 "system": {
      "process": {
        "ppid": 5228,
        "detailedId": "txs_statistics.exe SYSTEM 5088",
        "pgid": 0,
        "state": "running",
        "username": "NT AUTHORITY\\SYSTEM",
        "name": "txs_statistics.exe",
        "cmdline": "txs_statistics -C dom=amesprod -g 9 -i 105 -u LOGIPRODTUX11 -U G:\\amest\\LogFile\\ULOG\\ULOG -m 0 -A -- g:\\amest\\logfile\\statistics",
        "pid": 5088,
        "cpu": {
          "total": {
            "pct": 0.0684
          },
          "start_time": "2018-04-04T02:30:06.978Z"
        },
        "memory": {
          "size": 2539520,
          "share": 0,
          "rss": {
            "bytes": 18948096,
            "pct": 0.0003
          }
        }
      }
    },
...

I want to display the max of system.process.cpu.total.pct.

When I build a visualization, all values are truncated to 0.
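For reference, the query behind the visualization is roughly this (a simplified sketch; the index pattern is a guess based on my setup, and Kibana adds the time range and filters on top):

POST metricbeat-*/_search
{
  "size": 0,
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1m"
      },
      "aggs": {
        "1": {
          "max": {
            "field": "system.process.cpu.total.pct"
          }
        }
      }
    }
  }
}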

And the response from ES itself is already giving 0 as the value:

{
  "took": 21,
  "timed_out": false,
  "_shards": {
    "total": 86,
    "successful": 86,
    "skipped": 79,
    "failed": 0
  },
  "hits": {
    "total": 10,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "2": {
      "buckets": [
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2018-04-10T08:20:00.000+02:00",
          "key": 1523341200000,
          "doc_count": 1
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2018-04-10T08:21:00.000+02:00",
          "key": 1523341260000,
          "doc_count": 1
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2018-04-10T08:22:00.000+02:00",
          "key": 1523341320000,
          "doc_count": 1
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2018-04-10T08:23:00.000+02:00",
          "key": 1523341380000,
          "doc_count": 1
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2018-04-10T08:24:00.000+02:00",
          "key": 1523341440000,
          "doc_count": 1
        },
...

The strange thing is, when I change the field to another field holding percentages, everything is fine:

...
  "aggregations": {
    "2": {
      "buckets": [
        {
          "1": {
            "value": 0.00989999994635582
          },
          "key_as_string": "2018-04-10T08:20:00.000+02:00",
          "key": 1523341200000,
          "doc_count": 302
        },
        {
          "1": {
            "value": 0.00989999994635582
          },
          "key_as_string": "2018-04-10T08:21:00.000+02:00",
          "key": 1523341260000,
          "doc_count": 302
        },
        {
          "1": {
            "value": 0.00989999994635582
          },
          "key_as_string": "2018-04-10T08:22:00.000+02:00",
          "key": 1523341320000,
          "doc_count": 302
        },
...
What is the problem? How can I fix it?

I think I found the issue while adding more data to this post, but how do I fix it?

The mapping is different between the two fields: memory.rss.pct is mapped as float, but cpu.total.pct was dynamically mapped as long.

"memory": {
	"properties": {
	  "rss": {
		"properties": {
		  "bytes": {
			"type": "long"
		  },
		  "pct": {
			"type": "float"
		  }
		}
	  },
	  
	  
	  
	  
	  
"cpu": {
	"properties": {
	  "start_time": {
		"type": "date"
	  },
	  "total": {
		"properties": {
		  "pct": {
			"type": "long"
		  }
		}
	  }
	}
  },
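If I understand numeric coercion correctly, that explains it: a value like 0.0684 indexed into a long field is truncated to 0, so the aggregation (which reads the indexed value) returns 0, while Discover (which reads _source) still shows the original numbers. It should be reproducible with a scratch index like this (a sketch, untested; all names made up):

PUT pct_test
{
  "mappings": {
    "doc": {
      "properties": {
        "pct": { "type": "long" }
      }
    }
  }
}

PUT pct_test/doc/1?refresh
{ "pct": 0.0684 }

GET pct_test/_search
{
  "size": 0,
  "aggs": {
    "max_pct": { "max": { "field": "pct" } }
  }
}

The max aggregation should come back as 0 even though the stored _source still contains 0.0684.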

But how do I fix it?
Should I define an index mapping template? Currently I am using dynamic mapping.
When I define a mapping template, what will happen if a new field is shipped by Logstash? Will it be added to the index?
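Something like the following is what I have in mind (a sketch, untested; the template name is made up, and the "doc" type matches the Beats/Logstash 6.x default):

PUT _template/metricbeat_pct_fix
{
  "index_patterns": ["metricbeat-*"],
  "mappings": {
    "doc": {
      "properties": {
        "system": {
          "properties": {
            "process": {
              "properties": {
                "cpu": {
                  "properties": {
                    "total": {
                      "properties": {
                        "pct": {
                          "type": "float"
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

If I read the docs right, dynamic mapping stays enabled by default, so a new field shipped by Logstash should still be added automatically; the template only pins the types of the fields it lists, and it only takes effect for newly created indices. But I would like to confirm that.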

For old data, do I need to reindex?

Hey there, yep, it looks like dynamic mapping is the issue. Please take a look at the Elasticsearch mapping docs, which recommend creating a new index with the correct mapping and then reindexing your data into that index.
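A minimal sketch of the reindex step (both index names here are placeholders for your real indices): if you put an index template like the one you sketched above in place first, the destination index will pick up the float mapping when it is created.

POST _reindex
{
  "source": { "index": "metricbeat-old" },
  "dest": { "index": "metricbeat-new" }
}

And yes, for old data a reindex is required, since the type of an existing field cannot be changed in place.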

Hope this helps,
CJ