Cannot read property 'length' of undefined ... bundles/vendors.bundle.js?v=16602:17

Hi,

we have been running Elastic Stack 6.2.3 in production for a few days now.
The problem did not occur during our testing phase, but now we are facing the following issue in Kibana.

Uncaught TypeError: Cannot read property 'length' of undefined (https://myurl/kibana/bundles/vendors.bundle.js?v=16602:17)
Version: 6.2.3
Build: 16602
Error: Uncaught TypeError: Cannot read property 'length' of undefined (https://devtrunk.logiweb.de/kibana-TSI/bundles/vendors.bundle.js?v=16602:17)
    at window.onerror (https://myurl/kibana/bundles/commons.bundle.js?v=16602:21:467700)

Colleagues noticed it using Chrome; I had the same issues in Vivaldi (also Chrome-based).
Internet Explorer does not work at all (the address lines are too long).
Interestingly, it does not seem to occur for all users at the same time, even when they are using the same dashboard with the same time interval. But I am not 100% sure about that.

Which logs would help you track the error down?

Thanks, Andreas

Hey,

the browser's dev console (F12, or Cmd+Alt+I on Mac) should show a longer stack trace that also contains the lines causing the error.

Could you please provide that information, along with details on the following questions:

  • For the people who see this error: does it occur every time, or only sometimes?
  • Can you give some information on what types of visualizations are on the dashboard in question?

Cheers,
Tim

commons.bundle.js?v=16602:1 Error: Uncaught TypeError: Cannot read property 'length' of undefined (https://devtrunk.logiweb.de/kibana-TSI/bundles/vendors.bundle.js?v=16602:17)
    at window.onerror (commons.bundle.js?v=16602:21)
Notifier._showFatal @ commons.bundle.js?v=16602:1
Notifier.fatal @ commons.bundle.js?v=16602:1
wrapper @ vendors.bundle.js?v=16602:88
window.onerror @ commons.bundle.js?v=16602:21
vendors.bundle.js?v=16602:17 Uncaught TypeError: Cannot read property 'length' of undefined
    at formatNumber (vendors.bundle.js?v=16602:17)
    at formatNumeral (vendors.bundle.js?v=16602:17)
    at Numeral.format (vendors.bundle.js?v=16602:17)
    at NumeralFormat.value (commons.bundle.js?v=16602:16)
    at recurse (commons.bundle.js?v=16602:21)
    at addDetail (commons.bundle.js?v=16602:45)
    at Tooltip.formatter (commons.bundle.js?v=16602:45)
    at Binder.<anonymous> (commons.bundle.js?v=16602:45)
    at SVGCircleElement.<anonymous> (commons.bundle.js?v=16602:45)
    at SVGCircleElement.dispatch (vendors.bundle.js?v=16602:111)
    at SVGCircleElement.elemData.handle (vendors.bundle.js?v=16602:111)
formatNumber @ vendors.bundle.js?v=16602:17
formatNumeral @ vendors.bundle.js?v=16602:17
format @ vendors.bundle.js?v=16602:17
value @ commons.bundle.js?v=16602:16
recurse @ commons.bundle.js?v=16602:21
addDetail @ commons.bundle.js?v=16602:45
(anonymous) @ commons.bundle.js?v=16602:45
(anonymous) @ commons.bundle.js?v=16602:45
(anonymous) @ commons.bundle.js?v=16602:45
dispatch @ vendors.bundle.js?v=16602:111
elemData.handle @ vendors.bundle.js?v=16602:111
commons.bundle.js?v=16602:21 Uncaught Error: Uncaught TypeError: Cannot read property 'length' of undefined (https://devtrunk.logiweb.de/kibana-TSI/bundles/vendors.bundle.js?v=16602:17)
    at window.onerror (commons.bundle.js?v=16602:21)
window.onerror @ commons.bundle.js?v=16602:21

That's what I see in the developer console.

The first load of the page was OK. With auto-refresh enabled, the error occurs nearly 100% of the time.
Before enabling auto-refresh, refreshing again by clicking the magnifying glass works fine.

We use the same dashboard for different stages. One stage is fine; another stage now gives me the error permanently.

But since the first load works almost every time, it should not be the underlying data... I think.

This is definitely a bug. I am not sure if we already track this, but could you please be so kind as to open an issue for it with the above stack trace and all the information you can provide.

I've got the feeling this has already been reported at some point, but it is very hard to reproduce. Looking into it, it seems to be related to NaN or Infinity values (or possibly also very small values, e.g. 1e-10). Perhaps you could also provide information on the kind of "data range" you know your values should have?

Thanks a lot,
Tim

I think the following information would also be very useful:

  • Are there any custom field formatters (numeric) applied to any field in any of the used index patterns?
  • What is your format:number:defaultLocale setting in Management > Advanced Settings set to?

I will check this. Very small values are possible (1e-10).
We have a long weekend here; I will fetch the data next Wednesday.

No problem, I will just take the same long weekend and check back with you on Wednesday :slight_smile:

There are values like 8.606505135116292e-22 within the displayed diagrams.

The interesting thing is that it does not crash every time on reload.

I tried to find out which diagram is at fault, so I pinned my filter and opened the visualization via the edit button from the dashboard.
On my first panel I got the same issue (sometimes, not on every (re)load of the page).

That chart is indeed visualizing very small numbers. When checking the time interval in the Discover panel, I found the following interesting things:

The JSON shows: 8.606505135116292e-22
Kibana shows (marked in red): no value at all.

It is even more inconsistent when I open the single document:

So metricsM15 is shown in the table view, but also in the document view (although it is not stored in exponential notation).

This is how it looks in the mapping (shown in Kibana):

This is how my number patterns are set:
[screenshot of the number format settings]

In the index used, there are no scripted fields.

Also very strange: I am not able to reproduce the screenshots above.

The following query in Discover gives me the screen below:

"(type.keyword: ttpgwy_metrics OR logType.keyword: ttpgwy_metrics) AND metricsName.keyword: received.rate AND metricsM1: 8.606505135116292e-22

I get the following result:

The Discover panel generates the following query:

{
  "version": true,
  "size": 500,
  "sort": [
    {
      "@timestamp": {
        "order": "desc",
        "unmapped_type": "boolean"
      }
    }
  ],
  "_source": {
    "excludes": []
  },
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1m",
        "time_zone": "Europe/Berlin",
        "min_doc_count": 1
      }
    }
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {
    "system.cpu.used.pct_scr": {
      "script": {
        "inline": "doc['system.cpu.system.pct'].value  + doc['system.cpu.user.pct'].value",
        "lang": "painless"
      }
    },
    "system.process.pidDetails": {
      "script": {
        "inline": "doc['system.process.name.keyword'].value  + ' ' + doc['system.process.username.keyword'].value + ' ' + doc['system.process.pid'].value",
        "lang": "painless"
      }
    }
  },
  "docvalue_fields": [
    "@timestamp",
    "logstash.processing.filterEnd",
    "logstash.processing.filterStart",
    "pidCreationTime",
    "system.process.cpu.start_time"
  ],
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "query": "(type.keyword: ttpgwy_metrics OR logType.keyword: ttpgwy_metrics) AND metricsName.keyword: received.rate AND metricsM1: 8.606505135116292e-22",
            "analyze_wildcard": true,
            "default_field": "*"
          }
        },
        {
          "match_phrase": {
            "stage": {
              "query": "PreProd"
            }
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1524829477544,
              "lte": 1524835558149,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "filter": [],
      "should": [],
      "must_not": []
    }
  }
}

And this is the response:

{
  "took": 10,
  "hits": {
    "hits": [
      {
        "_index": "perf-staging-2018.04.27",
        "_type": "doc",
        "_id": "HHUAB2MB1NbPm5FDCar2",
        "_version": 1,
        "_score": null,
        "_source": {
          "logstash": {
            "processing": {
              "filterStart": "2018-04-27T12:07:13.138Z",
              "filterEnd": "2018-04-27T12:07:13.141Z",
              "filterTime": 3
            }
          },
          "serverType": "map",
          "stage": "PreProd",
          "source": "G:\\TTP-Gateway5645_IPVPN_TBM2\\metrics\\ttpgw-metrics.log",
          "offset": 847169,
          "metricsMeanRate": 0.07030142025283972,
          "metricsType": "METER",
          "metricsM5": 0.00005852577967023545,
          "@timestamp": "2018-04-27T12:07:08.340Z",
          "application": "ttpgwy_5645_TBM_2",
          "@version": "1",
          "hostName": "logippmap",
          "beat": {
            "version": "6.2.3",
            "name": "LOGIPPMAP",
            "hostname": "LOGIPPMAP"
          },
          "metricsM1": 8.606505135116292e-22,
          "metricsM15": 0.021910089437787685,
          "metricsName": "received.rate",
          "logType": "ttpgwy_metrics",
          "metricsCount": 504
        },
        "fields": {
          "logstash.processing.filterStart": [
            "2018-04-27T12:07:13.138Z"
          ],
          "logstash.processing.filterEnd": [
            "2018-04-27T12:07:13.141Z"
          ],
          "@timestamp": [
            "2018-04-27T12:07:08.340Z"
          ],
          "system.cpu.used.pct_scr": [
            0
          ],
          "system.process.pidDetails": [
            "null null 0"
          ]
        },
        "sort": [
          1524830828340
        ]
      }
    ],
    "total": 1,
    "max_score": 0
  },
  "aggregations": {
    "2": {
      "buckets": [
        {
          "key_as_string": "2018-04-27T14:07:00.000+02:00",
          "key": 1524830820000,
          "doc_count": 1
        }
      ]
    }
  }
}

Could this be related to the following bugfix?
https://www.elastic.co/guide/en/kibana/6.2/release-notes-6.2.4.html

For testing: is it safe to run Kibana 6.2.4 in parallel with 6.2.3 against a 6.2.3 backend? Or should I clone the Kibana index to be sure?

Hi,

it should be safe to run 6.2.4 in parallel, but of course if you want to be 100% safe, better make a backup beforehand and/or clone the index. Yes, that PR really looks like it could be the fix for your issue. Please keep me updated on whether 6.2.4 actually fixes it!
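
For reference, here is a rough sketch of how such a clone could be done with the snapshot/restore API (untested; the repository name my_backup, its location, the snapshot name kibana-backup, and the restored index name .kibana-624-test are just placeholders for illustration):

# register a filesystem snapshot repository (the location must be listed in path.repo in elasticsearch.yml)
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/kibana"
  }
}

# snapshot only the .kibana index
PUT /_snapshot/my_backup/kibana-backup?wait_for_completion=true
{
  "indices": ".kibana"
}

# restore the snapshot under a different index name
POST /_snapshot/my_backup/kibana-backup/_restore
{
  "indices": ".kibana",
  "rename_pattern": ".kibana",
  "rename_replacement": ".kibana-624-test"
}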

Cheers,
Tim

I tried to back up the Kibana index as a snapshot and restore it under a new name.
The issue I ran into is that the .kibana alias then points to both indices (original and restored).
How can I remove the alias from the restored index?

Thanks

I think that should work with the index aliases API, by just pointing the alias at a different set of indices.
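
Something like this should drop the .kibana alias from the restored copy again (untested sketch; replace .kibana-624-test with the actual name of your restored index):

# detach the .kibana alias from the restored index only
POST /_aliases
{
  "actions": [
    {
      "remove": {
        "index": ".kibana-624-test",
        "alias": ".kibana"
      }
    }
  ]
}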

The original issue is hard to reproduce; today I cannot reproduce it on either 6.2.3 or 6.2.4.
But the mentioned error on the Discover page is gone; in 6.2.4 the document is shown correctly.

I will update (only) Kibana to 6.2.4 and we will see if the error comes up again over the next few days.
As I understand the Elasticsearch release notes, they do not affect my issue.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.