TypeError: xScale.rangeBand is not a function

Hi everybody!

I am trying to create a date histogram in the following scenario:

  • Kibana 4.1.1
  • Create a date histogram where the time field (myDate) is not the same as the index time field (@timestamp).
  • Split chart with a Filter aggregation on a timestamp range like @timestamp:[2015-07-20T05:00:00 TO 2015-07-20T19:00:00].
  • Nothing is shown in the visualization, and the console shows:

TypeError: xScale.rangeBand is not a function
at SVGRectElement. (:4601/index.js?_b=7489:136242)
at SVGRectElement.attrFunction (:4601/index.js?_b=7489:122485)
at :4601/index.js?_b=7489:122769
at d3_selection_each (:4601/index.js?_b=7489:122775)
at Array.d3_selectionPrototype.each (:4601/index.js?_b=7489:122768)
at Array.d3_selectionPrototype.attr (:4601/index.js?_b=7489:122468)
at ColumnChartFactory.ColumnChart.addStackedBars (:4601/index.js?_b=7489:136241)
at ColumnChartFactory.ColumnChart.updateBars (:4601/index.js?_b=7489:136209)
at ColumnChartFactory.ColumnChart.addBars (:4601/index.js?_b=7489:136185)
at HTMLDivElement. (:4601/index.js?_b=7489:136408)

After doing this, I noticed it still happens no matter which sub-aggregation I choose in the split chart section (Filter, Terms, Date Range).

Could anybody advise me on this?

Many thanks in advance.

@Giovanii_Mirko_Terra is networkDate a date field, or is it a string? Could you send a screenshot of your Settings screen where we can see the field type for networkDate? I've attached a screenshot below as an example.

Many thanks for the response, @stormpython! Yes, it is a date field. I also noticed that this problem is not reproducible when I create an index pattern with networkDate as the time field. It only happens if I use @timestamp as the time field and create a date histogram with a field other than @timestamp on the x-axis.

Zoom :slight_smile: :

@Giovanii_Mirko_Terra I can't seem to reproduce what you've described. Could you grab the latest version of Kibana and see if you are still seeing the same issue?

I have several date fields in my indices (besides @timestamp) and they still behave as expected. The error is occurring because, for whatever reason, the data being returned does not have an ordered.date key, which is what lets the vis library know that we are dealing with a time series data set. Instead, it is expecting the chart to be categorical.
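To make that failure mode concrete, here is a minimal, self-contained JavaScript sketch. It is not the actual Kibana or d3 source; the barWidth helper and the two scale stand-ins are hypothetical. The underlying fact is that in d3 v3 only ordinal scales expose rangeBand(), while time scales do not, so a renderer that takes the categorical code path while holding a time scale throws exactly this TypeError:

```javascript
// Minimal stand-ins for the two d3 v3 scale flavors (illustrative only).
function makeOrdinalScale() {
  return { rangeBand: function () { return 20; } }; // categorical scales have rangeBand()
}
function makeTimeScale() {
  return {}; // time scales have no rangeBand()
}

// Hypothetical bar-width helper mimicking the renderer's decision:
// treat the chart as a time series only when ordered.date, min, and max
// are all present; otherwise fall back to the categorical path.
function barWidth(ordered, xScale) {
  var isTimeSeries = ordered && ordered.date && ordered.min && ordered.max;
  if (isTimeSeries) {
    return 10; // time series: width derived from the interval, not rangeBand()
  }
  return xScale.rangeBand(); // categorical: throws if xScale is a time scale
}

// ordered.date is true, but min/max are missing -> categorical branch
// is taken against a time scale, reproducing the class of error above.
try {
  barWidth({ date: true, interval: 3600000 }, makeTimeScale());
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

The point of the sketch is only that a half-populated ordered object pushes the code down the wrong branch; the real Kibana logic is more involved.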

Interesting. I see the same error with Kibana 4.2.0, and even if I try the GitHub code (Kibana 4.3.0-snapshot). I notice this happens with every date field I use (even @timestamp). The scenario is the same: different time fields for the index time field and the x-axis (date histogram / split chart). So I think it is something in my environment. Could it be related to my mappings? I was not completely sure about this and just copied the elasticsearch-template.json from Logstash into config/templates in Elasticsearch, something like this:

{
"my-template":{
  "template" : "logstash*",
  "settings" : {
    "index.refresh_interval" : "5s"
  },
  "mappings" : {
    "_default_" : {
       "_all" : {"enabled" : true, "omit_norms" : true},
       "dynamic_templates" : [ {
         "message_field" : {
           "match" : "message",
           "match_mapping_type" : "string",
           "mapping" : {
             "type" : "string", "index" : "analyzed", "omit_norms" : true
           }
         }
       }, {
         "string_fields" : {
           "match" : "*",
           "match_mapping_type" : "string",
           "mapping" : {
             "type" : "string", "index" : "analyzed", "omit_norms" : true,
               "fields" : {
                 "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256}
               }
           }
         }
       } ],
       "properties" : {
         "@version": { "type": "string", "index": "not_analyzed" },
         "networkDate": { "type": "date", "index": "not_analyzed" },
         "geoip"  : {
           "type" : "object",
             "dynamic": true,
             "properties" : {
               "location" : { "type" : "geo_point" }
             }
         }
       }
    }
  }
}
}

Still reviewing; I'll let you know if something comes up. Thanks, @stormpython!

I see what you mean. In Kibana 4.1.1, if I print my data.ordered, it does not have min/max fields:

success scenario: Object {date: true, interval: Duration, min: Moment, max: Moment}

error scenario: Object {date: true, interval: Duration, endzones: false}

After debugging a little bit more, I noticed there is a _normalizeOrdered method that is in charge of ensuring min/max values on the ordered object when they don't come directly from the timepicker (in my case, when I use a different time field and x-axis field).
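For illustration, here is a rough JavaScript sketch of the idea behind such a normalization step. The names and details are hypothetical, not Kibana's actual implementation: when ordered.min/max were not supplied by the timepicker, derive them from the x values present in the series.

```javascript
// Hypothetical sketch of a _normalizeOrdered-style helper: if the chart
// is a date chart but min/max are missing, compute them from the data
// points so downstream code can treat the chart as a time series.
function normalizeOrdered(chart) {
  var ordered = chart.ordered;
  if (!ordered || !ordered.date || (ordered.min != null && ordered.max != null)) {
    return chart; // nothing to normalize
  }
  var xs = [];
  chart.series.forEach(function (series) {
    series.values.forEach(function (point) { xs.push(point.x); });
  });
  ordered.min = Math.min.apply(null, xs);
  ordered.max = Math.max.apply(null, xs);
  return chart;
}

// Example: ordered starts as { date: true, interval } only, like the
// "error scenario" object above; normalization fills in min/max.
var chart = {
  ordered: { date: true, interval: 3600000 },
  series: [{ values: [{ x: 100, y: 1 }, { x: 300, y: 2 }] }]
};
normalizeOrdered(chart);
console.log(chart.ordered.min, chart.ordered.max); // 100 300
```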

This works perfectly when there is just a single data object:

However, when I split the chart, the data object is an object with rows:

and since data.ordered is actually stored in data.rows[0].ordered, data.rows[1].ordered, and so on, the min/max values are never set by the _normalizeOrdered method, causing this error.
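One way to address this, sketched below with hypothetical names (this is not the actual Kibana code), is to apply the same normalization to every chart object, whether the response is a single chart or is split into rows/columns:

```javascript
// Hypothetical sketch: visit every chart object in the response,
// covering the single-chart, split-rows, and split-columns shapes.
function forEachChart(data, fn) {
  var charts = data.rows || data.columns || [data];
  charts.forEach(fn);
}

// Illustrative normalization: fill ordered.min/max from the x values
// when a date chart arrived without them (names are not Kibana's).
function normalizeOrdered(chart) {
  var ordered = chart.ordered;
  if (!ordered || !ordered.date || (ordered.min != null && ordered.max != null)) return;
  var xs = [];
  chart.series.forEach(function (s) {
    s.values.forEach(function (p) { xs.push(p.x); });
  });
  ordered.min = Math.min.apply(null, xs);
  ordered.max = Math.max.apply(null, xs);
}

// A split chart stores one chart per row; normalizing each one avoids
// the missing-min/max state that triggers the rangeBand error.
var split = {
  rows: [
    { ordered: { date: true, interval: 1 }, series: [{ values: [{ x: 5 }, { x: 9 }] }] },
    { ordered: { date: true, interval: 1 }, series: [{ values: [{ x: 2 }, { x: 7 }] }] }
  ]
};
forEachChart(split, normalizeOrdered);
console.log(split.rows[0].ordered.min, split.rows[1].ordered.max); // 5 7
```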

I think I am missing something, since you don't see this error, but I'm not sure what. I hope this info helps to reproduce it.

@Giovanii_Mirko_Terra Thanks for stepping through the code to show where the error is coming from. I will try again to reproduce and debug the issue.

Would you mind filing an issue in the Kibana GitHub repo (https://github.com/elastic/kibana/issues) so that the team can track the progress of this for you? You simply need to copy the last section you wrote and paste it into the issue box, along with giving it a descriptive title. If you like, you could even reference this Discuss ticket.

Thanks!

@Giovanii_Mirko_Terra

I went ahead and opened the issue, you can find it here: https://github.com/elastic/kibana/issues/5323.

I also went ahead and submitted a pull request to fix the issue which you can find here: https://github.com/elastic/kibana/pull/5325.

Thanks, @stormpython! I was not completely sure whether it was a bug, but the pull request fixed my problem. Were you actually able to reproduce it? I'll keep an eye on the pull request.

@Giovanii_Mirko_Terra Yes, I was able to reproduce the bug. As soon as we get it merged, I will backport it to the 4.1 and 4.2 branches.