Problem depicting stacked bar graph - not working


Hello everyone,

I hope somebody can help me. I'm trying to make a stacked plot in Vega-Lite. Everything appears to be fine, but for some reason the graph does not stack one bar (or area) on top of the other. Instead, they are drawn from the same origin on the x-axis and overlap.

This is the code I have in Kibana:

```
$schema: ""
mark: area
data: {
  url: {
    // Context == true means filters of the dashboard will be taken into account
    %context%: true
    // Specify on which field the time picker should operate
    %timefield%: @timestamp
    // Specify the index pattern to load data from
    index: replayerpattern6
    // This body will be sent to Elasticsearch's _search endpoint
    // You can use everything the ES Query DSL supports here
    body: {
      // Set the size to load 10000 documents
      size: 10000
      // Just ask for the fields we actually need for the visualization
      _source: ["@timestamp", "@version", "consumed", "consumed_text1", "consumed_text2", "free", "free_text1", "free_text2", "host", "logLevel", "logdate", "max", "max_text1", "max_text2", "message", "values", "values_names", "date_time"]
    }
  }
  format: { property: "hits.hits" }
}
transform: [
  // Convert the _source.@timestamp field to a date
  {
    calculate: "toDate(datum._source['@timestamp'])"
    as: "time"
  }
]
// Specify what data will be drawn on which axis
encoding: {
  x: {
    field: _source.date_time
    type: temporal
    // Hide the axis label for the x-axis
    axis: {
      domain: false
      tickSize: 0
      title: false
    }
  }
  y: {
    // Draw the values of each document on the y-axis
    field: _source.values
    type: quantitative
    axis: { title: "Resource" }
    aggregate: sum
  }
  color: {
    field: _source.values_names
    type: nominal
    scale: {
      domain: []
      range: []
    }
    legend: { title: "File type" }
  }
  shape: {
    field: ""
    type: nominal
  }
}
config: {}
```

And the data looks like this 🙂

```
time  values  values_names
1     512     300
1     512     194
2     512     95
2     .       .
3     .       .
3     .       .
4     .       .
4     .
```

I haven't tested it, but the `stack` property shouldn't be the string "true"; it should be an actual boolean value `true` or the string "zero".
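As an untested sketch (field names taken from your spec above), the y encoding with an explicit stack setting might look like:

```
y: {
  field: "_source.values"
  type: quantitative
  aggregate: sum
  // boolean true (or the string "zero"), not the string "true"
  stack: true
}
```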

Thanks for your reply flash1293. Unfortunately, after making the modifications it is still not working. Do you think it is related to the Vega-Lite version in Kibana, which is v2.6.0?
Also, what other things do you think I could check?

Thanks so much,

To make testing easier without access to your data, please follow the steps outlined here:

Flash1293, if I understand correctly, I have to use the Vega editor to try my data there? I think it is showing different results compared with the ones obtained in Kibana. Is there a possible solution for that?

Thanks so much,

How can I send you the file that contains the debugging I did on my data?

Thanks so much for your help. This is really important for me.

No, this is specifically for Kibana; just follow the instructions. It will inline the data in the spec so I can try it on my own machine.

Just copy/paste it here and put three backticks (```) in the empty line above and below it to format it nicely.

"{"$schema":"","width":300,"height":200,"mark":"area","data":{"format":{"property":"hits.hits"},"values":{"took":60,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":3906,"relation":"eq"},"max_score":1,"hits":[{"_index":"replayerpattern6","_type":"default","_id":"OOKjKnEBNvctf59-leY9","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.348Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:49,861 INFO main ICAPI: Default charset: UTF-8 (UTF-8)\r"}},{"_index":"replayerpattern6","_type":"default","_id":"OeKjKnEBNvctf59-leY9","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.352Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.hsoft.hmm-broker#hmm-encaps-remote;7.36.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"yeKjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.376Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.ingalys.fmk2-ref#ref-svc-imc;2.1.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"OuKjKnEBNvctf59-leY9","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.355Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.hsoft.hmm-imc#hmm-encaps-dict;7.16.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"yuKjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.377Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.ingalys.fmk2-soa#bridge-soa-auth;1.79.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"y-KjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.377Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: 
com.ingalys.fmk2-soa#bus-soa-clt;1.79.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"POKjKnEBNvctf59-leY9","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.360Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.hsoft.hsdk-core#hsdatamaster;7.54.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"zeKjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.378Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.ingalys.fmk2-soa#fmk2-commons;1.79.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"zuKjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.379Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.ingalys.fmk2-soa#fmk2-config;1.79.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"z-KjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.379Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.ingalys.fmk2-soa#fmk2-core;1.79.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"QOKjKnEBNvctf59-leY9","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.372Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.ingalys.fmk2-imc#imc2-permissions;3.63.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"0eKjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.380Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: 
com.ingalys.fmk2-soa#soa-api;1.79.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"QuKjKnEBNvctf59-leY9","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.390Z","host":"DESKTOP-OS7AALH","@version":"1","message":"\tfile.encoding=UTF-8\r"}},{"_index":"replayerpattern6","_type":"default","_id":"0-KjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.381Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.ingalys.fmk2-soa#soa-auth-spi;1.79.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"1OKjKnEBNvctf59-leU7","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.382Z","host":"DESKTOP-OS7AALH","@version":"1","message":"2020-03-18 09:33:50,055 INFO main VersionLogger: com.ingalys.fmk2-soa#soa-resource-finder;1.79.0\r"}},{"_index":"replayerpattern6","_type":"default","_id":"ROKjKnEBNvctf59-leY9","_score":1,"_source":{"@timestamp":"2020-03-30T08:51:33.402Z","host":"DESKTOP-OS7AALH","@versio"

Seems like you are hitting the max post size here - you can also upload it to a service like GitHub Gists or Pastebin and link it here.

Ohhh ok, let me try the latter. Thanks so much!

It still cuts off somewhere in the middle - please make sure to capture the whole thing.

Excuse me, I made a mistake. I think it is the complete version now, right?

flash1293, were you able to see the file that I attached yesterday?

Thanks in advance,

OK, that was actually a tricky one.

The Vega-Lite version currently used by Kibana does not deal well with dots in the field names used to build a stacked chart.

This worked for me (just copying the relevant part):

```
"transform": [
  {"calculate": "toDate(datum._source['@timestamp'])", "as": "time"},
  {"filter": {"field": "_source.values_names", "oneOf": ["max", "consumed"]}},
  {"calculate": "datum._source.values_names", "as": "values_names"},
  {"calculate": "datum._source.values", "as": "values"}
],
"encoding": {
  "x": {"field": "time", "type": "temporal"},
  "y": {"field": "values", "type": "quantitative", "aggregate": "sum"},
  "color": {"field": "values_names", "type": "nominal"}
},
"autosize": {"type": "fit", "contains": "padding"}
```

It first copies the relevant values into field names without dots and then works on top of those.

That being said, your Vega visualization is currently extremely inefficient - it pulls down all of your data and then does all of the heavy aggregation work in the browser. Imagine what happens once you have gigabytes of data: every time a user opens the dashboard, they would have to download those gigabytes and aggregate them locally.

I suggest looking into how Elasticsearch does aggregations - by specifying them in the query part instead of just streaming all of the documents, you can move the heavy computation into your Elasticsearch cluster, where it belongs (and which can deal with it much better). The "Dynamic Data with Elasticsearch and Kibana" tutorial section is a nice starting point.
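As a rough, untested sketch of that idea (the `1m` interval and the `.keyword` suffix are assumptions about your mapping; `fixed_interval` is `interval` on older Elasticsearch versions), the query body could ask Elasticsearch to bucket by time and name and do the summing, and `format.property` would then point at the aggregation buckets instead of `hits.hits`:

```
body: {
  // Return only aggregation results, no raw documents
  size: 0
  aggs: {
    over_time: {
      date_histogram: {field: "@timestamp", fixed_interval: "1m"}
      aggs: {
        by_name: {
          terms: {field: "values_names.keyword"}
          aggs: {
            total_values: {sum: {field: "values"}}
          }
        }
      }
    }
  }
}
format: {property: "aggregations.over_time.buckets"}
```

With this shape, the browser only receives a handful of pre-aggregated buckets rather than 10000 raw documents.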


So, if I understand correctly, you recommend that I query my data with an aggregation rather than pulling down all the data? You're right; I only started using Kibana and Elasticsearch two weeks ago, and I was unable to query the values of a single field there. However, if I only select the columns "values", "date_time", and "values_names" in the source, do you think that is good enough?

Thanks so much,


I already tried the solution and it was exactly what you mentioned. It was about the dots! I'm so happy, because I have been dealing with this for almost a week. Thanks so much flash1293! Now it works properly!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.