Line chart not displaying all dates


(Marco Goldin) #1

Hi, I'm using Kibana 5.4.
What could cause a simple line chart (Y-axis metric: Count aggregation; X-axis bucket: Date Histogram) not to extend through all the dates of the year? As you can see, the date range in the upper right is set from the beginning to the end of the year.

I have many documents in the index, each with a date field, and the dates run from January to December 2016. But the chart keeps showing only the first month.

{
        "_index": "transazioni",
        "_type": "prestito",
        "_id": "AVxeodlrc9vbQ3Ftg9pO",
        "_score": 1,
        "_source": {
          "titolo": "Il facilitatore",
          "IDMEDIA": "150096595",
          "IDENTE": "6",
          "autore": "Rizzo, Sergio",
          "IDUSER": "529922",
          "IDENTE_CONCEDENTE": "50",
          "@version": "1",
          "host": "localhost",
          "DATAINIZIO": "2016-01-06T20:42:05.000Z",
          "NARRATIVA": 1,
          "SAGGISTICA": 0,
          "IDSORGENTE": "94749",
          "PID": 1,
          "IDEDITORE": "9",
          "COD_CCE2": "JF",
          "message": """2016-01-06 20:42:05,150096595,330,529922,6,1,50,94749,9,Feltrinelli Editore,Il facilitatore,"Rizzo, Sergio",F,J,JF,Narrativa e argomenti correlati,Società e scienze sociali,Società e cultura: argomenti d'interesse generale,1,0""",
          "COD_CCE0": "F",
          "COD_CCE1": "J",
          "@timestamp": "2017-05-31T13:11:24.199Z",
          "IDTIPO": "330",
          "TITOLO0": "Narrativa e argomenti correlati",
          "NOME": "Feltrinelli Editore",
          "TITOLO2": "Società e cultura: argomenti d'interesse generale",
          "TITOLO1": "Società e scienze sociali"
        }
}

(Jon Budzenski) #2

The time filter in the top right uses the time field configured when the index pattern was added. I notice that "@timestamp": "2017-05-31T13:11:24.199Z" is out of range, so if it's your time field, for example, it would be cutting off documents. Is it possible this is what's causing the issue?

If you need a different time field, one option is to use Timelion with an explicit time field: `.es(index=transazioni, timefield=DATAINIZIO)`. Otherwise, I would recommend changing the time field of your index pattern, which can be done by re-adding it.


(Marco Goldin) #3

Thank you Jon, you're absolutely right. As you suggested, I deleted the index pattern and re-added it, setting my custom field as the default time field. But nothing changed; the time range keeps cutting off documents. I only need one time field for the dashboard, so I really don't need @timestamp.

Should I reindex everything and try to overwrite @timestamp with Logstash?


(Jon Budzenski) #4

A reindex shouldn't be necessary, but it is strange that re-adding the index pattern didn't help. Can you try a full page refresh? If that doesn't work, can you share the Request tab, reached by clicking the up arrow in the bottom left of the visualization?


(Marco Goldin) #5

Sure, and thank you for your help. I tried a full page refresh, but unfortunately it didn't help.
Just to test a different approach, I actually reindexed everything, this time targeting @timestamp directly in Logstash, and set "@timestamp" as the default time field in the Kibana index pattern configuration:

"DATAINIZIO": "2016-06-11 09:48:14",
"@timestamp": "2016-06-11T09:48:14.000Z"
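The mapping between the two fields is just a format change, which can be sketched in plain Python (an assumption here: DATAINIZIO is already in UTC, so no zone conversion is needed):

```python
from datetime import datetime, timezone

# Parse the source field and re-emit it in the ISO 8601 form that
# @timestamp uses (assuming DATAINIZIO is already UTC).
raw = "2016-06-11 09:48:14"
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
print(parsed.strftime("%Y-%m-%dT%H:%M:%S.000Z"))  # 2016-06-11T09:48:14.000Z
```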

Here's the request tab:

{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "analyze_wildcard": true,
            "query": "*"
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1451602800000,
              "lte": 1483225199999,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "must_not": []
    }
  },
  "_source": {
    "excludes": []
  },
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1M",
        "time_zone": "Europe/Berlin",
        "min_doc_count": 1
      }
    }
  }
}

Clearly I'm doing something wrong somewhere, but I really can't make head or tail of it at this point. Thanks.


(Jon Budzenski) #6

The query and aggregation look fine. As a next step I would start digging into the data. If you go to Discover for that time range, does the doc count look correct? If you check with a time range from April to December, are there > 0 documents? It might also be worth checking the most recent documents, or all of them, to see if there's a problem with the timestamp.
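
For reference, the two epoch_millis bounds in that request decode to exactly the year 2016 in the Europe/Berlin zone, so the time filter itself looks right. A quick sketch to verify (plain Python; a fixed UTC+1 offset is enough here, since both instants fall outside DST):

```python
from datetime import datetime, timezone, timedelta

# Decode the epoch_millis bounds from the request above.
# Europe/Berlin is UTC+1 on Jan 1 and Dec 31 (no DST).
berlin_winter = timezone(timedelta(hours=1))

gte = datetime.fromtimestamp(1451602800000 // 1000, tz=berlin_winter)
lte = datetime.fromtimestamp(1483225199999 // 1000, tz=berlin_winter)

print(gte.isoformat())  # 2016-01-01T00:00:00+01:00
print(lte.isoformat())  # 2016-12-31T23:59:59+01:00
```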

If things still look strange, we can take Kibana out of the equation and query Elasticsearch directly:

curl -XGET "http://localhost:9200/transazioni/_search" -H 'Content-Type: application/json' -d'
{
    "query": {
        "range" : {
            "@timestamp" : {
                "gte" : 1451602800000,
                "lte" : 1483225199999,
                "format": "epoch_millis"
            }
        }
    }
}'

(Marco Goldin) #7

Oh my, you're right. After querying Elasticsearch directly I got suspicious: 0 documents.
I hadn't noticed a syntax error in my Logstash conf, where I had typed "YYYY-MM-DD" instead of "yyyy-MM-dd". After re-ingesting the data and configuring the index pattern with my custom field as the default time field, everything worked fine.

It's a bit strange, though: instead of getting a parse error, all documents were indexed in January (01).
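A plausible explanation (stated as an assumption, since Logstash's date filter uses Joda-style patterns): in those patterns, `DD` means day-of-year, not day-of-month, so day values of 01-31 all resolve to dates in January. Python's `%j` directive is the analogous day-of-year specifier and shows the same effect:

```python
from datetime import datetime

# %j is Python's day-of-year directive, analogous to Joda's "D".
# A day value of 11 parsed as day-of-year lands on January 11,
# which is why every document ended up in the first month.
misparsed = datetime.strptime("2016-11", "%Y-%j")
print(misparsed.date())  # 2016-01-11
```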
Anyway, thank you so much for your time. Problem solved, my bad.
You can close the topic now.


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.