Create consistent graphs in Kibana

Hello, and thank you for your help!

Current context:
My Filebeat > Logstash > Elasticsearch > Kibana (5.1) stack is up and running.

A Bash script runs every hour and writes data into a text file named "q_compt". The data looks like this:

Syntax: date;script name;application name;data volume

Text file (q_compt):
[...]
20170127065959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;Eroe;3218.4
20170127065959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;UCa;15840
20170127065959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;RM;30270
20170127075959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;Eroe;8298.2
20170127075959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;UCa;14385
20170127075959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;RM;32320
20170127085959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;Eroe;8056.4
20170127085959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;UCa;20210
20170127085959;AC_hour-C-FWK-BMA-EDR-2-Zone-C;RM;45020
[...]

I split each line with: %{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{INT:volume}

My Logstash config is:

filter {
  if [type] == "q_compt" {
    grok {
      match => { "message" => "%{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{INT:volume}" }
    }

    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

My aim is to create Kibana graphs from this data, but how?

I would like 3 graphs (one per application), each showing the associated data volume. (Note: each graph must respect the date at the head of each line to stay consistent.)

Can you show me how to do what I want?

Did you already manage to ingest your data using the Logstash configuration you quoted? If so, you should easily be able to create a line chart on the index pattern that matches your data. To get a separate chart for each application, you can split the buckets using a Terms aggregation.
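As a side note: the date filter in your configuration matches a timestamp field with syslog-style patterns, but your grok pattern puts the timestamp into a field named date in yyyyMMddHHmmss form. Something along these lines should be closer (just a sketch based on your sample data, untested):

date {
  # parse e.g. "20170127065959" from the grok-extracted "date" field;
  # the parsed value is written to @timestamp by default
  match => [ "date", "yyyyMMddHHmmss" ]
}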

Y-Axis: the only field available is offset...

Split Chart: with a Terms aggregation I get the fields shown in the attached screenshot.

Why? :cry:

OK, the Split Lines part works (I refreshed my Filebeat index pattern).

But for the Y-Axis, when I choose Sum as the aggregation, I don't have any field available (just offset).

Most aggregations only work on numeric fields. That means that during ingestion the relevant data has to be indexed with one of the appropriate numeric types. In the Logstash configuration above, using %{NUMBER:volume} looks like it should help with that. See the grok filter documentation for more patterns.
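One caveat: grok extracts everything as a string unless you tell it otherwise, so %{NUMBER:volume} by itself still produces a string field. You can coerce the type directly in the pattern, or convert it afterwards with a mutate filter. A sketch (untested):

grok {
  # the ":float" suffix tells grok to emit volume as a number
  match => { "message" => "%{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{NUMBER:volume:float}" }
}

# or, alternatively, convert after the grok filter:
mutate {
  convert => { "volume" => "float" }
}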

OK, I changed my pattern from

{ "message" => "%{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{INT:volume}" }

to

{ "message" => "%{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{NUMBER:volume}" }

But nothing changed in Kibana. When I try to configure my Y-Axis, it's still offset only.

When I go into the index pattern's field settings, I have 2 volume fields:

  • volume (not aggregatable)
  • volume.keyword (aggregatable)

I don't understand where it's blocked.

In order for the changes to take effect, some or all of the following might be necessary:

  1. Restart Logstash to apply the new configuration.
  2. Delete the Elasticsearch index that contains the old mapping, in which the volume field is of type string; existing field mappings cannot be changed after an index has been created (see the example after this list).
  3. Click the "Reload" icon in the Kibana index pattern management screen to tell Kibana to re-read the mappings.
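For step 2, assuming your data ends up in the default Filebeat indices (filebeat-* is an assumption here; adjust to your actual index name), the deletion could look like this:

curl -XDELETE 'http://localhost:9200/filebeat-*'

Keep in mind this removes the already-ingested documents, so the data has to be shipped again; Filebeat keeps a registry of what it has already sent, so you may need to reset that too before re-ingesting.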
