Each log line is split with: %{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{INT:volume}
My Logstash config is:
filter {
  if [type] == "q_compt" {
    grok {
      match => { "message" => "%{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{INT:volume}" }
    }
    date {
      match => [ "date", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
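To illustrate what the grok pattern above extracts, here is a rough Python sketch using an equivalent regular expression with named groups. The sample line is a hypothetical example, not from the actual data:

```python
import re

# Hypothetical sample line in the format date;nom_compt;Application;volume
sample = "2016-05-10 12:00:00;compteur_1;App1;42"

# Rough regex equivalent of
# %{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{INT:volume}
pattern = re.compile(
    r"(?P<date>.*?);(?P<nom_compt>.*?);(?P<Application>.*?);(?P<volume>[+-]?\d+)"
)

fields = pattern.match(sample).groupdict()
print(fields["Application"])  # App1
print(fields["volume"])       # 42 -- note: captured as a string, not a number
```

Note that the capture is still a string at this point, which matters for the numeric-field discussion below.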
My aim is to create Kibana graphs from this data, but how?
I would like 3 graphs (one per application), each showing the associated data volume. (Be careful: the graphs must respect the date at the head of each line to stay consistent.)
Did you already manage to ingest your data using the Logstash configuration you quoted? If so, you should easily be able to create a line chart on the index pattern that matches your data. To get a separate chart for each application, you can split the buckets with a Terms aggregation on the Application field.
Most aggregations only work on numeric fields. That means that during ingestion the relevant data has to be indexed with one of the numeric types. In the Logstash configuration above, using %{NUMBER:volume:int} (the :int suffix coerces the captured value to an integer instead of a string) should achieve that. See the grok filter documentation for more patterns.
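As a sketch, a minimal variant of the filter from the question with the :int coercion applied (field names kept as in the original) might look like:

```
filter {
  if [type] == "q_compt" {
    grok {
      # :int coerces the captured value to an integer instead of a string
      match => { "message" => "%{DATA:date}[;]%{DATA:nom_compt}[;]%{DATA:Application}[;]%{NUMBER:volume:int}" }
    }
  }
}
```

With volume indexed as an integer, sum or average aggregations on it become possible in Kibana.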
In order for the changes to take effect, some or all of the following might be necessary:
Restart Logstash to apply the new configuration.
Delete the Elasticsearch index that contains the old mapping, in which the volume field is of type string. Existing field mappings cannot be changed after an index has been created.
Click the "Reload" icon in the Kibana index pattern management screen to tell Kibana to re-read the mappings.
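The index deletion step can be done with the Elasticsearch delete-index API; the index name below is a hypothetical example, so substitute your own:

```shell
# Hypothetical index name; replace with the index that holds your data.
curl -X DELETE "localhost:9200/logstash-q_compt"

# After Logstash has re-ingested the data, check that "volume" is now numeric:
curl "localhost:9200/logstash-q_compt/_mapping?pretty"
```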