Thank you Badger. Since your answer, some data uploads have been done by first creating an index template directly in Elasticsearch. Then, for now, Filebeat and Logstash do the job (pushing the CSV file with the mapping specified in the index template). This is the very basic level (i.e. manual operations versus data streams or automated updates) at this stage, and it is interesting to move forward.
Example at the basic level:
1-Creation of an index template :
12 fields: ["AEV", "Affermage", "Departement", "SujetAppel", "NbAppels", "Resolutions", "NonResolus", "DelaiTraitement", "Localisation", "Taux", "Mois", "Annee"]
Type keyword for "AEV", "Affermage", "Departement", "SujetAppel", "Annee".
Type long for "NbAppels", "Resolutions", "NonResolus", "DelaiTraitement".
Type date for "Mois", double for "Taux", geo_point for "Localisation".
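As a sketch, the index template above could be created from the Kibana Dev Tools console like this (the template name "calls_template" is an assumption here; the index pattern matches the one used later in the Logstash output section):

```json
PUT _index_template/calls_template
{
  "index_patterns": ["index_pattern_name-*"],
  "template": {
    "mappings": {
      "properties": {
        "AEV":             { "type": "keyword" },
        "Affermage":       { "type": "keyword" },
        "Departement":     { "type": "keyword" },
        "SujetAppel":      { "type": "keyword" },
        "Annee":           { "type": "keyword" },
        "NbAppels":        { "type": "long" },
        "Resolutions":     { "type": "long" },
        "NonResolus":      { "type": "long" },
        "DelaiTraitement": { "type": "long" },
        "Mois":            { "type": "date" },
        "Taux":            { "type": "double" },
        "Localisation":    { "type": "geo_point" }
      }
    }
  }
}
```

Any index created with a name matching "index_pattern_name-*" will then pick up this mapping automatically.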
2-Creation of a config file :
root@VMDEV:/etc/logstash/conf.d# touch logstashcall.conf
root@VMDEV:/etc/logstash/conf.d# gedit logstashcall.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
  file {
    path => ["…../filename.csv"]
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["AEV", "Affermage", "Departement", "SujetAppel", "NbAppels", "Resolutions", "NonResolus", "DelaiTraitement", "Localisation", "Taux", "Mois", "Annee"]
    separator => ";"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "index_pattern_name-%{+YYYY.MM.dd}"
    #document_id => "%{IDAEP}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout { codec => rubydebug }
}
In the output section, write the index pattern name that was used when creating the index template.
Badger, maybe it is not necessary to give the file path again in the Logstash config, since it is already set in the filebeat.yml file?
3-Update of the filebeat.yml file and restart of filebeat
-Start of filebeat => root@VMDEV: service filebeat start
-Update .yml file => root@VMDEV:/etc/logstash/conf.d# gedit /etc/filebeat/filebeat.yml
-Restart filebeat => root@VMDEV:/etc/logstash/conf.d# service filebeat restart
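For reference, the relevant part of filebeat.yml would look roughly like this (the CSV path is a placeholder, and the log input type shown is for older Filebeat versions; newer versions use filestream):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /path/to/filename.csv

output.logstash:
  hosts: ["localhost:5044"]
```

The output.logstash port must match the port declared in the beats input of the Logstash config (5044 here).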
4-Run of the config file
root@VMDEV:/usr/share/logstash# bin/logstash -f /etc/logstash/conf.d/logstashcall.conf
Then you will see your index in the indices list (Index Management section of Kibana, the ELK web interface).
Test before run if needed :
root@VMDEV: bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstashcall.conf
NB: this is the basic level; using a document_id in the config file, taken from the .csv (or JSON or other formats), will be better for data updates later.
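For example, if the CSV had a unique identifier column such as IDAEP (hypothetical here, like the commented line in the config above), the elasticsearch output could be sketched as:

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "index_pattern_name-%{+YYYY.MM.dd}"
    # Use the CSV's unique key as the document _id,
    # so re-pushing the file updates documents instead of duplicating them.
    document_id => "%{IDAEP}"
  }
}
```

With a stable document_id, running the pipeline again on a corrected CSV overwrites the existing documents rather than creating new ones.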