Mutate convert - Logstash shuts down

This config doesn't work (the mutate/convert filters cause Logstash to shut down). Could somebody explain how to resolve this, please?


# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  stdin {
    type => "stdin-type"
  }

  file {
    path => ["/var/dev/manobi/VolDistri_102020.csv"]
    start_position => "beginning"
  }
}

filter {
  csv {
    columns =>["IDAEP","NomAEP","Affermage","Departement","Commune","Arrondissement","LocGeo","LocPoint","VolumeDistribue","Mois","Année"]
    separator => ";"
  }
  mutate {
    rename => {"LocGeo","LocGeoPt"}
  }
  mutate {
    rename => {"VolumeDistribue","VolDistri"}
  }
  mutate {
    convert => {"LocGeoPt","geo_point"}
  }
  mutate {
    convert => {"VolumeDistri","long"}
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{+YYYY.MM.dd}"
    #document_id => "%{IDAEP}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout { codec => rubydebug }
}

Without the mutate/convert part of the config, the data are imported correctly, so the problem comes from these mutate/convert filters.

Regards.

mutate+rename takes a hash. Try

mutate {
    rename => {
        "LocGeo" => "LocGeoPt"
        "VolumeDistribue" => "VolDistri"
    }
}

Even if you fixed this to be a hash, I would expect you to get a translation missing: en.logstash.agent.configuration.invalid_plugin_register error. That is because Logstash does not have a geo_point type.

You will need to set the mapping of the index. You can do that directly, as shown in the geo_point documentation. Or you can use an index template to set the mapping. Then start over with a new index since the mapping is applied when the index is created.
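
For instance, a minimal sketch of creating the index with the mapping up front (the index name here is a placeholder and must match the name your elasticsearch output actually writes to):

PUT voldistri-2020.11.01
{
  "mappings": {
    "properties": {
      "LocGeoPt":  { "type": "geo_point" },
      "VolDistri": { "type": "long" }
    }
  }
}

With the mapping in place you can drop the convert => geo_point mutate entirely; convert only supports types such as integer, float, string and boolean, and Elasticsearch will index the field as a geo_point from the mapping, provided the CSV value is in a format geo_point accepts (for example "lat,lon").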

Thank you Badger. Since your answer, some data uploads have been done by first creating an index template directly in Elasticsearch. For now, Filebeat and Logstash are doing the job (pushing the CSV file with the mapping specified in the index template). It's still very basic at this stage (i.e. manual operations rather than data streams or automated updates), and it will be interesting to move forward from here.

Example at this basic level:

1-Creation of an index template :

12 fields: ["AEV", "Affermage", "Departement", "SujetAppel", "NbAppels", "Resolutions", "NonResolus", "DelaiTraitement", "Localisation", "Taux", "Mois", "Annee"]
Type keyword for "AEV", "Affermage", "Departement", "SujetAppel", "Annee".
Type long for "NbAppels", "Resolutions", "NonResolus", "DelaiTraitement".
Type date for "Mois", double for "Taux", geo_point for "Localisation" (see the template sketch below).
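
A sketch of what such a template could look like, created directly from Kibana Dev Tools (the template name is a placeholder; this assumes Elasticsearch 7.8+ and the composable index template API):

PUT _index_template/call_template
{
  "index_patterns": ["index_pattern_name-*"],
  "template": {
    "mappings": {
      "properties": {
        "AEV":             { "type": "keyword" },
        "Affermage":       { "type": "keyword" },
        "Departement":     { "type": "keyword" },
        "SujetAppel":      { "type": "keyword" },
        "Annee":           { "type": "keyword" },
        "NbAppels":        { "type": "long" },
        "Resolutions":     { "type": "long" },
        "NonResolus":      { "type": "long" },
        "DelaiTraitement": { "type": "long" },
        "Mois":            { "type": "date" },
        "Taux":            { "type": "double" },
        "Localisation":    { "type": "geo_point" }
      }
    }
  }
}

The index_patterns entry has to match the index name written by the Logstash output below (index_pattern_name-%{+YYYY.MM.dd}), so the mapping is applied automatically each time a new daily index is created.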

2-Creation of a config file :

root@VMDEV:/etc/logstash/conf.d# touch logstashcall.conf
root@VMDEV:/etc/logstash/conf.d# gedit logstashcall.conf

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats { port => 5044 }
  file {
    path => ["…../filename.csv"]
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["AEV", "Affermage", "Departement", "SujetAppel", "NbAppels", "Resolutions", "NonResolus", "DelaiTraitement", "Localisation", "Taux", "Mois", "Annee"]
    separator => ";"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "index_pattern_name-%{+YYYY.MM.dd}"
    #document_id => "%{IDAEP}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout { codec => rubydebug }
}

In the output section, write the index pattern name used for the creation of the index template.
Badger, maybe it's not necessary to give the file path again here, since it is already mentioned in the filebeat.yml file?

3-Update of the filebeat.yml file and restart of Filebeat

-Start of filebeat => root@VMDEV: service filebeat start

-Update .yml file => root@VMDEV:/etc/logstash/conf.d# gedit /etc/filebeat/filebeat.yml

-Restart filebeat => root@VMDEV:/etc/logstash/conf.d# service filebeat restart
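
For reference, a sketch of the relevant parts of filebeat.yml for this setup (the CSV path is a placeholder):

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/dev/manobi/*.csv

# Ship events to Logstash (the port must match the beats input in the Logstash config),
# and keep the output.elasticsearch section commented out.
output.logstash:
  hosts: ["localhost:5044"]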

4-Run of the config file

root@VMDEV:/usr/share/logstash# bin/logstash -f /etc/logstash/conf.d/logstashcall.conf

Then you will see your data in the indices (Index Management section of Kibana's web interface).

To test the config before running it, if needed:
root@VMDEV: bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf

NB: this is the basic level; using a document_id in the config file, taken from the .csv (or json or other formats), will be better for later data updates.
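
A sketch of what that could look like in the output section, assuming the CSV has a column that uniquely identifies each row (IDAEP is used here only as an example):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "index_pattern_name-%{+YYYY.MM.dd}"
    # Re-importing the same file then updates existing documents
    # instead of creating duplicates.
    document_id => "%{IDAEP}"
  }
}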

If you are reading the file with Filebeat, then do not also use a file input to read all the data a second time.
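
In other words, when Filebeat reads the CSV, the Logstash input section only needs the beats input, e.g.:

input {
  beats { port => 5044 }
}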

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.