Ingest several CSV files and create a geo_point

Hello everyone,
I'm new to ELK and have some questions about ingesting CSV files; I couldn't find what I was looking for on the forum.

I want to ingest several CSV files into Elasticsearch, each in its own index, and display their content on a map in Kibana. The CSVs have almost the same schema (the headers are identical, but the type of a given column might change from one file to another).

I managed to ingest one of the CSVs using Logstash with the following configuration:

(I simplified the CSV header for readability.)

input {
    file {
        path => "path_to_my_first_csv"
        start_position => "beginning"
    }
}

filter {
    csv {
        columns => ["col1","col2","longitude","latitude"]
        separator => ","
    }

    mutate {
        convert => {
            "latitude"  => "float"
            "longitude" => "float"
        }
    }

    mutate {
        # "lat,lon" string that the mapping turns into a geo_point
        add_field => { "geo_location" => "%{latitude},%{longitude}" }
    }
}

output {
    stdout {
        codec => rubydebug
    }

    elasticsearch {
        action => "index"
        hosts => ["my_address:9200"]
        index => "my_first_data"
    }
}

I added the "geo_location" field so that it can be mapped as a geo_point.

For the mapping, I created an index template in Stack Management > Index Management with the following mappings section:

"mappings": {
	"dynamic": "true",
	"dynamic_date_formats": [
		"strict_date_optional_time",
		"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"
	],
	"dynamic_templates": [],
	"date_detection": true,
	"numeric_detection": false,
	"properties": {
		"col1": {
		"type": "keyword"
		},
		"col2": {
		"type": "keyword"
		},
		"geo_location": {
		"type": "geo_point"
		},
		"latitude": {
		"type": "float"
		},
		"longitude": {
		"type": "float"
		}
	}
}

After this, I was able to retrieve my data and display it on a map in Kibana.

My first question: is this the correct way to ingest a single CSV and create a geo_point from latitude and longitude coordinates? Is there a more efficient way to ingest CSV files?
Right now it takes ~25 minutes to ingest a CSV with 2,376,299 rows and 40 columns (~400 MB on disk). I was thinking of using moshe/elasticsearch_loader on GitHub (a tool for batch loading JSON, Parquet, CSV, and TSV data files into Elasticsearch), but I'm not sure this is the right approach.
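One thing I suspect is that the stdout output with the rubydebug codec adds a lot of overhead once there are millions of rows, so for the full load I was planning to keep only the elasticsearch output, roughly like this (I also read that pipeline.workers and pipeline.batch.size can be tuned in logstash.yml, but I haven't tried that yet):

output {
    # stdout { codec => rubydebug } removed for the full load,
    # since printing every event to the console seems to slow things down
    elasticsearch {
        action => "index"
        hosts => ["my_address:9200"]
        index => "my_first_data"
    }
}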

Secondly, I wrote a Python script that generates the mappings for a given CSV, and I would like to use the generated mapping instead of creating a template manually for each CSV. How can I do this?
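What I had in mind is pointing the elasticsearch output at the file my script generates, roughly like the sketch below (the path and template name are placeholders, and I'm not sure these options are meant to be used this way):

output {
    elasticsearch {
        hosts => ["my_address:9200"]
        index => "my_first_data"
        # let Logstash install the template generated by my script;
        # I assume the file needs to be a complete template body
        # (index_patterns + mappings), not only the "mappings" section
        manage_template => true
        template => "/path/to/generated_template_my_first_data.json"
        template_name => "my_first_data"
        template_overwrite => true
    }
}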

Finally, since the Logstash configuration above is enough to ingest each of the CSVs, how can I reuse it to ingest all of them while applying their different mappings? I would like a separate index for each CSV.
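Here is roughly what I was imagining: one glob in the file input and an index name derived from the file name (the folder path is a placeholder, and I'm not sure which event field holds the source path on recent Logstash versions):

input {
    file {
        # one glob instead of one configuration per file
        path => "path_to_my_csv_folder/*.csv"
        start_position => "beginning"
    }
}

filter {
    csv {
        columns => ["col1","col2","longitude","latitude"]
        separator => ","
    }

    mutate {
        convert => {
            "latitude"  => "float"
            "longitude" => "float"
        }
    }

    mutate {
        add_field => { "geo_location" => "%{latitude},%{longitude}" }
    }

    grok {
        # derive the index name from the source file name; depending on the
        # Logstash version / ecs_compatibility setting, the path is in "path"
        # or in [log][file][path]
        match => { "[log][file][path]" => "%{GREEDYDATA}/%{DATA:[@metadata][index_name]}\.csv" }
    }

    mutate {
        # index names must be lowercase
        lowercase => [ "[@metadata][index_name]" ]
    }
}

output {
    elasticsearch {
        hosts => ["my_address:9200"]
        # one index per CSV file
        index => "%{[@metadata][index_name]}"
    }
}

My understanding is that if I create one index template per file beforehand (for example with the script from my second question), each with an index_patterns entry matching the corresponding index name, the right mapping would be applied automatically when each index is created, but I'm not sure this is the recommended way.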
