Hey guys,
I have two questions regarding indexing large csv files into Elasticsearch and mapping the data:
- Should I first create an index and a mapping and then send the csv data to Elasticsearch? If yes, could someone provide a code snippet / template for creating an index and mapping (corresponding to the csv header) with the elasticsearch-py package?
- I use helpers.bulk() to index my csv data, but I am then not able to control from Python how the data is mapped in Elasticsearch. Could someone also provide a code snippet for indexing csv files into a specific index, so that the csv data conforms to the mapping of that index?
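For the first question, here is roughly what I have in mind — just a sketch, assuming elasticsearch-py 8.x and made-up column names (`name`, `age`, `city`); please correct me if the approach itself is wrong:

```python
def mapping_from_header(header, type_overrides=None):
    """Build a flat Elasticsearch mapping from csv column names.
    Every column defaults to "keyword" unless overridden."""
    type_overrides = type_overrides or {}
    return {
        "properties": {
            col: {"type": type_overrides.get(col, "keyword")}
            for col in header
        }
    }

def create_index(index_name, mapping, es_url="http://localhost:9200"):
    # import inside the function so the helper above also works
    # without the elasticsearch package installed
    from elasticsearch import Elasticsearch
    es = Elasticsearch(es_url)
    # elasticsearch-py 8.x keyword argument; on 7.x you would pass
    # body={"mappings": mapping} instead
    es.indices.create(index=index_name, mappings=mapping)

# hypothetical csv header -- replace with your real columns
header = ["name", "age", "city"]
mapping = mapping_from_header(header, type_overrides={"age": "integer"})
# create_index("people", mapping)  # needs a running cluster
```

The index name `people` and the column types are placeholders I invented for the example.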
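For the second question, my current attempt looks roughly like this (same assumptions: elasticsearch-py 8.x and the hypothetical columns `name`, `age`, `city`):

```python
import csv

def csv_actions(path, index_name):
    """Yield one bulk action per csv row. csv.DictReader uses the
    header row as keys, so the field names automatically line up
    with a mapping built from the same header."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if "age" in row:                  # hypothetical integer column:
                row["age"] = int(row["age"])  # coerce so it matches the mapping
            yield {"_index": index_name, "_source": row}

def index_csv(path, index_name, es_url="http://localhost:9200"):
    from elasticsearch import Elasticsearch, helpers
    es = Elasticsearch(es_url)
    # helpers.bulk consumes the generator lazily, in chunks,
    # so a large csv file is never fully loaded into memory
    helpers.bulk(es, csv_actions(path, index_name))

# index_csv("data.csv", "people")  # needs a running cluster
```

Is this the right way to make the indexed documents match the mapping, or is there a better pattern?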
Thanks in advance!