I have some documents; they all belong to the same index and type and have
the same fields, for example:
{"nation" : "China", "city" : "Tianjin", "year" : ["2011", "2012", "2013"]}
{"nation" : "USA", "city" : "Califorlia", "year" : ["2012", "2014", "2015"]}
{"nation" : "China", "city" : "Beijing", "year" : ["2012", "2014", "2015"]}
If I want to index them into my ES server all at once, I use the bulk
interface like this:
# curl -s -XPOST localhost:9200/_bulk --data-binary @data_file
The data_file looks like this:
{"index" : {"_index" : "country", "_type" : "city"}}
{"nation" : "China", "city" : "Tianjin", "year" : ["2011", "2012", "2013"]}
{"nation" : "USA", "city" : "California", "year" : ["2012", "2014", "2015"]}
{"nation" : "China", "city" : "Beijing", "year" : ["2012", "2014", "2015"]}
But it only indexes the first document. If I change the data_file to the
following:
{"index" : {"_index" : "country", "_type" : "city"}}
{"nation" : "China", "city" : "Tianjin", "year" : ["2011", "2012", "2013"]}
{"index" : {"_index" : "country", "_type" : "city"}}
{"nation" : "USA", "city" : "Califorlia", "year" : ["2012", "2014", "2015"]}
{"index" : {"_index" : "country", "_type" : "city"}}
{"nation" : "China", "city" : "Beijing", "year" : ["2012", "2014", "2015"]}
it works, but the data_file becomes much bigger.
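For now, instead of storing the action lines in the file, I generate them on the fly and pipe the result into the bulk API. This is just a rough sketch of what I do (the awk command is my own workaround, and the index/type names are the ones from my example above):
# awk '{print "{\"index\" : {\"_index\" : \"country\", \"_type\" : \"city\"}}"; print}' data_file \
    | curl -s -XPOST localhost:9200/_bulk --data-binary @-
This keeps the file itself small, but it still feels like a workaround.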
Is there a better way to import JSON documents to ES? Thanks very much.