Thanks for your quick reply; I appreciate your time and help. Is there any other way to do this? My document is huge: it has almost 30 fields, and a single JSON file contains 1 million records. Transforming the document is practically impossible because the transformer Java class gets an OutOfMemoryError.
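For what it's worth, a common way to avoid the OutOfMemoryError is to stream the file instead of loading it whole. Below is a minimal sketch using Jackson's streaming parser, assuming the file is a top-level JSON array; the file name and the `transformAndIndex` handler are placeholders, not anything from the original post:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;
import java.io.IOException;

public class StreamingTransformer {
    public static void main(String[] args) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        JsonFactory factory = mapper.getFactory();

        // Stream the file token by token instead of reading it all into memory.
        try (JsonParser parser = factory.createParser(new File("input.json"))) {
            // Assumption: the file is a top-level JSON array of documents.
            if (parser.nextToken() != JsonToken.START_ARRAY) {
                throw new IllegalStateException("Expected a top-level JSON array");
            }
            // Only one document is materialized in memory at a time.
            while (parser.nextToken() == JsonToken.START_OBJECT) {
                JsonNode doc = mapper.readTree(parser);
                transformAndIndex(doc); // hypothetical per-document handler
            }
        }
    }

    private static void transformAndIndex(JsonNode doc) {
        // Transform the ~30 fields here and send the result to Elasticsearch,
        // ideally batched into bulk requests rather than one request per doc.
        System.out.println(doc); // placeholder
    }
}
```

With this pattern, memory use stays roughly constant regardless of how many documents the file contains.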
Are you saying that a single Elasticsearch document contains 1 million entries?
Or did you mean that you have a bulk file containing 1 million individual JSON docs?
BTW, you can have a look at Logstash to parse your source file and transform it into something else.
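For example, assuming the source is a newline-delimited JSON file (the path, host, and index name below are placeholders), a minimal Logstash pipeline could look like this:

```
input {
  file {
    path => "/path/to/source.json"    # placeholder path
    start_position => "beginning"
    sincedb_path => "/dev/null"       # re-read the file from the start each run
    codec => "json"                   # one JSON document per line
  }
}
filter {
  # Transform, rename, or drop fields here, e.g.:
  # mutate { rename => { "old_field" => "new_field" } }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]  # placeholder host
    index => "my-index"                 # placeholder index name
  }
}
```

Logstash streams and batches documents on its own, so it sidesteps the memory problem you hit with the custom transformer class.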