My SW context: 5 nodes running ES 1.4 + 1 node running ES 1.3.
I need to read from the ES 1.4 cluster (one index, with all its data indexed) and write it to the ES 1.3 node.
I have Java code that does the job, but it first writes to HDFS, and I then need a second job that reads from HDFS and sends the data to ES 1.3.
I was wondering whether it is possible to do all of this in one job, without storing on HDFS, by interfacing both clusters in the same job and the same Java code.
The ES documentation mentions (but within the same job):
es.resource.read (defaults to es.resource) Elasticsearch resource used for reading (but not writing) data. Useful when reading and writing data to different Elasticsearch indices within the same job.
es.resource.write (defaults to es.resource) Elasticsearch resource used for writing (but not reading) data.
PS: I tried the backup/restore option from ES, without success.
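One thing worth noting: es.resource.read and es.resource.write switch indices within a single cluster (one es.nodes setting), so on their own they don't bridge two clusters. The elasticsearch-spark connector, however, accepts a separate configuration map per operation, which lets one job read from one cluster and write to another without an HDFS detour. A minimal sketch, assuming the JavaEsSpark API from elasticsearch-hadoop; all host and index names below are placeholders:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.elasticsearch.spark.rdd.api.java.JavaEsSpark;

public class EsToEsCopy {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("es14-to-es13-copy");
        JavaSparkContext jsc = new JavaSparkContext(sparkConf);

        // Read side: the ES 1.4 cluster (host name is a placeholder).
        Map<String, String> readCfg = new HashMap<>();
        readCfg.put("es.nodes", "es14-node1:9200");
        readCfg.put("es.resource.read", "source-index/doc");

        // Write side: the single ES 1.3 node (host name is a placeholder).
        Map<String, String> writeCfg = new HashMap<>();
        writeCfg.put("es.nodes", "es13-node:9200");
        writeCfg.put("es.resource.write", "target-index/doc");

        // Each call takes its own config map, so the read and the write
        // can target different clusters inside the same job.
        JavaPairRDD<String, Map<String, Object>> docs =
                JavaEsSpark.esRDD(jsc, "source-index/doc", readCfg);
        JavaEsSpark.saveToEs(docs.values(), "target-index/doc", writeCfg);

        jsc.stop();
    }
}
```

This is a sketch of the approach, not a tested implementation against your clusters; with ES versions this old, check that the elasticsearch-hadoop release you use supports both 1.3 and 1.4.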