Export/Transfer from multiple nodes to one single node


#1

Hi,

We are currently running ES 2.1 in production and experiencing issues when we need to sort documents.

Our cluster is made of 3 nodes (heap: 15 GB). We have 3 indexes (5 shards each). One of our indexes contains more than 310 000 000 documents.

I would like to test running ES 6.3 on another server (and ONLY ONE server). The problem is that I have only 1 server available (4 vCPU, 65 GB...) for these tests. They are not intended to demonstrate performance; this is more of a feasibility test.

The question: how can I get the content of ONE index onto ONE node?

Many thanks for your help


(David Turner) #2

I think reindex from remote is going to work for you here. Start up a 6.3 node and tell it to reindex the data that you want from your existing cluster.
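As an illustration (the hostname and index name below are placeholders, not from this thread), a reindex-from-remote request issued against the 6.3 node looks roughly like this:

```json
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://old-cluster:9200"
    },
    "index": "my_index"
  },
  "dest": {
    "index": "my_index"
  }
}
```

The 6.3 node must also whitelist the remote host in its `elasticsearch.yml`, e.g. `reindex.remote.whitelist: "old-cluster:9200"`, or the request will be rejected.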


#3

Thanks David. The only problem I have is that ES 2.1 is running in PROD (an isolated domain), and my other server is in DEV (secured). The two domains may not talk to each other. The only way is to transfer the data via files...


(David Turner) #4

In which case you will first need to use snapshot and restore to snapshot your production data somewhere that your development cluster can read.
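For reference, a sketch of the snapshot side (the repository name and path are examples; on the production nodes the path must also be listed under `path.repo` in `elasticsearch.yml`):

```json
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/es"
  }
}

PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true
```

The repository directory can then be copied to the development machine by file, registered there as a repository, and restored with `POST _snapshot/my_backup/snapshot_1/_restore`.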

This is a bit tricky because you cannot restore a 2.1 snapshot directly into a 6.3 cluster since the versions are too different. I would start a 2.x node in your development environment, restore the snapshot into that, and then use reindex-from-remote to upgrade it from 2.x to 6.x. Although you only have one development machine available, it should be no problem to run a 2.x node and a 6.x node side-by-side. The main difficulty is to keep track of which one is which - I recommend setting http.port explicitly on each so you know which is listening on which port, and setting path.data on each so their data directories don't get entangled. You might also want to set discovery.zen.ping.unicast.hosts: [] to stop them discovering each other and complaining about being incompatible.
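A minimal sketch of the two side-by-side configurations (the ports and paths are example values, not requirements):

```yaml
# elasticsearch.yml for the 2.x node
http.port: 9200
path.data: /var/data/es2
discovery.zen.ping.unicast.hosts: []
```

```yaml
# elasticsearch.yml for the 6.x node
http.port: 9201
path.data: /var/data/es6
discovery.zen.ping.unicast.hosts: []
```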


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.