I would like to know how to move Elasticsearch indices from one server to another, where the two servers run different versions of the software. Is there a plugin for this?
Could I do it with the ES head plugin?
Use snapshot and restore - https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
Or Logstash with something like this - https://gist.github.com/markwalkom/8a7201e3f6ea4354ae06
You can also shut down the current node and copy the data directory over, but be careful with that.
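For the Logstash route, a minimal reindex pipeline along the lines of that gist might look like the sketch below. This assumes a Logstash 1.5-era plugin config; the host names and index name are placeholders you would substitute with your own:

```
input {
  elasticsearch {
    host => "old-cluster.example.com"   # hypothetical source host
    index => "my_index"
    size => 500
    scroll => "5m"
    docinfo => true                     # keep _index/_type/_id in @metadata
  }
}
output {
  elasticsearch {
    host => "new-cluster.example.com"   # hypothetical target host
    protocol => "http"
    index => "%{[@metadata][_index]}"
    document_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
}
```

Because it goes through the document API rather than copying files, this approach also works across different Elasticsearch versions.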
But would snapshot and restore work if the target server is on another machine, in a different cluster?
Yes, as long as you aren't going from a newer version to an older one.
I'm trying with >

curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/etc/elasticsearch/backups/",
    "compress": true
  }
}'
but I get > No handler found for uri [/_snapshot/my_backup] and method [PUT]
...Sorry, I'm a complete newbie with Elasticsearch.
Are you using elasticsearch 0.90?
I have tried this Logstash configuration, but with a large amount of data it gets stuck at 50000 events, and I have no idea why.
In this image more events should appear between 3.00 and 23.00, but I'm receiving no data. It seems as if it received only the first 50000 messages and then stopped...
And I'm getting warnings in Kibana like --> "Courier Fetch: 1 of 145 shards failed."
No, it's 1.1.1.
I had the same problem on 1.7. I solved it by switching to the right verb: POST instead of PUT.
https://www.elastic.co/guide/en/elasticsearch/reference/1.7/modules-snapshots.html
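For reference, the full flow on a 1.x cluster is roughly the following sketch (repository name, snapshot name, and path are illustrative; note that from 1.6 onward the repository location must also be whitelisted under `path.repo` in elasticsearch.yml, otherwise registration fails):

```
# Register a shared-filesystem repository
# (the location must be visible to every node in the cluster)
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup",
    "compress": true
  }
}'

# Take a snapshot and wait for it to finish
curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

# On the target cluster: register the same repository, then restore
curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
```

The restore can run on a completely separate cluster, as long as that cluster's version is not older than the one that took the snapshot.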
Andrea
I am facing a similar challenge with Elasticsearch 2.3.2.
As part of a strict deployment process, I have to deploy ES indices to test, staging, and production environments.
The index binaries are created on a dev environment, and we copy them to all platforms as you described (shutting down the ES node/cluster and copying the data over).
My question is, first of all: is this a supported approach?
And second, what do you mean by "be careful"? What are the critical aspects?
Thanks for your advice.
It's not really supported, no; we'd suggest snapshots instead.
Hi again @warkolm,
Thanks for your answer, but could you be a bit more specific?
I mean, in the second post of this discussion you mentioned this as a possible approach.
We have had success with it, but before we continue using it we would like to know which aspects are important/critical to take care of.
So my question is: why is it not supported? What are the disadvantages of doing it this way?
Snapshots represent an additional time- and resource-consuming task.
Thanks again for your advice.
It's not supported because, if it breaks, we won't put fixes into the software to support that use.
If you snapshot and something goes wrong, then if we can fix it, we will.
Hi Mark,
Can you please elaborate on the option of copying files? We are looking for a very fast way to copy an index (1 shard, read-only, ~5 GB, ~1M documents) from an indexing cluster to a query cluster. The source cluster has 0 replicas; the target cluster has 47 replicas. Using snapshot/restore, it takes ~20 minutes.
How can we take advantage of the fact that the index is fully merged and read-only to take the shortest path?
Any thoughts on the subject would be appreciated!
Baruch.
Shut the node down, copy the data, start it back up and hope it works.
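As a rough sketch of what "copy the data" means in practice, assuming both sides run the same Elasticsearch version (the cluster name, node ordinal, and data path below are illustrative and depend on your install; this is the unsupported path, so test it on throwaway data first):

```
# On the source node: stop Elasticsearch so the segment files are stable
sudo service elasticsearch stop

# Copy the shard data for the index you want to move
# (path layout assumes a default install with cluster name "my_cluster")
rsync -a /var/lib/elasticsearch/my_cluster/nodes/0/indices/my_index/ \
  target-host:/var/lib/elasticsearch/my_cluster/nodes/0/indices/my_index/

# Restart Elasticsearch on both sides and verify the index came up
sudo service elasticsearch start
curl 'http://localhost:9200/_cat/indices?v'
```

Since a read-only, fully merged shard never changes on disk, a file copy like this avoids re-serializing every document, which is where the speed advantage over snapshot/restore would come from.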