Hi,
Currently I have an ES cluster with data on version 1.3.4 and I want to migrate it to a 1.5.x cluster.
How do I proceed?
Do you want to upgrade the existing cluster in place or migrate the data to a new 1.5-based cluster?
(You know that ES 1.7 has been out for quite a while, right?)
Yes, I'd like to set up the new version as a new cluster and migrate the existing data into it.
You should really move to 1.7.2 as Magnus mentioned.
Otherwise you have a few options;
Thank you for your answers.
Just one more question: does "migrate all data" mean moving the old cluster's contents into the directory defined by the path.data parameter?
For example: if I set "path.data" to "/data/es_data" in my old elasticsearch.yml, I could keep that value in my new config file, then add the disable-allocation option. I would simply copy all the files from {cluster_name}/nodes/* into the same directory as before, but emptied.
Next, I add my nodes by starting ES. The data reindexation should be automatic.
Is that correct, or should I do this differently?
You could do that, but it won't reindex anything.
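For reference, the copy-based approach described above would look roughly like this. This is a sketch, not a definitive procedure: the hostnames, paths, and cluster name are placeholders, and the data files keep their old Lucene format after the copy, which is why nothing gets reindexed:

```shell
# Disable shard allocation so shards stay put while nodes are stopped
# (cluster settings API on the old 1.3.4 cluster; hostname is a placeholder)
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# Stop the old node, then copy its data directory into the new node's
# (emptied) path.data location; "my_cluster" is a placeholder cluster name
cp -a /data/es_data/my_cluster/nodes/* /new/data/es_data/my_cluster/nodes/

# Start the new 1.5.x node, then re-enable allocation on the new cluster
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```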
All right.
What do you suggest instead? Could you also give me some command examples, please?
Thank you
This is how I reindex, with Logstash - https://gist.github.com/markwalkom/8a7201e3f6ea4354ae06
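The gist boils down to an Elasticsearch input feeding an Elasticsearch output. A minimal sketch of such a pipeline is shown below; the hostnames and index name are placeholders, and option names vary between Logstash versions, so check the gist and your version's docs:

```conf
input {
  elasticsearch {
    host  => "old-cluster-host"   # 1.3.4 source cluster (placeholder)
    index => "myindex"
    scan  => true                 # scroll through all existing documents
  }
}
output {
  elasticsearch {
    host     => "new-cluster-host"  # target cluster (placeholder)
    protocol => "http"
    index    => "myindex"
  }
}
```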
Thanks,
How do I make sure all the data has been migrated? Is there a command to check that?
You can look at _cat/count, or just _count.
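A quick way to compare the two clusters is to run those APIs against each and check that the totals match (hostnames and the index name are placeholders, default port assumed):

```shell
# Total document count, human-readable (cat API)
curl 'http://old-cluster-host:9200/_cat/count?v'
curl 'http://new-cluster-host:9200/_cat/count?v'

# Or the JSON count API, optionally scoped to one index
curl 'http://new-cluster-host:9200/myindex/_count'
```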
I meant that it seems Logstash needs to run for a long time before everything is migrated.
So I was wondering whether the way you launch Logstash matters. Do you run it in the background?
Here's my command so far:
/opt/logstash/bin/logstash -f config_file.conf -l /var/log/logstash/reindex_es.log --debug --verbose
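If the goal is to keep it running in the background after you log out, one common pattern is nohup; a sketch using the same paths as the command above (the output redirection is an assumption, adjust as needed):

```shell
# Run Logstash detached from the terminal; stdout/stderr discarded
# since -l already writes to a log file
nohup /opt/logstash/bin/logstash -f config_file.conf \
  -l /var/log/logstash/reindex_es.log >/dev/null 2>&1 &
echo $!   # print the PID, in case you need to stop it later
```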
Unfortunately, the number of input documents only increases when I launch it manually:
{:timestamp=>"2015-10-13T11:47:58.814000+0200", :message=>"Pipeline shutdown complete.", :level=>:info, :file=>"logstash/pipeline.rb", :line=>"89"}
{:timestamp=>"2015-10-13T11:50:30.603000+0200", :message=>"Pipeline shutdown complete.", :level=>:info, :file=>"logstash/pipeline.rb", :line=>"89"}
{:timestamp=>"2015-10-13T11:53:25.538000+0200", :message=>"Pipeline shutdown complete.", :level=>:info, :file=>"logstash/pipeline.rb", :line=>"89"}
{:timestamp=>"2015-10-13T12:03:51.620000+0200", :message=>"Pipeline shutdown complete.", :level=>:info, :file=>"logstash/pipeline.rb", :line=>"89"}
{:timestamp=>"2015-10-13T12:05:18.470000+0200", :message=>"Pipeline shutdown complete.", :level=>:info}
Ahh right, well it should just stop once it has processed the last document.
That's my problem: when it stops, it doesn't seem that all the data has been migrated; each time I run it, new data comes in...
Is there a problem with the way I proceed?
Thanks for your time.
By the way, does the method change if we want to move from ES 1.3.4 to 1.7.3 (so we could use Kibana 4.1.2)?