Other than it just not working for me at this point, I'm concerned because if I have to edit the elasticsearch.yml file, that means I must bring that instance down.
If I end up with 1, 2, 3, 4, 10, 20, 30, or 40 nodes, how do I avoid bringing them all down for any change? I'm sure there are different practices, such as a dedicated master that, in the 40-node case, you would keep up all the time while upgrading each of the other nodes individually.
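For reference, many cluster-level settings don't require editing elasticsearch.yml at all; they can be changed at runtime through the cluster settings API, so a restart is only needed for node-level settings such as network or discovery config. A minimal sketch, assuming a node listening on localhost:9200 and the Python requests library (the particular setting shown is just an example of a dynamic one):

```python
import requests

ES = "http://localhost:9200"  # assumed address of one node in the cluster

# Apply a dynamic setting cluster-wide without restarting any node.
# "transient" settings last until a full cluster restart; "persistent"
# settings survive one.
resp = requests.put(
    ES + "/_cluster/settings",
    json={"transient": {"indices.recovery.max_bytes_per_sec": "50mb"}},
)
print(resp.json())
```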
As long as your nodes can talk to each other, you should be OK. However, if you are thinking about running cross-DC clusters (reading between the lines based on your hostname convention), that is not recommended.
The rest of your question depends on how you expect to grow, but until you get to more than one node you are probably going to have to restart your current one.
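A quick way to check whether a newly added node has actually joined the cluster (rather than starting its own one-node cluster) is to ask any node for the cluster health and node list; both nodes also need the same cluster.name and must be able to reach each other. A rough sketch, assuming the existing node is on localhost:9200 and the Python requests library:

```python
import requests

ES = "http://localhost:9200"  # assumed address of the existing node

# Cluster health reports the cluster name, status, and how many nodes have joined.
health = requests.get(ES + "/_cluster/health").json()
print(health["cluster_name"], health["status"], health["number_of_nodes"])

# _cat/nodes lists every node the cluster currently knows about, one per line.
print(requests.get(ES + "/_cat/nodes?v").text)
```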
Are you saying it is not recommended to host across data centers? How do you provide high availability if one data center goes down?
And, reading between the lines on the more-than-one-node issue, I'm guessing you are saying that once I have multiple nodes I can bring them up and down as needed.
I'm really having a hard time understanding plugins in this scenario. I don't want every server in the cluster to be executing, say, the JDBC river, do I? I only want one to be running it, and then in the event that one goes down, another would pick it up. As far as I know there isn't any alerting system for that.
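On the "bring them up and down as needed" point: with more than one node, the usual pattern is a rolling restart, pausing shard allocation while each node is taken down in turn so the cluster doesn't immediately start rebuilding shards on the remaining nodes. A hedged sketch of that sequence, assuming localhost:9200 and the requests library (the actual node restart happens outside this script):

```python
import requests

ES = "http://localhost:9200"  # assumed: any reachable node in the cluster

def set_allocation(enabled):
    """Pause or resume shard allocation so a node can be restarted without
    the cluster rebuilding its shards elsewhere in the meantime."""
    value = "all" if enabled else "none"
    requests.put(
        ES + "/_cluster/settings",
        json={"transient": {"cluster.routing.allocation.enable": value}},
    )

def wait_for_green():
    """Block until all shards are allocated again."""
    requests.get(ES + "/_cluster/health?wait_for_status=green&timeout=30m")

# Repeat per node: pause allocation, restart that node (outside this script),
# resume allocation, wait for the cluster to recover, then move to the next node.
set_allocation(False)
# ... stop the node, apply the config change, start it again ...
set_allocation(True)
wait_for_green()
```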
On Monday, February 16, 2015 at 4:03:11 AM UTC-5, Mark Walkom wrote:
(I couldn't find your other posts on scaling?)
On 14 February 2015 at 09:44, GWired <garrett...@gmail.com> wrote:
Is there a how-to guide, along with best practices, for scaling out?
I have a single-instance Elasticsearch server and I would like to scale out.
From other posts in this group I have tried, with no success, to add an additional server to the cluster.
Yes, that is what I am saying. ES is latency sensitive, and betting against that can cause problems.
You are better off using snapshot and restore, or using your indexing method to send data to both clusters.
As for plugins, some require all nodes to run them and some don't. Ultimately this comes down to what the plugin does, and you have to work around/with it.
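On the snapshot-and-restore suggestion, a rough sketch of what that looks like against the snapshot API; the repository name, snapshot name, and filesystem path below are made up for illustration, and the location must be reachable by every node (on newer versions it may also need to be whitelisted in the node config):

```python
import requests

ES = "http://localhost:9200"       # assumed: a node in the source cluster
REPO = "my_backup"                 # hypothetical repository name
SNAPSHOT = "snapshot_1"            # hypothetical snapshot name

# 1) Register a shared-filesystem repository. Other repository types
#    (e.g. S3) come from plugins.
requests.put(
    ES + "/_snapshot/" + REPO,
    json={"type": "fs", "settings": {"location": "/mnt/es_backups"}},
)

# 2) Take a snapshot of all indices and wait for it to finish.
requests.put(ES + "/_snapshot/" + REPO + "/" + SNAPSHOT + "?wait_for_completion=true")

# 3) On the destination cluster, register the same repository and restore:
#    requests.post(OTHER_ES + "/_snapshot/" + REPO + "/" + SNAPSHOT + "/_restore")
```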