Splitting out Master / Data

On Sun, Dec 18, 2011 at 3:51 PM, Garth ghershfield.bah@gmail.com wrote:

Is there any good reason to split out a Master node and a Data node if
both nodes are to be on the same server?

On Dec 20, 10:13 am, Shay Banon kim...@gmail.com wrote:

No, you probably don't really want to run several instances on the same
server.
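
For reference, the master/data split itself is just a pair of settings in
each node's elasticsearch.yml. A minimal sketch (these two flags are the
0.x/1.x-era names; check the docs for your version):

    # elasticsearch.yml for a dedicated master-eligible node
    node.master: true
    node.data: false

    # elasticsearch.yml for a dedicated data node
    node.master: false
    node.data: true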

On Thu, Dec 22, 2011 at 4:46 PM, Garth ghershfield.bah@gmail.com wrote:

I should have specified the following: 2 x 8-core servers with 192 GB of
RAM each. Yes, I know about the GC pause of death! Would running several
ES nodes (master+data, not split out) be viable, or is it a bad idea no
matter what? Maybe 2 or 3 ES nodes with 50 GB each per server. We have
Fusion-io cards, so I'm not worried about disk I/O.
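
A sketch of what two such co-located instances might look like, with
hypothetical names and paths; each instance needs its own config, data
path, and ports, and the 50 GB heap would be set per process via the
JVM's -Xms/-Xmx flags:

    # instance 1: elasticsearch.yml
    cluster.name: prod
    node.name: server1-node1
    path.data: /data/node1
    http.port: 9200
    transport.tcp.port: 9300

    # instance 2: elasticsearch.yml
    cluster.name: prod
    node.name: server1-node2
    path.data: /data/node2
    http.port: 9201
    transport.tcp.port: 9301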

Shay Banon replied:

Yes, running multiple instances might make sense with so much memory,
though I work really hard to be nice to the GC, so it would be interesting
to see how it behaves with a large heap. Note that if you want to run
several instances on the same box, a feature needs to be added to make
sure that a shard and its replica won't be allocated on the same machine
(not node), which is quite simple to add. Can you open a feature request?
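
Later Elasticsearch releases added exactly this safeguard as a setting
(off by default); a minimal sketch for anyone running several nodes per
machine:

    # elasticsearch.yml on every node: never allocate two copies of the
    # same shard to nodes that report the same host name / host address
    cluster.routing.allocation.same_shard.host: true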

Garth,

I would appreciate any info you can provide on ES performance with
Fusion-io: how it compares to normal hard drives, read/write rates, etc.

Regards,

Berkay Mollamustafaoglu
Ph: +1 (571) 766-6292
mberkay on yahoo, google and skype

On Sat, Dec 24, 2011 at 6:47 PM, phobos182 phobos182@gmail.com wrote:

You have other issues with failure scenarios. Let's say you run 3-5
Elasticsearch nodes on one machine. If that machine crashes, you will have
a lot of replication traffic to migrate off of the server to meet the
replication agreement. Other issues arise from replica placement, as Shay
has stated. If you have 2 of 3 copies of a shard placed on the same host,
or even worse 3 of 3, then a crash of that system will cause a lot of
availability / recovery issues.
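
The placement concern can also be expressed with shard allocation
awareness in later releases, tagging each node with the physical machine
it runs on; a hedged sketch (the host_id attribute name and value are
made up, and node.attr.* is the newer spelling of custom attributes):

    # per node: tag the instance with the machine hosting it
    node.attr.host_id: server1

    # on every node: spread copies of each shard across host_id values
    cluster.routing.allocation.awareness.attributes: host_id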

Shay Banon replied:

Assuming that we allow defining that a shard and its replica will not be
allocated on the same "machine", there isn't really a difference in terms
of moving data around. Running 3 nodes on each machine is no different
from running 1 node on each machine in terms of the number of shards
allocated on that machine, so when a machine fails, the same amount of
shards / data will need to be moved. Whether a machine holds 30 shards on
one node or 10 shards on each of 3 nodes, losing it forces the cluster to
re-replicate the same 30 shards.


On Dec 25, 7:27 pm, phobos182 phobos182@gmail.com wrote:

Usually I see this type of setup where an individual runs one "gigantic"
server with, say, 12 x 1 TB or 12 x 2 TB hard disks. If that system fails,
then you could have to replicate multiple terabytes of information: with
12 x 2 TB disks even half full, that is on the order of 12 TB to copy,
which at a sustained 250 MB/s takes roughly half a day.

I'm not saying that every user would do this. Just a word of caution about
building gigantic servers when you could be better served by more, smaller
servers.
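
If a big server does fail, the surviving nodes can at least be kept from
saturating the network while they re-replicate; a sketch using the
recovery-throttling settings from later releases (names and defaults vary
by version):

    # cap per-node recovery bandwidth and concurrent shard recoveries
    indices.recovery.max_bytes_per_sec: 200mb
    cluster.routing.allocation.node_concurrent_recoveries: 2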

Garth replied:

You guys bring up some good points with respect to recovery. We might go
with VMs backed by the Fusion-io cards. If/when I have benchmarks for
Fusion-io performance, I will post that information.