Design practices for hosting multiple clusters/on-demand cluster creation?

While ES is still in a pre-deployment stage at my job, there is growing
interest in it. For various reasons, a monster cluster holding everyone's
stuff is simply not possible. Individual projects require complete control
over their data, and the culture and security requirements here are such
that something like prefixing all of project 1's index names with
PROJECT_1_ will not fly.
We currently have a fairly beefy Hadoop cluster hosting our content, along
with a separate head node acting as the master.
In this situation, is it simply a matter of starting new Elasticsearch
processes on each node, each pointed at a different configuration profile,
with specific ports tied to specific projects/clusters?
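
For concreteness, here is roughly what I'm picturing, with one config
profile per project on each node. All cluster names, ports, and paths
below are placeholders, not anything from our real setup:

    # /etc/elasticsearch/project1/elasticsearch.yml (illustrative)
    cluster.name: project1
    node.name: node1-project1
    http.port: 9201
    transport.tcp.port: 9301
    path.data: /data/elasticsearch/project1
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["node1:9301", "node2:9301"]

    # /etc/elasticsearch/project2/elasticsearch.yml (illustrative)
    cluster.name: project2
    node.name: node1-project2
    http.port: 9202
    transport.tcp.port: 9302
    path.data: /data/elasticsearch/project2
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["node1:9302", "node2:9302"]

    # one process per profile on each node, started with something like:
    bin/elasticsearch -Des.config=/etc/elasticsearch/project1/elasticsearch.yml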

Basically, is there an established way to build on-demand clusters, given a
set of resources? We'll layer something in front of it all to handle access
control, etc.
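
To give a sense of what we mean by a layer in front, a very rough sketch,
where the proxy choice, ports, and auth mechanism are all placeholders
rather than anything we have settled on:

    # nginx reverse proxy routing each project's endpoint to its cluster
    server {
        listen 8080;

        location /project1/ {
            auth_basic "project1";
            auth_basic_user_file /etc/nginx/project1.htpasswd;
            proxy_pass http://localhost:9201/;
        }

        location /project2/ {
            auth_basic "project2";
            auth_basic_user_file /etc/nginx/project2.htpasswd;
            proxy_pass http://localhost:9202/;
        }
    }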

Thanks!
-Josh

You could take a look at the Chef cookbook:

http://www.elasticsearch.org/tutorials/deploying-elasticsearch-with-chef-solo/
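
As a rough sketch of how that could map to per-project clusters, each
project could get its own JSON attributes file passed to chef-solo, with a
distinct cluster name. The attribute names below are only illustrative;
the tutorial shows the exact ones the cookbook expects:

    node-project1.json (illustrative attribute names):

    {
      "run_list": [ "recipe[elasticsearch]" ],
      "elasticsearch": {
        "cluster_name": "project1",
        "allocated_memory": "4g"
      }
    }

    # then, on each target machine:
    chef-solo -c solo.rb -j node-project1.json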

Does it help?

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
