While ES is still in a pre-deployment stage at my job, there is growing
interest in it. For various reasons, a monster cluster holding everyone's
stuff is simply not possible. Individual projects require complete control
over their data, and the culture and security requirements here are such
that a convention like prefixing all of project 1's indexes with
PROJECT_1_ will not fly.
We currently have a fairly beefy Hadoop cluster hosting our content, along
with a separate head node acting as the master.
In this situation, is it simply a matter of starting a separate process on
each node for each project, pointed at its own configuration profile, and
tying specific ports to specific projects/clusters?
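If that is the approach, a minimal per-project elasticsearch.yml might look like the sketch below. The cluster name, node name, ports, and paths are illustrative assumptions, and the discovery settings assume the 1.x-era zen discovery defaults (multicast on unless disabled):

```yaml
# Sketch of one project's config; names, ports, and paths are assumptions.
cluster.name: project_1            # keeps these nodes out of other projects' clusters
node.name: node01-project_1
http.port: 9201                    # project-specific HTTP port
transport.tcp.port: 9301           # project-specific inter-node port
path.data: /data/es/project_1      # project-owned data directory
# With multicast off, nodes only join the peers listed explicitly:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node01:9301", "node02:9301"]
```

A distinct cluster.name alone keeps nodes from joining the wrong cluster, but pinning ports and disabling multicast makes the isolation explicit when several clusters share the same hosts.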
Basically, is there an established way to build on-demand clusters, given a
set of resources? We'll layer something in front of it to deal with access
control/etc.
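For the on-demand part, one simple mechanism is a script that stamps out a config directory per project, so a new isolated cluster is just a generated config plus a process launch. This is a sketch under assumed paths and an assumed port scheme, not an established convention:

```shell
#!/bin/sh
# Generate a per-project Elasticsearch config dir so each project gets its
# own isolated cluster on the shared nodes. The base directory, port
# numbering, and project names below are illustrative assumptions.
gen_config() {
    project="$1"
    http_port="$2"
    dir="/tmp/es-demo/$project/config"
    mkdir -p "$dir"
    # Write a minimal config: distinct cluster name and distinct ports.
    cat > "$dir/elasticsearch.yml" <<EOF
cluster.name: $project
http.port: $http_port
transport.tcp.port: $((http_port + 100))
EOF
}

# Two hypothetical projects, each mapped to its own port.
gen_config project_1 9201
gen_config project_2 9202
```

Each node would then run one process per project pointed at the matching config directory, with something in front handling access control as you describe.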
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 8 Jan 2014, at 02:01, Josh Harrison hijakk@gmail.com wrote: