So, I'm setting up our first small cluster (4 nodes: 1 client node and 3 data/master nodes) that we'll use to centralize the logs from our clusters of Squid proxies, and I have a quick question about sizing: does the client node need as much memory as the data/master nodes?
From what I understand of the docs it doesn't, but I just want to make sure.
Thanks. I'll go with half the memory of the other nodes, then, since the queries won't be very complex, at least in the beginning. And since all the nodes are virtual machines, reconfiguring them later only takes a service restart anyway.
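For anyone finding this later, a dedicated client (coordinating-only) node is typically configured by disabling the master and data roles in `elasticsearch.yml`. A minimal sketch, assuming a classic `node.master`/`node.data` style config; the cluster and node names are just placeholders:

```yaml
# elasticsearch.yml on the client node
# (hypothetical names, adjust to your environment)
cluster.name: squid-logs
node.name: client-01

# Not master-eligible and holds no data: the node only
# coordinates requests (routing queries, merging results)
node.master: false
node.data: false
```

The "half the memory" decision then translates into a smaller JVM heap on this node than on the data/master nodes (for example via `ES_HEAP_SIZE` on older releases), since a coordinating-only node mostly needs heap for aggregating search responses rather than for indexing or field data.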