Does anyone know of any concerns with building a cluster whose data nodes have different disk sizes?
e.g.
5 data nodes with 200GB disks,
5 data nodes with 1TB disks, and
5 data nodes with 10TB disks.
Will this cause any issues as the data fills up the smaller nodes?
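For context on what I expect to happen: as I understand it, Elasticsearch stops allocating shards to a node once its disk usage crosses the allocation watermarks, which by default are percentage-based, so each node is judged against its own disk size. A sketch of the relevant cluster settings (the values shown are the documented defaults, not something I have changed):

```json
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
```

My reading is that the 200GB nodes would hit the low watermark at roughly 170GB used and stop receiving new shards, while the 10TB nodes keep accepting them, but I'd like confirmation on whether this skew causes problems in practice.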