This is really impossible to answer, as so little information has been supplied.
Technically you can certainly set up a 2-node cluster with enough space to handle your daily volume, but I presume there will be tools querying the data, the data needs to be ingested, and it may need to be transformed or enriched somewhere along the way. How many replicas of the data do you expect to need? How much CPU power will be needed to handle the queries? Do you even know the main queries yet?
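Just to illustrate why the replica count (and how long you keep the data) dominates the sizing, here is a rough back-of-envelope sketch. Every number in it is an assumption for illustration, not a recommendation:

```python
# Back-of-envelope storage estimate -- all figures below are assumptions,
# not recommendations for any particular cluster.

raw_per_day_gb = 100      # stated daily ingest volume
expansion_factor = 1.1    # assumed on-disk size vs. raw data after indexing; varies a lot with the mapping
replicas = 1              # assumed one replica copy per primary shard
retention_days = 30       # assumed retention period

primary_gb = raw_per_day_gb * expansion_factor * retention_days
total_gb = primary_gb * (1 + replicas)

print(f"Primary data:            ~{primary_gb:,.0f} GB")
print(f"With {replicas} replica(s):       ~{total_gb:,.0f} GB")
print(f"Per node on 2 nodes:     ~{total_gb / 2:,.0f} GB (plus headroom for merges, OS, etc.)")
```

The on-disk expansion factor in particular can swing a lot depending on the mapping and the nature of the data, which is exactly why the questions below matter.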
And it seems you are asking before having tried anything yet?
Your 100 GB of daily data: do you understand its structure, both before and after it is indexed into Elasticsearch? Do you have a mapping? Is it something generic like "logs", or something more exotic?
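For what it's worth, if the data really is generic log-like documents, an explicit mapping can be as small as the sketch below. The field names here are purely hypothetical, just to show what "having a mapping" means in practice:

```python
# A minimal sketch of an explicit mapping for generic log-like documents.
# Field names are hypothetical placeholders -- replace them with whatever
# your real documents actually contain once you know their structure.
import json

log_index_body = {
    "mappings": {
        "properties": {
            "@timestamp": {"type": "date"},     # event time
            "host":       {"type": "keyword"},  # exact-match filtering/aggregation
            "level":      {"type": "keyword"},  # e.g. INFO, WARN, ERROR
            "message":    {"type": "text"},     # full-text searchable payload
        }
    }
}

# This body would be sent when creating the index, e.g. via
# PUT /my-logs-000001 with the JSON below, or through a client library.
print(json.dumps(log_index_body, indent=2))
```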