We're planning to add a new ingest node to a running ES cluster; this node will be dedicated to data ingestion. How do we do capacity planning for an ingest node, and should it be memory-optimized or CPU-optimized? We need to ingest an index of around 1 TB into the ES cluster (a new index on a weekly basis). Please advise.
@chateesh You have one pipeline, but how many processors does it have? It will also depend on your indexing rate, i.e. how many documents you index per minute.
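To see how heavy your pipeline is, you can count its processors from the JSON that `GET _ingest/pipeline` returns. A minimal sketch below uses a made-up pipeline definition (`weekly-ingest` and its processors are assumptions); substitute the actual response from your cluster.

```python
# Hypothetical response body from GET _ingest/pipeline;
# replace with the real JSON from your cluster.
pipelines = {
    "weekly-ingest": {  # assumed pipeline name
        "processors": [
            {"grok": {"field": "message", "patterns": ["%{COMMONAPACHELOG}"]}},
            {"date": {"field": "timestamp", "formats": ["ISO8601"]}},
            {"remove": {"field": "message"}},
        ]
    }
}

# Count processors per pipeline; more processors generally means
# more CPU work per document on the ingest node.
for name, body in pipelines.items():
    print(f"{name}: {len(body.get('processors', []))} processors")
```

Grok and other regex-heavy processors tend to dominate CPU cost, so the count alone is only a first approximation.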
So my suggestion is to do a POC with one week of data, test the load on the node, and decide on the node configuration based on that.
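Before the POC, a back-of-the-envelope estimate helps set expectations. This sketch assumes the stated 1 TB is written evenly over one week and an average document size of 1 KB (the document size is an assumption; plug in your own measurements).

```python
# Rough sustained-throughput estimate for a 1 TB weekly index.
TB = 1024**4
KB = 1024

index_bytes = 1 * TB          # from the question: ~1 TB per week
week_seconds = 7 * 24 * 3600
avg_doc_bytes = 1 * KB        # assumption; measure your real doc size

mb_per_sec = index_bytes / week_seconds / 1024**2
docs_per_min = index_bytes / avg_doc_bytes / (week_seconds / 60)

print(f"sustained ingest: {mb_per_sec:.2f} MB/s")   # ~1.73 MB/s
print(f"indexing rate:   {docs_per_min:,.0f} docs/min")
```

Evenly spread, 1 TB/week is a modest sustained rate; the real sizing question is your peak rate and how CPU-hungry the pipeline processors are, which is what the POC should measure.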
Please check the Elastic blog, which will give you a better understanding of benchmarking and sizing a cluster.
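For the dedicated node itself, a minimal sketch of the `elasticsearch.yml` on the new node, assuming ES 7.9 or later where `node.roles` is available:

```yaml
# Giving the node only the "ingest" role keeps it out of master
# elections and data storage, so its CPU and heap are spent on
# pipeline processing. On versions before 7.9, use the separate
# node.master / node.data / node.ingest boolean settings instead.
node.roles: [ ingest ]
```

Since ingest pipelines are mostly CPU-bound (grok, scripting, enrichment), dedicated ingest nodes usually lean CPU-optimized rather than memory-optimized, but confirm with your POC numbers.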