The AWS d2.2xlarge has 12 TB of volume storage. Will a single Elasticsearch node be able to handle that much data?
Could anyone please help me get this sorted out?
As outlined in this webinar, that often depends on your data, your use case, and how efficiently you can manage heap usage.
We are ingesting Apache logs and grokking them, and I am choosing AWS EC2; the d2.2xlarge has 12 TB of volume storage. Will this work fine?
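For concreteness, here is a minimal sketch of the kind of pipeline being described: parse an Apache combined log line and index the resulting document. The regex only approximates the COMBINEDAPACHELOG grok pattern, and the index name (`apache-logs`) and cluster address (`localhost:9200`) are assumptions, not values from this thread.

```python
import re
import json
import requests

# Rough approximation of the COMBINEDAPACHELOG grok pattern (illustrative only)
APACHE_COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ (?P<auth>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+)'
    r'(?: "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)")?'
)

line = ('203.0.113.7 - - [10/Oct/2020:13:55:36 +0000] '
        '"GET /index.html HTTP/1.1" 200 2326 '
        '"http://example.com/" "Mozilla/5.0"')

# Turn the log line into a structured document
doc = APACHE_COMBINED.match(line).groupdict()

# Index it into a hypothetical "apache-logs" index on a local single node
resp = requests.post(
    "http://localhost:9200/apache-logs/_doc",
    headers={"Content-Type": "application/json"},
    data=json.dumps(doc),
)
print(resp.json())
```

In practice this parsing would usually happen in Logstash or an ingest pipeline rather than application code; the point is that each log line becomes a small document with a handful of short fields, which is what drives the sizing discussion below.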
As I said, that will depend on how you manage mappings and heap usage, and what your requirements around query latency are. You need to test. Getting to that level of storage per node for small documents like Apache logs will require quite a lot of optimisation, so I would not be surprised if you find in your tests that the practical limit is lower for your use case.
If you are planning on running a single node, also be aware that the data could be lost if there are issues with the host, as the d2 instance storage is ephemeral.
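As one example of the kind of mapping optimisation mentioned above, a composable index template (Elasticsearch 7.8+) can pin field types and enable `best_compression` to shrink the on-disk footprint of small log documents. The field names mirror the parsing sketch earlier in the thread; the specific settings are an illustrative starting point under assumed defaults, not a tested sizing recipe.

```python
import json
import requests

# Illustrative template for the hypothetical "apache-logs*" indices
template = {
    "index_patterns": ["apache-logs*"],
    "template": {
        "settings": {
            "index.codec": "best_compression",  # trades some CPU for smaller segments
            "number_of_shards": 1,
        },
        "mappings": {
            "dynamic": "strict",  # reject unknown fields to avoid mapping explosion
            "properties": {
                "clientip":    {"type": "ip"},
                "auth":        {"type": "keyword"},
                "timestamp":   {"type": "date",
                                "format": "dd/MMM/yyyy:HH:mm:ss Z"},
                "verb":        {"type": "keyword"},
                "request":     {"type": "keyword"},
                "httpversion": {"type": "keyword"},
                "response":    {"type": "short"},
                "bytes":       {"type": "long"},
                "referrer":    {"type": "keyword"},
                "agent":       {"type": "keyword"},
            },
        },
    },
}

resp = requests.put(
    "http://localhost:9200/_index_template/apache-logs",
    headers={"Content-Type": "application/json"},
    data=json.dumps(template),
)
print(resp.json())
```

Using `keyword` instead of analysed `text` for fields you only filter or aggregate on, and narrow numeric types like `short` for status codes, reduces both disk and heap pressure; whether that gets you anywhere near 12 TB on one node is exactly what the testing above should establish.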