I would like to know whether it is possible for a load balancer to send requests to elastic node 1, which is a primary node, and elastic node 2, which is a disaster recovery node, while both share the same set of documents, stored in a single folder on shared storage?
If this is not possible, why?
In such a situation, can I have a third node, a replica for queries that are 3 months old or older?
You can use, e.g., /mount/elasticsearch/data and point both nodes to it. Elasticsearch will then automatically create a subdirectory named after the cluster, and each node will create its own directory under that.
You will end up with something like /mount/elasticsearch/data/clustername/nodes/0 and /mount/elasticsearch/data/clustername/nodes/1.
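To illustrate, a minimal elasticsearch.yml sketch for each node, assuming both point at the same shared mount (cluster and node names here are illustrative, matching the example paths above):

```yaml
# elasticsearch.yml -- same path.data on both nodes;
# ES separates them under <path.data>/<cluster.name>/nodes/<ordinal>
cluster.name: clustername
node.name: node-1          # use a distinct name on the second node, e.g. node-2
path.data: /mount/elasticsearch/data
```

With this configuration, the first node to start claims nodes/0 and the second claims nodes/1, so the two nodes never write to the same directory.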
But this does mean that I will store the same data twice, in different folders. I mean, can they both use /mount/elasticsearch/data/clustername/nodes/0, let's say?