You can go for a split-brain architecture where there will be 2 masters. You can divide your heavy update load across the two masters.
This does not make any sense to me as master nodes are not involved in the request flow. If you were to index into a cluster suffering from split-brain you would also end up losing data.
@Christian_Dahlqvist thanks for the correction. I'll edit my gist. But one doubt: if I have two master nodes and I am receiving 500 write requests at a time, can I send 250 requests to one master and the other 250 to the other master? Would that be good?
What matters is the number of data nodes, not whether they are master-eligible or not. The work is done at the shard level, so if you already have enough cores available to process all the shards in parallel, I am not sure adding a second node on the same hardware will help much.
If performance is limited by resources, scaling out to more nodes can greatly increase write performance, as more hardware resources can be used. In your case, however, you are not saturating resources, so I am not sure how much scaling out would actually help.
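For what it's worth, you usually don't need to split the load manually in your application at all: the official clients can round-robin requests across whatever nodes you list. Here is a minimal sketch using the Python Elasticsearch client, assuming two hypothetical nodes `node1` and `node2` and an index called `my-index` (names are placeholders, not from this thread):

```python
# Minimal sketch: distribute bulk indexing across several nodes.
# Assumes two hypothetical nodes "node1"/"node2" and an index "my-index".
from elasticsearch import Elasticsearch, helpers

# The client round-robins requests across the listed nodes, so the
# application does not have to divide 500 requests by hand.
es = Elasticsearch(["http://node1:9200", "http://node2:9200"])

def generate_actions(docs):
    # Wrap each document in a bulk "index" action.
    for doc in docs:
        yield {"_index": "my-index", "_source": doc}

docs = [{"field": f"value-{i}"} for i in range(500)]

# helpers.bulk batches the documents into _bulk requests; the actual
# indexing work is done in parallel by the shards on the data nodes.
helpers.bulk(es, generate_actions(docs))
```

The key point remains the one above: it is the data nodes (and the shards on them) that do the indexing work, so which node receives the request matters far less than how many data nodes and cores are available.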