We would like to license X-Pack for our clusters, but the price is a bit out of our reach with our current configuration. We run a whole gob of EC2 m4.xlarge instances. Has anybody adjusted their cluster to run far fewer nodes on much larger instances (8xlarge or bigger) to be able to afford X-Pack?
While we really want to support Elastic, it seems a strange way to license a product when the per-node pricing ends up dictating the deployment topology.
It's not a sizing issue at all. We have already worked out the total capacity we need to handle the load. The question is how that capacity is composed: a few big nodes or lots of little nodes. We would like the flexibility to choose that ourselves, but the licensing scheme forces our hand towards fewer nodes.
So I was asking if others have a similar situation and how they chose to resolve it - in the context of licensing not sizing.
From a performance perspective, I generally see fewer, larger nodes performing better than a large number of smaller nodes, as there is less overhead in communication between nodes in the cluster. Having very few, very large nodes can however cause issues when it comes to recovery, as the loss of a single node means that a significant portion of your data will need to be recovered. Depending on the use case, the right balance can differ.
How many nodes do you have in your cluster? What is your use case?