You mention you are running a 3-node cluster. Is the required storage per node or the total across the cluster? When I am testing on a single-node cluster, do I need to multiply the storage by 3, or will it fit as is?
Ignoring replicas, index size is independent of the number of nodes: the figure is the total across the cluster, so a single-node cluster needs the same amount of storage and you don't need to multiply by 3.
Is it fairly easy to reduce the number of events from 1 billion to, say, 250 or 500 million? What do I need to change?
From a Rally perspective this is definitely possible. We recommend solving it via so-called track parameters, which you pass to Rally on the command line; Rally then applies them when loading the track. This track does not currently expose such a parameter, but adding one would definitely be doable, and as an open-source project we're always happy to receive pull requests.
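As a rough sketch of how that could look: track files are Jinja2-templated JSON, and Rally's `--track-params` flag together with Jinja2's `default` filter is the standard mechanism for this. The parameter name `event_count` below is purely illustrative and is not part of the track today:

```
{# Hypothetical excerpt from the track's track.json template.
   "event_count" is an illustrative parameter name, with the current
   1 billion events kept as the default. #}
"schedule": [
  {
    "operation": "bulk-index",
    "iterations": {{ event_count | default(1000000000) }}
  }
]
```

You would then override the default on the command line; Rally substitutes the value when it loads the track (`<track-name>` stands in for the actual track):

```
esrally --track=<track-name> --track-params="event_count:250000000"
```

Anyone not passing the parameter falls back to the `default` in the template, so existing users are unaffected.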