I have a use case wherein I am generating millions of queries and putting them into track.json as search operations. This will create a huge track.json file. Each Elasticsearch query is itself very large, containing a lot of filters. Will Rally be able to handle this?
Generating an extremely large track.json file does not sound like a good idea, and I have a hard time seeing it working well. I would probably instead look into developing a custom parameter source that can read queries from one or more files and then use a search runner to execute them. Even though operations may not get individual names, you can add metadata to each query that allows you to analyse them in detail if you use an Elasticsearch instance as a metrics store.
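To make the suggestion concrete, here is a minimal sketch of such a custom parameter source. It streams newline-delimited JSON queries from a file instead of embedding them in track.json. The class and registration follow the pattern from Rally's custom parameter source documentation; the `queries_file` parameter name and the `streaming-search-source` name are my own assumptions, not Rally built-ins.

```python
import json


class StreamingSearchParamSource:
    """Yields search params by streaming one JSON query per line from a file.

    Only one line is held in memory at a time, so the queries file can be
    arbitrarily large. The "queries_file" param name is an assumption; you
    would set it on the operation in track.json.
    """

    def __init__(self, track, params, **kwargs):
        self._file_path = params["queries_file"]
        self._index = params.get("index", "_all")
        self._queries = self._query_iter()

    def _query_iter(self):
        # Loop over the file forever so the source never runs dry; file
        # iteration in Python is buffered, one line at a time.
        while True:
            with open(self._file_path) as f:
                for line in f:
                    yield json.loads(line)

    def partition(self, partition_index, total_partitions):
        # Each client can share this source; for strict per-client slices
        # you could skip lines based on partition_index instead.
        return self

    def params(self):
        # Shape expected by Rally's built-in search runner.
        return {"index": self._index, "body": next(self._queries)}


def register(registry):
    # Rally calls register(registry) when it loads track.py.
    registry.register_param_source("streaming-search-source",
                                   StreamingSearchParamSource)
```

You would then reference `"param-source": "streaming-search-source"` on the search operation in track.json, optionally attaching per-query metadata in `params()` so the metrics store lets you analyse individual queries.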
My queries file will be huge, so I cannot load it into memory and process it there. The other way is to read it from disk line by line in the params function. Don't you think this IO operation will create a bottleneck?
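Buffered, sequential line-by-line reads are usually far cheaper than executing the queries themselves, but if disk IO does turn out to matter, one way to take it off the benchmark path is to parse queries on a background thread into a bounded queue, so reading and query execution overlap. A sketch under that assumption (the function name and queue size are illustrative, not part of Rally):

```python
import json
import queue
import threading


def start_query_feeder(path, maxsize=1000):
    """Stream newline-delimited JSON queries from `path` on a background
    thread into a bounded queue. The bound keeps memory flat; the consumer
    calls q.get() and sees None when the file is exhausted."""
    q = queue.Queue(maxsize=maxsize)

    def feeder():
        with open(path) as f:
            for line in f:
                q.put(json.loads(line))
        q.put(None)  # sentinel: end of file

    threading.Thread(target=feeder, daemon=True).start()
    return q
```

A parameter source's `params()` method could then simply `q.get()` the next pre-parsed query instead of touching the disk or the JSON parser itself.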