How will Rally behave with millions of operations in track.json?

Hello all,

I have a use case where I am generating millions of queries and putting them into track.json as search operations. This will create a huge track.json file. Each Elasticsearch query is itself very large, containing a lot of filters. Will Rally be able to handle this?

Thanks,
Akhil

Generating an extremely large track.json file does not sound like a good idea, and I have a hard time seeing it work well. I would instead look into developing a custom parameter source that can read queries from one or more files, and then use a search runner to execute them. Even though the operations may not get individual names, you can add metadata to each query that allows you to analyse them in detail if you use an Elasticsearch instance as a metrics store.
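Roughly, such a parameter source could look like the following. This is a minimal sketch assuming Rally's class-based custom parameter source API; the names `FileBackedQueriesParamSource`, `file-backed-queries` and `queries_file` are made up for illustration, and it assumes Rally ends the task when the source raises `StopIteration`:

```python
# track.py -- a minimal sketch of a file-backed parameter source.
import json


class FileBackedQueriesParamSource:
    def __init__(self, track, params, **kwargs):
        # "queries_file" would be passed through from the operation in
        # track.json; it points at a JSON Lines file with one query body
        # per line. (Hypothetical parameter name.)
        self._queries_file = params["queries_file"]
        self._index_name = params.get("index", "_all")
        self._cache = params.get("cache", False)
        # Finite source: we are done once the file is exhausted.
        self.infinite = False
        self._file = None

    def partition(self, partition_index, total_partitions):
        # Sketch assumes a single client. With multiple clients, each
        # partition should consume a disjoint subset of the lines.
        return self

    def params(self):
        if self._file is None:
            # Opened lazily and read line by line: Python buffers the
            # reads, so the file is streamed, never fully in memory.
            self._file = open(self._queries_file, mode="rt", encoding="utf-8")
        line = self._file.readline()
        if not line:
            self._file.close()
            # Assumption: Rally treats StopIteration as "source exhausted"
            # and ends the task.
            raise StopIteration()
        return {
            "index": self._index_name,
            "cache": self._cache,
            "body": json.loads(line)
        }


def register(registry):
    registry.register_param_source("file-backed-queries", FileBackedQueriesParamSource)
```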

My queries file will be huge, so I cannot load it into memory and process it there. The other option is to read it from disk line by line in the params() function. Don't you think this I/O operation will create a bottleneck?

You should be able to read it line by line in a custom parameter source.
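In track.json, the search operation would then reference the registered parameter source via the param-source property; any extra keys are passed through to the source. The names below match the hypothetical sketch above, and the file path and index name are placeholders:

```json
{
  "operation": {
    "name": "streamed-searches",
    "operation-type": "search",
    "param-source": "file-backed-queries",
    "queries_file": "queries.jsonl",
    "index": "my-index"
  }
}
```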

It might slow it down, but if it does not fit in memory, what are the options?
