How to customize the summary report?

Now I have a pressure-measurement scenario with a set of tests, and each test (with a different variable) should be run once to completion, one after another. So the trial-timestamp of each test is different. When I analyse the summary report in Kibana, I want to put all tests together at the same time point.

Can I add a key-value pair to the summary report, like round_time: 20180927T070012Z, or is there any other way to solve this?

You can use the --user-tag option to inject as many comma-separated key-value pairs as you like.

e.g. if you specify --user-tag="round_time:2222" this will be available in every doc as meta.round_time.
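To illustrate the format (a hedged sketch only, not Rally's actual parser): the option takes comma-separated key:value pairs, which end up under meta in each metrics document. A minimal parser for that shape might look like:

```python
def parse_user_tags(raw):
    # Split a "k1:v1,k2:v2" string into a dict, mirroring the
    # --user-tag format; illustrative sketch, not Rally's own code.
    tags = {}
    for pair in raw.split(","):
        key, _, value = pair.partition(":")
        tags[key.strip()] = value.strip()
    return tags

print(parse_user_tags("round_time:20180927T070012Z,env:staging"))
# {'round_time': '20180927T070012Z', 'env': 'staging'}
```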

If I have not misunderstood, meta.round_time will appear in the index rally-metrics-*?
But in Kibana, the index I need to analyse is rally-results-*.

The summary results are not very flexible as far as I know, so I usually analyse the raw metrics in Kibana. This is where the user tags are useful. If you want to compare different sections or runs over time, measured from the start, you should look at the relative-time field in the Rally metrics.
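To sketch the idea (illustrative only; the document shapes here are assumptions): if each sample carries an absolute timestamp, subtracting the run's own start timestamp gives the elapsed time that the relative-time field captures, so samples from different runs line up:

```python
# Align samples from two runs on elapsed time since each run's start,
# mirroring what Rally's relative-time field provides. The data layout
# below is a made-up stand-in, not the real metrics document shape.
runs = {
    "run-a": {"start": 1000.0, "samples": [(1001.0, 35), (1002.0, 36)]},
    "run-b": {"start": 2000.0, "samples": [(2001.0, 40), (2002.0, 41)]},
}
aligned = {
    name: [(ts - run["start"], value) for ts, value in run["samples"]]
    for name, run in runs.items()
}
print(aligned["run-a"][0][0], aligned["run-b"][0][0])  # 1.0 1.0
```

Both runs' first samples now share relative time 1.0, so they can be plotted on the same x-axis.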

:ok_hand: Thanks.

I hope that we can customize some values in the summary report in the future.


I found that the index rally-metrics-* does not include the metric error_rate, but in my test scenario I need to focus on this metric.
So, if I git clone the repository and then add some code in ./esrally/ like:

import os

class Stats:
    def __init__(self, d=None):
        self.segment_count = self.v(d, "segment_count")
        # read the round time from an environment variable set before the race;
        # .get() avoids a KeyError when the variable is unset
        self.round_time = os.environ.get("RALLY_ROUND_TIME")

will the index rally-results-* then include round_time: xxx?

If it works, I want to know how to build and install Rally from source.


The rally-metrics-* indices contain a couple of documents per operation, and there should be a field on each of them that indicates whether the operation was successful or not. You should be able to use this to report on the error rate.
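As a hedged sketch (the exact field name in your mapping may differ, so check your own rally-metrics-* documents), an error rate can be derived from per-operation docs carrying such a success flag:

```python
# Compute an error rate from per-operation metric documents.
# The "meta"/"success" layout here is an assumption for illustration;
# inspect your rally-metrics-* mapping for the actual field name.
docs = [
    {"operation": "index-append", "meta": {"success": True}},
    {"operation": "index-append", "meta": {"success": False}},
    {"operation": "index-append", "meta": {"success": True}},
    {"operation": "index-append", "meta": {"success": True}},
]
failed = sum(1 for doc in docs if not doc["meta"]["success"])
error_rate = failed / len(docs)
print(error_rate)  # 0.25
```

In Kibana the same ratio can be built as a visualization over that flag instead of in code.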


I used virtualenv to install Rally (version 1.0.0); the source code is located at ./lib/python3.4/site-packages/esrally.

In ./, I found the code that indexes the results into Elasticsearch, so I set a breakpoint:

class EsResultsStore:
    """
    Stores the results of a race in a format that is better suited for reporting with Kibana.
    """
    INDEX_PREFIX = "rally-results-"
    RESULTS_DOC_TYPE = "results"

    def __init__(self, cfg, client_factory_class=EsClientFactory, index_template_provider_class=IndexTemplateProvider):
        """
        Creates a new results store.

        :param cfg: The config object. Mandatory.
        :param client_factory_class: This parameter is optional and needed for testing.
        :param index_template_provider_class: This parameter is optional and needed for testing.
        """
        self.cfg = cfg
        self.trial_timestamp = cfg.opts("system", "time.start")
        self.client = client_factory_class(cfg).create()
        self.index_template_provider = index_template_provider_class(cfg)

    def store_results(self, race):
        # always update the mapping to the latest version
        import pdb
        pdb.set_trace()
        self.client.put_template("rally-results", self.index_template_provider.results_template())
        self.client.bulk_index(index=self.index_name(), doc_type=EsResultsStore.RESULTS_DOC_TYPE, items=race.to_result_dicts())

    def index_name(self):
        return "%s%04d-%02d" % (EsResultsStore.INDEX_PREFIX, self.trial_timestamp.year, self.trial_timestamp.month)

It does not reach this step, but the summary report has been indexed into Elasticsearch anyway.

Then, in ./, I found that you use godaddy/Thespian to implement the actor model. I also set a breakpoint:

def race(cfg, sources=False, build=False, distribution=False, external=False, docker=False):
    logger = logging.getLogger(__name__)
    # at this point an actor system has to run and we should only join
    actor_system = actor.bootstrap_actor_system(try_join=True)
    benchmark_actor = actor_system.createActor(BenchmarkActor, targetActorRequirements={"coordinator": True})
    try:
        import pdb
        pdb.set_trace()
        result = actor_system.ask(benchmark_actor, Setup(cfg, sources, build, distribution, external, docker))
        if isinstance(result, Success):
            logger.info("Benchmark has finished successfully.")
        # may happen if one of the load generators has detected that the user has cancelled the benchmark.
        elif isinstance(result, actor.BenchmarkCancelled):
            logger.info("User has cancelled the benchmark (detected by actor).")
        elif isinstance(result, actor.BenchmarkFailure):
            logger.error("A benchmark failure has occurred")
            raise exceptions.RallyError(result.message, result.cause)
        else:
            raise exceptions.RallyError("Got an unexpected result during benchmarking: [%s]." % str(result))
    except KeyboardInterrupt:
        logger.info("User has cancelled the benchmark (detected by race control).")
        # notify the coordinator so it can properly handle this state. Do it blocking so we don't have a race between this message
        # and the actor exit request.
        actor_system.ask(benchmark_actor, actor.BenchmarkCancelled())
    finally:
        logger.info("Telling benchmark actor to exit.")
        actor_system.tell(benchmark_actor, thespian.actors.ActorExitRequest())

When I type n in debug mode, it just runs and then returns success!

Can you tell me what happened? Why does it seem not to run the code that indexes the summary report into Elasticsearch, even though the report has in fact been indexed?
Now, I want to add some key-value pairs; where should I make the change?



I presume that you used pdb.set_trace() and not pdb.set_trtace(), as the latter is not a valid attribute in the module.

As you correctly noticed, Rally relies on the Thespian actor system and by default uses the multiprocTCPBase over IPv4/TCP port 1900. As a result of this asynchronous, distributed actor model, you won't be able to use debuggers like pdb/ipdb, since Rally launches separate processes and communication happens over TCP. If you insist on setting breakpoints, you can try the simpleSystemBase for troubleshooting; this runs the actors synchronously but won't support running multiple Rally (load generator) processes on different machines.
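A minimal illustration of why the breakpoint never fires (using plain multiprocessing rather than Thespian): the interesting work runs in a child process, so a pdb session started in the parent process never intercepts it:

```python
import multiprocessing

def store_results(queue):
    # Runs in a separate process: a breakpoint set in the parent
    # process before start() would never stop execution here.
    queue.put("results indexed")

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=store_results, args=(queue,))
    worker.start()
    print(queue.get())  # results indexed
    worker.join()
```

The parent only sees the message the child sends back, which is why stepping with n in the coordinator process "just runs and returns success".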

The best, most reliable and fastest way to develop is to add corresponding test cases in metrics_test. Depending on which index you want to add your additional metrics to, you'll need to adjust the test cases in either the EsMetricsTests or the EsResultsStoreTests class.
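For example (a hypothetical sketch; the real tests mock the Elasticsearch client and call store_results() with a full race object), a new test could assert that the extra key-value pair survives into the result documents:

```python
import unittest

class EsResultsStoreTests(unittest.TestCase):
    # Hypothetical sketch: the dict below stands in for one entry of
    # race.to_result_dicts(); the real test would build a race fixture
    # and inspect what is passed to the mocked bulk_index call.
    def test_results_include_round_time(self):
        result_doc = {"name": "segment_count", "value": 5,
                      "round_time": "20180927T070012Z"}
        self.assertEqual("20180927T070012Z", result_doc["round_time"])
```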



I am on holiday these days; I am sorry for the late reply.

That was a spelling mistake when I asked the question; in fact I used pdb.set_trace().

I found that modifying the source code may not be a good way to solve this problem; I will try other approaches. Thanks a lot.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.