I am now running esrally 0.3.2 successfully, and I also get the final score. I have the problems below:
Firstly: I think the score contains too much information, but as a user I just need the most useful performance information about my Elasticsearch.
Secondly: I don't know the detailed meaning of many of the scores, such as:
1. Median Indexing Throughput [docs/s] | 12307 — what is the size of each doc? If one doc is 1 KB and another is 1 TB, the difference between the small one and the large one is huge.
2. Indexing time [min] | 164.045
Merge time [min] | 32.3815
Refresh time [min] | 8.82333
Flush time [min] | 1.63852
Merge throttle time [min] | 1.45482
Is Indexing time the time to create the index or not?
Is Flush time the time to flush the index or not?
3. Query latency default (90.0 percentile) [ms] | 68.8676
Query latency default (99.0 percentile) [ms] | 77.6009
Query latency default (100 percentile) [ms] | 78.8328
What's the meaning of the 90.0/99.0/100 percentiles? I don't know the difference between them (see the sketch after this list).
4. Median CPU usage (index) [%] | 887.7
Median CPU usage (stats) [%] | 94.9
Median CPU usage (search) [%] | 445.05
Why is the CPU usage percentage > 100%? I guess it may be related to the server having many cores?
5. Total Young Gen GC [s] | 89.121
Total Old Gen GC [s] | 12.274
I don't know the meaning of Gen GC; maybe I can google it.
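As a quick illustration of the percentile question: the p-th percentile latency is the value at or below which p% of all measured query latencies fall, so the 100th percentile is simply the slowest observed query. A minimal sketch (numpy and the synthetic samples are my own assumptions, purely for illustration):

```python
import numpy as np

# Illustrative only: synthetic latency samples, not Rally's real data.
latencies_ms = np.random.gamma(shape=9.0, scale=8.0, size=10_000)

for p in (90.0, 99.0, 100.0):
    print(f"{p:5.1f}th percentile: {np.percentile(latencies_ms, p):6.2f} ms")
# 90% of queries were at least as fast as the 90th percentile value;
# the 100th percentile is the single slowest query observed.
```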
I think that is in the eye of the beholder. Rally is used by developers of Elasticsearch, sysadmins, application developers, consultants, ... so there is no one-size-fits-all answer.
The documents depend on the track. You can look at them in ~/.rally/benchmarks/data/. E.g. for the "geonames" track, you have to look at ~/.rally/benchmarks/data/geonames.documents.json. Each line in this file is one document and they are usually similar in size.
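To check this yourself, here is a minimal sketch (assuming the geonames data file has already been downloaded to the path above) that prints the per-document size distribution:

```python
import os

# Inspect per-document sizes in a Rally track's source file; adjust the
# path for your own user and track. One JSON document per line.
path = os.path.expanduser("~/.rally/benchmarks/data/geonames.documents.json")

sizes = []
with open(path, "rb") as f:
    for line in f:
        sizes.append(len(line))

sizes.sort()
print("docs:", len(sizes))
print("min/median/max bytes:", sizes[0], sizes[len(sizes) // 2], sizes[-1])
```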
This is the amount of time that the benchmarked cluster spent indexing. Note that this is not wall clock time but total runtime (i.e. if you have two indexing threads running concurrently for 5 minutes, we'd report 10 minutes).
Total runtime of the young generation garbage collector and the old generation garbage collector during the benchmark.
Index size is the final index size on your disk.
Totally written is the number of bytes that have been written to disk. As Lucene merges segments in the background you usually write more bytes than your final index size.
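As a back-of-the-envelope illustration with the values reported later in this thread (index size 3.26797 GB, totally written 20.7748 GB):

```python
# Rough write-amplification estimate from the report values in this thread.
final_index_size_gb = 3.26797   # "Index size [GB]"
total_written_gb = 20.7748      # "Totally written [GB]"

amplification = total_written_gb / final_index_size_gb
print(f"bytes written / final index size ≈ {amplification:.1f}x")
# ≈ 6.4x: segment merges rewrite the same data several times on disk.
```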
Note that these are not the names in the command line report but rather the metric keys that are stored internally. If you use a dedicated metrics store you can create graphs in Kibana based on this information.
If anything is unclear, bug reports or pull requests are welcome.
Hi, @danielmitterdorfer:
I'm very glad to see the new metrics documentation; you are very efficient.
I have two questions about what you said:
First, the two metrics disk_io_write_bytes and disk_io_read_bytes are not included in my esrally result.
Second, how do I create graphs in Kibana based on this result information?
I have already installed Kibana, and it is running on port 5601. I have already used it to analyze data; do I just need to import the result data into Elasticsearch? If you have a simple method, please tell me.
disk_io_write_bytes is the name of the metric in the metrics store. It's reported on the command line as "Totally written" (and converted to GB so it's easier to read). We indeed do not report disk_io_read_bytes on the command line; I did not consider this metric important enough for a summary.
If you're really interested in detailed analysis, you should not just look at the command line report but instead set up a dedicated Elasticsearch cluster as a metrics store and point Kibana to it. The docs on advanced configuration in Rally show what you need to do to configure a dedicated Elasticsearch metrics store.
The raw information (i.e. the metrics records) is stored in the Elasticsearch metrics store. You then need to define queries in Kibana. The metrics documentation in Rally shows concrete examples of the structure of metrics records and also describes their meaning. Based on that, you need to write the Kibana queries in which you're interested. I have not added a tutorial or specific documentation on that, but you can look at the Kibana dashboard of our nightly benchmarks. Although we grant anonymous users only read-only rights, you can still explore everything and inspect which queries we used to build the graphs. That should get you started.
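As a rough illustration, once the dedicated metrics store is set up you could query the raw records directly before building Kibana visualizations. A minimal sketch with the Python Elasticsearch client; the index pattern ("rally-*") and the field names ("name", "value") are assumptions based on Rally's metrics documentation, so check your own metrics store for the exact names:

```python
from elasticsearch import Elasticsearch

# Pull raw Rally metrics records out of a dedicated metrics store.
es = Elasticsearch(["http://localhost:9200"])

resp = es.search(
    index="rally-*",  # assumed index pattern; verify against your store
    body={
        "query": {"term": {"name": "disk_io_write_bytes"}},
        "size": 10,
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```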
I am not sure which of the many graphs in the blog post you are referring to. Can you please post the link to the specific picture? For most pictures I mentioned the tool that was used in the text of the blog post. The only graph that requires some custom tooling is the JIT compiler events graph (as the jit telemetry device emits only text); I created that visualization only as a one-off. The rest is done either in Kibana or in Java Flight Recorder.
Glad that it worked out. The only thing that worries me is that your index size is steadily increasing. That does not look right (it should stay roughly constant; just check our nightly benchmark graph).
Hi, @danielmitterdorfer:
I think you need not worry, because I only ran it once; the Kibana result only shows the two results below:
one is final_index_size_bytes: Index size [GB] | 3.26797
the other is disk_io_write_bytes (or name:disk_io_write_bytes_index): Totally written [GB] | 20.7748
So, the results shown are correct.
Compared with your linked graph: because you have been testing for a long time, your index size is roughly constant.
Hi, @danielmitterdorfer:
I want to know the esrally running time, but I can't find it in the result:
Median Indexing Throughput [docs/s] | 12307
Index size [GB] | 3.30111
Totally written [GB] | 20.2123
Total Young Gen GC [s] | 89.121
Total Old Gen GC [s] | 12.274
You know, from the above result, I see that 20.2123 GB has already been written, the indexing throughput is 12307 docs/s, and every doc is about 40 bytes. So the writing time would be (20.2123 × 1024 × 1024 × 1024) / (12307 × 40) = 44086.276 s = 734.771 min = 12.25 h.
Clearly, the calculated result is not correct.
So, I want to know the correct running time of Rally.
Alternatively, any other useful parameter about the disk write rate would be helpful.
Thanks!
Hi, @danielmitterdorfer:
From the esrally log located at /home/elasticsearch/.rally/benchmarks/races/2016-08-12-01-59-58/LOCAL/logs, I get the time information as follows:
the esrally run begin time: 2016-08-12 01:59:58
the esrally run end time: 2016-08-12 02:27:52,
and the total time span is: 27m 54s.
20.2 GB have been written in total during the benchmark by Elasticsearch. However, this also includes Lucene's segment merges, and if you compare that figure to the final index size you can see that merges are a significant contributing factor. So the actual indexing operations are not the only cause of disk writes.
If you want to know the indexing rate in byte / s you should rather calculate 12307 [document / s] * 40 [byte / document] = 0.47MB/s (but this is just a very rough calculation).
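Expressed as a tiny calculation (the 40 bytes per document is the estimate from the question above, not a measured value):

```python
# Rough indexing-rate estimate from the values in this thread.
docs_per_second = 12307   # "Median Indexing Throughput [docs/s]"
bytes_per_doc = 40        # the user's own size estimate

rate_bytes = docs_per_second * bytes_per_doc
print(f"indexing rate ≈ {rate_bytes / 1024 / 1024:.2f} MB/s")  # ≈ 0.47 MB/s
```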
I understand you're looking for some new metric. For which purpose do you want to use it (i.e. what information do you gain from it)? Maybe we can come up with something suitable then.
Hi @danielmitterdorfer:
Thank you for your reply.
I just want to know the rate of writing data to disk.
So far, I have no concrete scenario; I just use esrally for Elasticsearch performance testing.
As you said, the indexing rate is about 0.47 MB/s; I think this rate may be equal to the rate of writing data to disk.
So far, only I use esrally for Elasticsearch performance testing in our team.
I'll test it for several days; if it's robust enough, I'll recommend that our team use it.
We could, for example, sample the number of bytes written once per second.
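A minimal sketch of that sampling idea (psutil is my assumption for illustration; it is not necessarily what Rally would use internally):

```python
import time

import psutil

# Sample the machine-wide number of bytes written to disk once per second.
# psutil.disk_io_counters() reports cumulative counters, so the delta
# between two samples is the write rate over that interval.
previous = psutil.disk_io_counters().write_bytes
for _ in range(10):
    time.sleep(1)
    current = psutil.disk_io_counters().write_bytes
    print(f"{(current - previous) / 1024 / 1024:.2f} MB/s written")
    previous = current
```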
No, the indexing rate is definitely not the same as the number of bytes written / second to disk. Elasticsearch (or rather Lucene) has to manage other data structures too, it compresses content and it also merges segments. All these things (and probably a lot more) cause differences between the number of bytes written to disk and the indexing rate in bytes/s.