Higher throughput on standalone nodes than on multiple instances sharing the same hardware

We gathered the stats below with esrally, comparing multiple Elasticsearch instances running on the same machine against standalone instances. Everything else was identical, including the configuration and cluster size.
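
For reference, this kind of comparison can be produced with Rally's built-in compare command. A minimal sketch, assuming Rally 2.x syntax and placeholder host names:

```sh
# Run the same track once against each cluster (hosts are placeholders).
esrally race --track=metricbeat --pipeline=benchmark-only \
  --target-hosts=baseline-node-1:9200 --user-tag="setup:baseline"
esrally race --track=metricbeat --pipeline=benchmark-only \
  --target-hosts=contender-node-1:9200 --user-tag="setup:contender"

# Look up the race IDs, then diff the two runs.
esrally list races
esrally compare --baseline=<baseline-race-id> --contender=<contender-race-id>
```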


|                                                        Metric |                        Task |        Baseline |       Contender |        Diff |   Unit |   Diff % |
|--------------------------------------------------------------:|----------------------------:|----------------:|----------------:|------------:|-------:|---------:|
|                    Cumulative indexing time of primary shards |                             |    43.7738      |    43.0416      |    -0.73215 |    min |   -1.67% |
|             Min cumulative indexing time across primary shard |                             |     3.51615     |     2.784       |    -0.73215 |    min |  -20.82% |
|          Median cumulative indexing time across primary shard |                             |    21.8869      |    21.5208      |    -0.36608 |    min |   -1.67% |
|             Max cumulative indexing time across primary shard |                             |    40.2576      |    40.2576      |     0       |    min |    0.00% |
|           Cumulative indexing throttle time of primary shards |                             |     0           |     0           |     0       |    min |    0.00% |
|    Min cumulative indexing throttle time across primary shard |                             |     0           |     0           |     0       |    min |    0.00% |
| Median cumulative indexing throttle time across primary shard |                             |     0           |     0           |     0       |    min |    0.00% |
|    Max cumulative indexing throttle time across primary shard |                             |     0           |     0           |     0       |    min |    0.00% |
|                       Cumulative merge time of primary shards |                             |    13.5999      |    13.5093      |    -0.09058 |    min |   -0.67% |
|                      Cumulative merge count of primary shards |                             |    61           |    61           |     0       |        |    0.00% |
|                Min cumulative merge time across primary shard |                             |     0.8152      |     0.724617    |    -0.09058 |    min |  -11.11% |
|             Median cumulative merge time across primary shard |                             |     6.79993     |     6.75464     |    -0.04529 |    min |   -0.67% |
|                Max cumulative merge time across primary shard |                             |    12.7847      |    12.7847      |     0       |    min |    0.00% |
|              Cumulative merge throttle time of primary shards |                             |     3.96685     |     4.02123     |     0.05438 |    min |   +1.37% |
|       Min cumulative merge throttle time across primary shard |                             |     0.104667    |     0.15905     |     0.05438 |    min |  +51.96% |
|    Median cumulative merge throttle time across primary shard |                             |     1.98342     |     2.01062     |     0.02719 |    min |   +1.37% |
|       Max cumulative merge throttle time across primary shard |                             |     3.86218     |     3.86218     |     0       |    min |    0.00% |
|                     Cumulative refresh time of primary shards |                             |     0.668033    |     0.557933    |    -0.1101  |    min |  -16.48% |
|                    Cumulative refresh count of primary shards |                             |    71           |    69           |    -2       |        |   -2.82% |
|              Min cumulative refresh time across primary shard |                             |     0.20795     |     0.09785     |    -0.1101  |    min |  -52.95% |
|           Median cumulative refresh time across primary shard |                             |     0.334017    |     0.278967    |    -0.05505 |    min |  -16.48% |
|              Max cumulative refresh time across primary shard |                             |     0.460083    |     0.460083    |     0       |    min |    0.00% |
|                       Cumulative flush time of primary shards |                             |     1.08253     |     1.08363     |     0.0011  |    min |   +0.10% |
|                      Cumulative flush count of primary shards |                             |    27           |    26           |    -1       |        |   -3.70% |
|                Min cumulative flush time across primary shard |                             |     0.0331      |     0.0342      |     0.0011  |    min |   +3.32% |
|             Median cumulative flush time across primary shard |                             |     0.541267    |     0.541817    |     0.00055 |    min |   +0.10% |
|                Max cumulative flush time across primary shard |                             |     1.04943     |     1.04943     |     0       |    min |    0.00% |
|                                       Total Young Gen GC time |                             |     0.124       |     0.305       |     0.181   |      s | +145.97% |
|                                      Total Young Gen GC count |                             |     4           |     4           |     0       |        |    0.00% |
|                                         Total Old Gen GC time |                             |     0           |     0           |     0       |      s |    0.00% |
|                                        Total Old Gen GC count |                             |     0           |     0           |     0       |        |    0.00% |
|                                                    Store size |                             |     6.38997     |     6.29066     |    -0.09931 |     GB |   -1.55% |
|                                                 Translog size |                             |     1.02445e-07 |     1.02445e-07 |     0       |     GB |    0.00% |
|                                        Heap used for segments |                             |     1.7454      |     1.80674     |     0.06134 |     MB |   +3.51% |
|                                      Heap used for doc values |                             |     1.18086     |     1.24189     |     0.06104 |     MB |   +5.17% |
|                                           Heap used for terms |                             |     0.538544    |     0.538757    |     0.00021 |     MB |   +0.04% |
|                                           Heap used for norms |                             |     0           |     0           |     0       |     MB |    0.00% |
|                                          Heap used for points |                             |     0           |     0           |     0       |     MB |    0.00% |
|                                   Heap used for stored fields |                             |     0.0259933   |     0.0260849   |     9e-05   |     MB |   +0.35% |
|                                                 Segment count |                             |    49           |    49           |     0       |        |    0.00% |
|                                   Total Ingest Pipeline count |                             |     0           |     0           |     0       |        |    0.00% |
|                                    Total Ingest Pipeline time |                             |     0           |     0           |     0       |     ms |    0.00% |
|                                  Total Ingest Pipeline failed |                             |     0           |     0           |     0       |        |    0.00% |
|                                                Min Throughput |                index-append |  2731.93        |  2326.09        |  -405.839   | docs/s |  -14.86% |
|                                               Mean Throughput |                index-append | 29932.9         | 27532.3         | -2400.57    | docs/s |   -8.02% |
|                                             Median Throughput |                index-append | 32342.8         | 30138.8         | -2203.97    | docs/s |   -6.81% |
|                                                Max Throughput |                index-append | 37778           | 35375           | -2403       | docs/s |   -6.36% |
|                                       50th percentile latency |                index-append |  1644.41        |  1734.2         |    89.784   |     ms |   +5.46% |
|                                       90th percentile latency |                index-append |  2259.92        |  2836.4         |   576.48    |     ms |  +25.51% |
|                                       99th percentile latency |                index-append |  3795.3         |  4025.75        |   230.444   |     ms |   +6.07% |
|                                      100th percentile latency |                index-append |  3850.48        |  4267.93        |   417.455   |     ms |  +10.84% |
|                                  50th percentile service time |                index-append |  1644.41        |  1734.2         |    89.784   |     ms |   +5.46% |
|                                  90th percentile service time |                index-append |  2259.92        |  2836.4         |   576.48    |     ms |  +25.51% |
|                                  99th percentile service time |                index-append |  3795.3         |  4025.75        |   230.444   |     ms |   +6.07% |
|                                 100th percentile service time |                index-append |  3850.48        |  4267.93        |   417.455   |     ms |  +10.84% |
|                                                    error rate |                index-append |     0           |     0           |     0       |      % |    0.00% |
|                                                Min Throughput |         auto-date-histogram |     2.01315     |     2.01316     |     1e-05   |  ops/s |    0.00% |
|                                               Mean Throughput |         auto-date-histogram |     2.02169     |     2.02167     |    -2e-05   |  ops/s |   -0.00% |
|                                             Median Throughput |         auto-date-histogram |     2.01971     |     2.0197      |    -2e-05   |  ops/s |   -0.00% |
|                                                Max Throughput |         auto-date-histogram |     2.03902     |     2.03898     |    -4e-05   |  ops/s |   -0.00% |
|                                       50th percentile latency |         auto-date-histogram |     4.86588     |     4.62555     |    -0.24033 |     ms |   -4.94% |
|                                       90th percentile latency |         auto-date-histogram |     7.57353     |     7.5649      |    -0.00863 |     ms |   -0.11% |
|                                       99th percentile latency |         auto-date-histogram |    11.5494      |    10.8398      |    -0.70967 |     ms |   -6.14% |
|                                      100th percentile latency |         auto-date-histogram |    13.3216      |    71.4137      |    58.092   |     ms | +436.07% |
|                                  50th percentile service time |         auto-date-histogram |     3.55805     |     3.31439     |    -0.24366 |     ms |   -6.85% |
|                                  90th percentile service time |         auto-date-histogram |     5.74882     |     6.31829     |     0.56948 |     ms |   +9.91% |
|                                  99th percentile service time |         auto-date-histogram |    10.7808      |     9.29842     |    -1.48233 |     ms |  -13.75% |
|                                 100th percentile service time |         auto-date-histogram |    12.092       |    70.4213      |    58.3293  |     ms | +482.38% |
|                                                    error rate |         auto-date-histogram |     0           |     0           |     0       |      % |    0.00% |
|                                                Min Throughput | auto-date-histogram-with-tz |     2.01323     |     2.01326     |     3e-05   |  ops/s |    0.00% |
|                                               Mean Throughput | auto-date-histogram-with-tz |     2.02184     |     2.02187     |     3e-05   |  ops/s |    0.00% |
|                                             Median Throughput | auto-date-histogram-with-tz |     2.01985     |     2.01986     |     1e-05   |  ops/s |    0.00% |
|                                                Max Throughput | auto-date-histogram-with-tz |     2.03931     |     2.03933     |     2e-05   |  ops/s |    0.00% |
|                                       50th percentile latency | auto-date-histogram-with-tz |     4.92118     |     4.4825      |    -0.43868 |     ms |   -8.91% |
|                                       90th percentile latency | auto-date-histogram-with-tz |     7.21901     |     7.26418     |     0.04518 |     ms |   +0.63% |
|                                       99th percentile latency | auto-date-histogram-with-tz |     8.21246     |    10.112       |     1.89957 |     ms |  +23.13% |
|                                      100th percentile latency | auto-date-histogram-with-tz |     8.9293      |    10.154       |     1.2247  |     ms |  +13.72% |
|                                  50th percentile service time | auto-date-histogram-with-tz |     3.70096     |     3.22834     |    -0.47262 |     ms |  -12.77% |
|                                  90th percentile service time | auto-date-histogram-with-tz |     5.66746     |     5.90415     |     0.23669 |     ms |   +4.18% |
|                                  99th percentile service time | auto-date-histogram-with-tz |     6.6733      |     8.91066     |     2.23736 |     ms |  +33.53% |
|                                 100th percentile service time | auto-date-histogram-with-tz |     6.79047     |     8.9855      |     2.19503 |     ms |  +32.33% |
|                                                    error rate | auto-date-histogram-with-tz |     0           |     0           |     0       |      % |    0.00% |
|                                                Min Throughput |              date-histogram |     2.01305     |     2.01326     |     0.00022 |  ops/s |   +0.01% |
|                                               Mean Throughput |              date-histogram |     2.02154     |     2.0219      |     0.00036 |  ops/s |   +0.02% |
|                                             Median Throughput |              date-histogram |     2.01956     |     2.01989     |     0.00033 |  ops/s |   +0.02% |
|                                                Max Throughput |              date-histogram |     2.03874     |     2.03935     |     0.00061 |  ops/s |   +0.03% |
|                                       50th percentile latency |              date-histogram |     4.77303     |     4.50845     |    -0.26458 |     ms |   -5.54% |
|                                       90th percentile latency |              date-histogram |     6.30559     |     6.92416     |     0.61857 |     ms |   +9.81% |
|                                       99th percentile latency |              date-histogram |     7.19343     |     8.1561      |     0.96267 |     ms |  +13.38% |
|                                      100th percentile latency |              date-histogram |     7.69477     |     8.39413     |     0.69936 |     ms |   +9.09% |
|                                  50th percentile service time |              date-histogram |     3.46664     |     3.19766     |    -0.26898 |     ms |   -7.76% |
|                                  90th percentile service time |              date-histogram |     4.82828     |     5.41109     |     0.58281 |     ms |  +12.07% |
|                                  99th percentile service time |              date-histogram |     5.48328     |     6.07429     |     0.59101 |     ms |  +10.78% |
|                                 100th percentile service time |              date-histogram |     5.56259     |     6.10173     |     0.53914 |     ms |   +9.69% |
|                                                    error rate |              date-histogram |     0           |     0           |     0       |      % |    0.00% |
|                                                Min Throughput |      date-histogram-with-tz |     2.01326     |     2.01325     |    -1e-05   |  ops/s |   -0.00% |
|                                               Mean Throughput |      date-histogram-with-tz |     2.02184     |     2.02186     |     2e-05   |  ops/s |    0.00% |
|                                             Median Throughput |      date-histogram-with-tz |     2.01986     |     2.01987     |     1e-05   |  ops/s |    0.00% |
|                                                Max Throughput |      date-histogram-with-tz |     2.03929     |     2.03933     |     4e-05   |  ops/s |    0.00% |
|                                       50th percentile latency |      date-histogram-with-tz |     4.90122     |     4.37037     |    -0.53084 |     ms |  -10.83% |
|                                       90th percentile latency |      date-histogram-with-tz |     6.55565     |     7.09534     |     0.53969 |     ms |   +8.23% |
|                                       99th percentile latency |      date-histogram-with-tz |     7.09138     |     7.84138     |     0.75001 |     ms |  +10.58% |
|                                      100th percentile latency |      date-histogram-with-tz |     7.68749     |     8.32968     |     0.6422  |     ms |   +8.35% |
|                                  50th percentile service time |      date-histogram-with-tz |     3.49093     |     3.02958     |    -0.46135 |     ms |  -13.22% |
|                                  90th percentile service time |      date-histogram-with-tz |     5.02544     |     5.54303     |     0.51759 |     ms |  +10.30% |
|                                  99th percentile service time |      date-histogram-with-tz |     5.72166     |     6.01232     |     0.29067 |     ms |   +5.08% |
|                                 100th percentile service time |      date-histogram-with-tz |     5.72808     |     6.13651     |     0.40843 |     ms |   +7.13% |
|                                                    error rate |      date-histogram-with-tz |     0           |     0           |     0       |      % |    0.00% |

It is not clear to me exactly what you have benchmarked and compared. Can you please provide some additional information about the configuration of the benchmark, the track used, the hardware and the node configuration for the two scenarios?


Track: metricbeat
Cluster 1 (Baseline): 8 data nodes, each running on its own virtual machine (28 GB JVM heap and 125 GB total memory per VM)
Cluster 2 (Contender): 8 data nodes running as four processes per bare-metal host, each process given pre-defined compute (28 GB JVM heap and 125 GB total memory per process)

Both clusters have one master node and one search node of the same configuration.
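
For context on the contender setup: co-locating several nodes on one host generally means giving each process its own ports, data path, and CPU budget. A hypothetical elasticsearch.yml for one of the four co-located instances might look like this (all values are illustrative, not the actual config used here):

```yaml
# elasticsearch.yml for instance 1 of 4 on a shared bare-metal host
cluster.name: contender-cluster
node.name: bm1-node1
node.processors: 18          # size thread pools for 18 cores ("processors" on older versions)
http.port: 9201              # each co-located instance needs unique ports
transport.port: 9301
path.data: /data/es-node1    # separate data and log paths per instance
path.logs: /var/log/es-node1
```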

What are you worried about specifically? Median index-append throughput? Throughput at the higher percentiles is inherently noisier, so I would ignore it for now.

Have you tried running this test three times for each configuration? That would tell you how much variability there is for a given configuration. It may be higher than the ~7% median indexing throughput difference you're seeing here.
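
For example, a simple way to measure run-to-run variability is to repeat the race a few times per configuration and tag each run (host name and tags below are illustrative):

```sh
# Three tagged runs against the same cluster; compare them to each
# other with `esrally compare` to estimate the noise floor.
for i in 1 2 3; do
  esrally race --track=metricbeat --pipeline=benchmark-only \
    --target-hosts=baseline-node-1:9200 \
    --user-tag="setup:baseline,run:$i"
done
```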


No, I was just seeing better performance from the VM-based independent cluster than from the processes running on bare metal. Also, median index-append throughput drops as much as 20% below the standalone cluster when the test is run multiple times.

Hey, from the comment above I understand that median index-append throughput is not of prime significance for comparing the baseline performance of the two clusters. Please let me know which metrics I should compare and rely on when testing a VM-based cluster against a PID-based cluster.

No, median indexing throughput is actually a good metric to use. We'll just need even more detail to understand why the performance could be different.

  • Are you using the same hardware for both clusters? Is it actually the same machine or two different ones?
  • Is it in the cloud? (AWS has a bare-metal offering.)
  • What virtual machine technology are you using? Is there any isolation between them?
  • Can you be clearer about the 20% difference? The comparison you posted shows a 6.8% difference.

Yes, we are using the same hardware for both clusters.
PID-based cluster: two bare-metal hosts, each running four Elasticsearch instances with pre-defined resources (18 cores, 28 GB JVM heap, and 125 GB total memory per process).
VM-based cluster: the same two bare-metal hosts, each running four VMs, with each VM running the Elasticsearch service independently. Each VM is allocated 18 cores, 28 GB JVM heap, and 125 GB total memory.

We are using an on-premise cloud.
The VM technology is KVM, which provides isolation.
These are the only differences between the two clusters being compared.
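
As an aside, "pre-defined compute" for a plain OS process is usually enforced with CPU affinity and cgroup limits rather than by the JVM itself. A hypothetical systemd unit fragment for one of the four instances (core range, paths, and unit name are assumptions, not the actual setup):

```ini
# /etc/systemd/system/elasticsearch-node1.service (sketch)
[Service]
CPUAffinity=0-17                              # pin node 1 to its own 18 cores
MemoryMax=125G                                # cgroup memory ceiling for the process
Environment="ES_JAVA_OPTS=-Xms28g -Xmx28g"    # fixed 28 GB heap
```

If the processes are not pinned this way, all four JVMs contend freely for the same cores and page cache, which by itself can explain lower and noisier indexing throughput than isolated KVM guests.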
Here are the comparison results showing the ~20% gap:

|                                                        Metric |                        Task |    Baseline |   Contender |      Diff |   Unit |    Diff % |
|--------------------------------------------------------------:|----------------------------:|------------:|------------:|----------:|-------:|----------:|
|                    Cumulative indexing time of primary shards |                             |     2.57055 |     3.59602 |   1.02547 |    min |   +39.89% |
|             Min cumulative indexing time across primary shard |                             |     2.57055 |     3.59602 |   1.02547 |    min |   +39.89% |
|          Median cumulative indexing time across primary shard |                             |     2.57055 |     3.59602 |   1.02547 |    min |   +39.89% |
|             Max cumulative indexing time across primary shard |                             |     2.57055 |     3.59602 |   1.02547 |    min |   +39.89% |
|           Cumulative indexing throttle time of primary shards |                             |           0 |           0 |         0 |    min |     0.00% |
|    Min cumulative indexing throttle time across primary shard |                             |           0 |           0 |         0 |    min |     0.00% |
| Median cumulative indexing throttle time across primary shard |                             |           0 |           0 |         0 |    min |     0.00% |
|    Max cumulative indexing throttle time across primary shard |                             |           0 |           0 |         0 |    min |     0.00% |
|                       Cumulative merge time of primary shards |                             |     0.74375 |    0.661917 |  -0.08183 |    min |   -11.00% |
|                      Cumulative merge count of primary shards |                             |          13 |          14 |         1 |        |    +7.69% |
|                Min cumulative merge time across primary shard |                             |     0.74375 |    0.661917 |  -0.08183 |    min |   -11.00% |
|             Median cumulative merge time across primary shard |                             |     0.74375 |    0.661917 |  -0.08183 |    min |   -11.00% |
|                Max cumulative merge time across primary shard |                             |     0.74375 |    0.661917 |  -0.08183 |    min |   -11.00% |
|              Cumulative merge throttle time of primary shards |                             |      0.1944 |   0.0988167 |  -0.09558 |    min |   -49.17% |
|       Min cumulative merge throttle time across primary shard |                             |      0.1944 |   0.0988167 |  -0.09558 |    min |   -49.17% |
|    Median cumulative merge throttle time across primary shard |                             |      0.1944 |   0.0988167 |  -0.09558 |    min |   -49.17% |
|       Max cumulative merge throttle time across primary shard |                             |      0.1944 |   0.0988167 |  -0.09558 |    min |   -49.17% |
|                     Cumulative refresh time of primary shards |                             |   0.0966833 |    0.110383 |    0.0137 |    min |   +14.17% |
|                    Cumulative refresh count of primary shards |                             |          23 |          23 |         0 |        |     0.00% |
|              Min cumulative refresh time across primary shard |                             |   0.0966833 |    0.110383 |    0.0137 |    min |   +14.17% |
|           Median cumulative refresh time across primary shard |                             |   0.0966833 |    0.110383 |    0.0137 |    min |   +14.17% |
|              Max cumulative refresh time across primary shard |                             |   0.0966833 |    0.110383 |    0.0137 |    min |   +14.17% |
|                       Cumulative flush time of primary shards |                             |      0.0288 |     0.04165 |   0.01285 |    min |   +44.62% |
|                      Cumulative flush count of primary shards |                             |           3 |           3 |         0 |        |     0.00% |
|                Min cumulative flush time across primary shard |                             |      0.0288 |     0.04165 |   0.01285 |    min |   +44.62% |
|             Median cumulative flush time across primary shard |                             |      0.0288 |     0.04165 |   0.01285 |    min |   +44.62% |
|                Max cumulative flush time across primary shard |                             |      0.0288 |     0.04165 |   0.01285 |    min |   +44.62% |
|                                                    Store size |                             |    0.480355 |    0.485348 |   0.00499 |     GB |    +1.04% |
|                                                 Translog size |                             | 5.12227e-08 | 5.12227e-08 |         0 |     GB |     0.00% |
|                                        Heap used for segments |                             |     1.25171 |     1.05435 |  -0.19736 |     MB |   -15.77% |
|                                      Heap used for doc values |                             |      1.0503 |    0.863602 |   -0.1867 |     MB |   -17.78% |
|                                           Heap used for terms |                             |    0.191925 |    0.181824 |   -0.0101 |     MB |    -5.26% |
|                                           Heap used for norms |                             |           0 |           0 |         0 |     MB |     0.00% |
|                                          Heap used for points |                             |           0 |           0 |         0 |     MB |     0.00% |
|                                   Heap used for stored fields |                             |  0.00948334 |  0.00892639 |  -0.00056 |     MB |    -5.87% |
|                                                 Segment count |                             |          19 |          18 |        -1 |        |    -5.26% |
|                                                Min Throughput |                index-append |     5055.68 |     2232.47 |  -2823.21 | docs/s |   -55.84% |
|                                               Mean Throughput |                index-append |     33547.2 |     25537.9 |  -8009.28 | docs/s |   -23.87% |
|                                             Median Throughput |                index-append |     35962.7 |     28565.3 |  -7397.39 | docs/s |   -20.57% |
|                                                Max Throughput |                index-append |     40846.7 |     31209.2 |  -9637.55 | docs/s |   -23.59% |
|                                       50th percentile latency |                index-append |     1655.98 |     2047.22 |   391.239 |     ms |   +23.63% |
|                                       90th percentile latency |                index-append |     2260.32 |     4269.73 |   2009.41 |     ms |   +88.90% |
|                                       99th percentile latency |                index-append |     3871.94 |     4832.79 |   960.848 |     ms |   +24.82% |
|                                      100th percentile latency |                index-append |     3930.96 |     4870.73 |   939.777 |     ms |   +23.91% |
|                                  50th percentile service time |                index-append |     1655.98 |     2047.22 |   391.239 |     ms |   +23.63% |
|                                  90th percentile service time |                index-append |     2260.32 |     4269.73 |   2009.41 |     ms |   +88.90% |
|                                  99th percentile service time |                index-append |     3871.94 |     4832.79 |   960.848 |     ms |   +24.82% |
|                                 100th percentile service time |                index-append |     3930.96 |     4870.73 |   939.777 |     ms |   +23.91% |
|                                                    error rate |                index-append |           0 |           0 |         0 |      % |     0.00% |
|                                                Min Throughput |         auto-date-histogram |     2.01325 |     2.01089 |  -0.00236 |  ops/s |    -0.12% |
|                                               Mean Throughput |         auto-date-histogram |     2.02182 |     2.01796 |  -0.00386 |  ops/s |    -0.19% |
|                                             Median Throughput |         auto-date-histogram |     2.01983 |      2.0163 |  -0.00353 |  ops/s |    -0.17% |
|                                                Max Throughput |         auto-date-histogram |      2.0392 |     2.03225 |  -0.00694 |  ops/s |    -0.34% |
|                                       50th percentile latency |         auto-date-histogram |     4.66293 |     7.63331 |   2.97038 |     ms |   +63.70% |
|                                       90th percentile latency |         auto-date-histogram |     6.35046 |     8.56961 |   2.21914 |     ms |   +34.94% |
|                                       99th percentile latency |         auto-date-histogram |     9.34922 |     75.7978 |   66.4486 |     ms |  +710.74% |
|                                      100th percentile latency |         auto-date-histogram |      10.078 |     91.4257 |   81.3477 |     ms |  +807.18% |
|                                  50th percentile service time |         auto-date-histogram |     3.21361 |     6.28406 |   3.07046 |     ms |   +95.55% |
|                                  90th percentile service time |         auto-date-histogram |     4.91479 |     7.21205 |   2.29726 |     ms |   +46.74% |
|                                  99th percentile service time |         auto-date-histogram |     8.17802 |      75.102 |    66.924 |     ms |  +818.34% |
|                                 100th percentile service time |         auto-date-histogram |     8.30864 |     89.8655 |   81.5569 |     ms |  +981.59% |
|                                                    error rate |         auto-date-histogram |           0 |           0 |         0 |      % |     0.00% |
|                                                Min Throughput | auto-date-histogram-with-tz |     2.01325 |      2.0131 |  -0.00014 |  ops/s |    -0.01% |
|                                               Mean Throughput | auto-date-histogram-with-tz |     2.02186 |     2.02155 |  -0.00032 |  ops/s |    -0.02% |
|                                             Median Throughput | auto-date-histogram-with-tz |     2.01987 |     2.01959 |  -0.00029 |  ops/s |    -0.01% |
|                                                Max Throughput | auto-date-histogram-with-tz |     2.03929 |     2.03878 |  -0.00051 |  ops/s |    -0.03% |
|                                       50th percentile latency | auto-date-histogram-with-tz |     4.93497 |     7.33824 |   2.40327 |     ms |   +48.70% |
|                                       90th percentile latency | auto-date-histogram-with-tz |     7.07502 |     8.89753 |   1.82252 |     ms |   +25.76% |
|                                       99th percentile latency | auto-date-histogram-with-tz |     9.34608 |     113.578 |   104.232 |     ms | +1115.25% |
|                                      100th percentile latency | auto-date-histogram-with-tz |     9.54569 |     123.096 |   113.551 |     ms | +1189.55% |
|                                  50th percentile service time | auto-date-histogram-with-tz |     3.42063 |     6.00333 |    2.5827 |     ms |   +75.50% |
|                                  90th percentile service time | auto-date-histogram-with-tz |     4.56327 |     7.34096 |    2.7777 |     ms |   +60.87% |
|                                  99th percentile service time | auto-date-histogram-with-tz |     7.58444 |     112.557 |   104.973 |     ms | +1384.05% |
|                                 100th percentile service time | auto-date-histogram-with-tz |     7.70832 |     120.909 |   113.201 |     ms | +1468.55% |
|                                                    error rate | auto-date-histogram-with-tz |           0 |           0 |         0 |      % |     0.00% |
|                                                Min Throughput |              date-histogram |      2.0133 |     2.01297 |  -0.00033 |  ops/s |    -0.02% |
|                                               Mean Throughput |              date-histogram |     2.02191 |     2.02139 |  -0.00052 |  ops/s |    -0.03% |
|                                             Median Throughput |              date-histogram |     2.01992 |     2.01943 |  -0.00049 |  ops/s |    -0.02% |
|                                                Max Throughput |              date-histogram |     2.03937 |     2.03848 |  -0.00089 |  ops/s |    -0.04% |
|                                       50th percentile latency |              date-histogram |      4.6287 |     7.10727 |   2.47857 |     ms |   +53.55% |
|                                       90th percentile latency |              date-histogram |     6.16298 |     10.1153 |   3.95227 |     ms |   +64.13% |
|                                       99th percentile latency |              date-histogram |     8.91471 |     18.0176 |   9.10287 |     ms |  +102.11% |
|                                      100th percentile latency |              date-histogram |     9.07047 |     245.346 |   236.275 |     ms | +2604.88% |
|                                  50th percentile service time |              date-histogram |     3.21997 |     5.77936 |   2.55939 |     ms |   +79.49% |
|                                  90th percentile service time |              date-histogram |     4.66965 |     7.87932 |   3.20967 |     ms |   +68.73% |
|                                  99th percentile service time |              date-histogram |      7.2958 |     17.1961 |   9.90035 |     ms |  +135.70% |
|                                 100th percentile service time |              date-histogram |     7.29699 |     244.156 |   236.859 |     ms | +3245.98% |
|                                                    error rate |              date-histogram |           0 |           0 |         0 |      % |     0.00% |
|                                                Min Throughput |      date-histogram-with-tz |     2.01329 |      2.0132 |  -0.00008 |  ops/s |     0.00% |
|                                               Mean Throughput |      date-histogram-with-tz |     2.02189 |     2.02174 |  -0.00016 |  ops/s |    -0.01% |
|                                             Median Throughput |      date-histogram-with-tz |     2.01989 |     2.01976 |  -0.00013 |  ops/s |    -0.01% |
|                                                Max Throughput |      date-histogram-with-tz |     2.03936 |     2.03907 |  -0.00029 |  ops/s |    -0.01% |
|                                       50th percentile latency |      date-histogram-with-tz |     4.60089 |     7.05579 |    2.4549 |     ms |   +53.36% |
|                                       90th percentile latency |      date-histogram-with-tz |     6.86334 |     9.54551 |   2.68218 |     ms |   +39.08% |
|                                       99th percentile latency |      date-histogram-with-tz |     8.96309 |     14.4501 |   5.48697 |     ms |   +61.22% |
|                                      100th percentile latency |      date-histogram-with-tz |     9.01794 |     87.7608 |   78.7429 |     ms |  +873.18% |
|                                  50th percentile service time |      date-histogram-with-tz |     3.19126 |     5.78776 |    2.5965 |     ms |   +81.36% |
|                                  90th percentile service time |      date-histogram-with-tz |     4.59122 |     8.09406 |   3.50284 |     ms |   +76.29% |
|                                  99th percentile service time |      date-histogram-with-tz |     7.40735 |      13.239 |   5.83168 |     ms |   +78.73% |
|                                 100th percentile service time |      date-histogram-with-tz |     7.45719 |     86.5293 |   79.0721 |     ms | +1060.35% |
|                                                    error rate |      date-histogram-with-tz |           0 |           0 |         0 |      % |     0.00% |
