Currently the latency calculated by Marvel does not match the values we calculated on the client side, so we'd like to know how Marvel calculates it.
I recently explained how this latency is calculated over at:
The gist is that we calculate latency by polling the
_nodes/stats API and fetching the total time spent indexing and the number of indexing operations, as well as the same pair for queries (search). Those counters are per shard, to be clear, and they are cumulative since node startup. We poll every
10s, so each value slowly grows over time.
So, to chart it, we take the derivative of each counter for every time bucket (each X-axis slice) we intend to chart, then divide the time derivative by the operation-count derivative to get the latency.
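To make that concrete, here's a minimal sketch of the bucket math. The (cumulative time, cumulative ops) pairs stand in for the indexing or query counters from _nodes/stats; the sampling and bucketing code itself is illustrative, not Marvel's actual implementation.

```python
def bucket_latency_ms(samples):
    """Per-bucket latency from cumulative counters.

    samples: list of (time_in_millis, op_count) pairs, one per
    collection interval (e.g. every 10s), each cumulative since
    node startup.

    For each adjacent pair of samples we take the derivative
    (the delta) of both counters, then divide time by ops.
    """
    latencies = []
    for (t0, n0), (t1, n1) in zip(samples, samples[1:]):
        ops = n1 - n0
        # Guard against buckets where no operations occurred.
        latencies.append((t1 - t0) / ops if ops else 0.0)
    return latencies

# Hypothetical cumulative indexing totals sampled every 10s:
samples = [(1000, 200), (1300, 260), (1900, 380)]
print(bucket_latency_ms(samples))  # [5.0, 5.0]
```

Each bucket here works out to 5 ms per operation: e.g. the first interval spent 300 ms indexing across 60 operations.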
Keep in mind that your client-side latency captures more (the full network round trip, for starters), so it is inherently a little more accurate from the client's perspective.
Hope that helps,