Span duration of an HTTP GET calculated wrongly?

The duration of HTTP client spans is the time to first byte.
There are several reasons for this:

  • From an instrumentation perspective, there is no guarantee that the client will actually consume the response body. If it doesn't, we would end up with a span that is started but never stopped.
  • For many HTTP clients it is very difficult to find an instrumentation hook that fires when the response has been fully consumed.
  • The time until the response body has been fully consumed also depends heavily on the client/consumer: because the data is streamed, the consumer's pace is a big factor in how quickly the response is fully consumed. For example, if you synchronously issue DB calls (or do something else slow) while streaming the response data, you end up with a duration that has nothing to do with the speed of the server or network.
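The last point can be illustrated with a small self-contained sketch (plain Python, no real HTTP library; `fake_response_chunks` and the delay values are made up for illustration). A slow consumer inflates the "full consumption" time far beyond the time to first byte, even though the simulated server is unchanged:

```python
import time

def fake_response_chunks(n_chunks=5, server_delay=0.01):
    """Simulate a server streaming the response body in chunks."""
    for _ in range(n_chunks):
        time.sleep(server_delay)          # per-chunk server/network time
        yield b"x" * 1024

def measure(consumer_delay=0.05):
    start = time.perf_counter()
    chunks = fake_response_chunks()
    next(chunks)                          # first byte has arrived
    ttfb = time.perf_counter() - start
    for _ in chunks:                      # slow consumer, e.g. a sync DB call per chunk
        time.sleep(consumer_delay)
    total = time.perf_counter() - start
    return ttfb, total

ttfb, total = measure()
print(f"time to first byte: {ttfb:.3f}s, full consumption: {total:.3f}s")
```

With the numbers above, most of the total duration is consumer-side delay, which is why it would be misleading to report it as the server's response time.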

If you are interested in the total time until the response has been fully received and processed, you need to do exactly what you suggested: manually instrument a method that finishes after the response has been consumed.
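A minimal sketch of that manual instrumentation might look like the following. The `span` context manager here is a hypothetical stand-in for your tracer's API (for example, OpenTelemetry's `tracer.start_as_current_span`); the point is only that the span wraps both the request and the full consumption of the body:

```python
import time
from contextlib import contextmanager

@contextmanager
def span(name):
    # Stand-in for your tracer's span API; here it just measures wall time.
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{name}: {time.perf_counter() - start:.3f}s")

def fetch_and_process(chunks):
    # The span covers issuing the request AND consuming the streamed body,
    # so its duration reflects the total time you are interested in.
    with span("GET /data (incl. body consumption)"):
        body = b"".join(chunks)           # consume the streamed response
        return len(body)

fetch_and_process(iter([b"hello ", b"world"]))
```

With a real agent you would replace `span` with the agent's own API and keep the structure the same: start the span before the request, end it only after the body has been fully read.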