Capturing and Visualizing Threaded HTTP Requests

I'm using the APM Python Flask agent and visualizing the results in Kibana. HTTP requests made on the main thread are captured, but it looks like threaded HTTP requests are not.

  1. I'm wondering if capturing threaded requests can be enabled?
  2. If not, what would be the recommended way to capture these threaded requests?
  3. How can parallel requests/spans be visualized on Kibana?

Hi @Alsheh

Sorry for the long reply time; I had to experiment a bit to figure out a proper answer :slight_smile:

The issue at play here is that we store the currently active transaction in a thread-local variable. This variable is then accessed wherever spans are created and is used to tie them all together. As the name implies, thread-locals are local to the thread and can't be accessed from another thread.
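
To illustrate the limitation with plain Python (this sketch uses threading.local directly and is unrelated to the agent's internals):

import threading

# A thread-local store, analogous in spirit to what the agent uses internally
store = threading.local()
store.transaction = "set on the main thread"

def worker():
    # Each thread gets its own copy of the thread-local storage, so the
    # value set on the main thread is not visible here.
    print(getattr(store, "transaction", None))  # prints None

thread = threading.Thread(target=worker)
thread.start()
thread.join()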

One way to get around this is to pass a reference to the current transaction to the other thread and set it as the current transaction there as well. How you do this differs a bit depending on how exactly you use threads. Here's an example of how you would do it, based on the ThreadPoolExecutor example from the Python docs:

import concurrent.futures
import urllib.request

from flask import Flask

from elasticapm.traces import execution_context

# The Flask app is assumed to be instrumented with the Elastic APM Flask agent
app = Flask(__name__)

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout, transaction):
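    # Make the request thread's transaction current in this worker thread
    # so that the spans created below are attached to it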
    execution_context.set_transaction(transaction)
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

@app.route("/process_urls")
def process_urls():
    # We can use a with statement to ensure threads are cleaned up promptly
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        # get current transaction from thread-local store
        transaction = execution_context.get_transaction()
        # Start the load operations and mark each future with its URL
        future_to_url = {executor.submit(load_url, url, 60, transaction): url for url in URLS}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
            except Exception as exc:
                print('%r generated an exception: %s' % (url, exc))
            else:
                print('%r page is %d bytes' % (url, len(data)))
    return "OK"

The APM UI is able to visualize this without issues, as it is built to handle languages/environments like Node.js that use concurrency heavily.

Note, however, that this is relatively untested territory, and the execution_context functions are undocumented, so they could change without prior warning in an upcoming release.
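
If you'd like to keep the dependency on that undocumented API contained, one option (just a suggestion, not an agent feature) is to wrap the hand-off in a small helper, so a future change to execution_context only has to be fixed in one place:

import functools

from elasticapm.traces import execution_context


def with_current_transaction(func):
    # Must be called on the request thread, where get_transaction()
    # still returns the active transaction
    transaction = execution_context.get_transaction()

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Runs in the worker thread: make the captured transaction current there
        execution_context.set_transaction(transaction)
        return func(*args, **kwargs)

    return wrapper

# Hypothetical usage inside the route above; the worker then no longer needs
# an explicit transaction argument:
#     executor.submit(with_current_transaction(load_url), url, 60)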

We are considering how we can better support threads out of the box, so if you have any input, feedback is very welcome!

Thanks for the thorough response, @beniwohli! Your solution worked and I was able to visualize the parallel requests in Kibana! I'm looking forward to thread support out of the box.

That's great to hear!
