Forcing specific transactions to be sampled

Hi,

I use Transaction Sampling configured to 0.1 as described here: Core configuration options | APM .NET Agent Reference [1.12] | Elastic

10% of transactions are recorded, which is fine. However, I would like to bypass the sampling configuration for some methods and always receive all details. In other words, during the transaction I would like to decide and override the setting so that a specific transaction is always sampled.

Is this achievable in the .NET APM agents (versions 1.11 and 1.15 for .NET Core and 1.7 for .NET Framework)?

Hi @kbalys,

there is no config option which would achieve what you described, but you can use the Filter API to build such functionality.

However, this will increase the overhead of the agent. Nevertheless, I'll explain how it works, since it may work for you.

Filters run before events are sent to the APM Server. You have access to each event that gets sent, and you can also decide that a specific event should not be sent to the APM Server - detailed docs about it are linked above.

So you can do something like this for transactions:

Agent.AddFilter((ITransaction transaction) =>
{
	if (transaction.Context.Request.Url.Full == urlToSkip) //check if you want to drop it - you can check any other property
		return null; //return `null` to drop
	else
		return transaction; //otherwise just return the transaction and it'll be sent
});

and the same for spans:

Agent.AddFilter((ISpan span) =>
{
	if (span.Name == spanNameToDrop) //you can check on any property
		return null;
	else
		return span;
});

So with this you can implement your own sampling logic and achieve what you described above.

Here is the downside: in order for a filter to be called for a specific event (e.g. a transaction or a span), that event needs to be captured first - so for a span within a non-sampled transaction you won't get a callback, because the span was never captured. So the first step here is to set the sample rate to the maximum (1.0 - meaning capture everything) and implement the filtering logic in the callbacks.

This means the agent will capture everything (the overhead of capturing) and also store all events (increased memory overhead). On the upside, filters run before serialization, so once you return null for a specific event there is no further cost for it: you save the serialization and network overhead.
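Putting this together for your use case, a single transaction filter can always keep a specific endpoint and probabilistically drop the rest. This is only a minimal sketch: it assumes TransactionSampleRate is set to 1.0, and urlAlwaysSampled and the 10% keep ratio are illustrative placeholders (Context.Request is only populated for HTTP transactions):

using System;
using Elastic.Apm;
using Elastic.Apm.Api;

var urlAlwaysSampled = "https://my-service/important-endpoint"; //placeholder - use your endpoint
var random = new Random();

Agent.AddFilter((ITransaction transaction) =>
{
	//always send transactions for the endpoint we care about
	if (transaction.Context?.Request?.Url?.Full == urlAlwaysSampled)
		return transaction;

	//keep roughly 10% of everything else; returning null drops the event
	return random.NextDouble() < 0.1 ? transaction : null;
});

Note that this drops the transaction entirely, and a similar Agent.AddFilter((ISpan span) => ...) filter would be needed if you also want to reduce span volume.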

Thanks @GregKalapos for the very detailed answer. I think I can achieve everything I need with the solution you proposed.

As a temporary solution I implemented a workaround: I call the service with a traceparent header that has the sampled flag set, so my endpoint in question is always sampled.

For example:
traceparent: 00-4e8734e583c6a94fa83a0279eae73755-7fcc41ff50f80545-01
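
For completeness, here is a rough sketch of how sending that header with HttpClient might look; the service URL is a placeholder. The last field of the traceparent value is the trace-flags byte, and 01 marks the trace as sampled, so the receiving agent honors the upstream sampling decision:

using System.Net.Http;

var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get, "https://my-service/endpoint"); //placeholder URL
//traceparent format: version - trace id - parent span id - trace flags (01 = sampled)
request.Headers.Add("traceparent", "00-4e8734e583c6a94fa83a0279eae73755-7fcc41ff50f80545-01");
var response = await client.SendAsync(request);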
