Transaction sample rate has no effect on disk size


Kibana version : 7.5.2

Elasticsearch version : 7.5.2

APM Server version : 7.6.0

APM Agent language and version : Java 1.12.0

Original install method (e.g. download page, yum, deb, from source, etc.) and version: kubernetes (elastic operator and elastic-apm helm chart)

We are outputting directly to Elasticsearch.

Is there anything special in your setup? We have a load balancer in front of all APM pods.

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

Two issues:

  1. We changed the transaction_sample_rate from 1 to 0.05. This has had no effect on disk usage, although I do see that some transactions have no samples in the APM UI. APM metrics are taking up too much disk space, around 2.5GB every minute, which is not scalable for us. Let us know if we are doing anything wrong with the transaction_sample_rate parameter.

  2. We are also seeing the HTTP body being captured, even though the default value clearly says it should be OFF.


[Screenshot: metrics for APM and its index usage]

Thank you.

Is this just a typo here, or is that what you set in your application config? Did you configure the agents through Kibana, or in your Kubernetes spec?

I ask because the correct name for this configuration is transaction_sample_rate ("sample", not "sampling").
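For reference, here is a hedged sketch of how the sample rate would be passed to the Java agent as a JVM system property (the jar name is a placeholder, not from the original post):

```shell
# Correct key is transaction_sample_rate ("sample", not "sampling").
# "my-app.jar" is hypothetical; substitute your application artifact.
java -javaagent:elastic-apm-agent.jar \
     -Delastic.apm.transaction_sample_rate=0.05 \
     -jar my-app.jar
```

A misspelled key is silently ignored by the agent, which would explain sampling still running at 100%.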

Regarding capture_body, there's a note about this in the docs:

If the HTTP request or the JMS message has a body and this setting is disabled, the body will be shown as [REDACTED].
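In other words, what you are seeing in the UI is a placeholder, not the captured body. A minimal sketch of the relevant setting in `elasticapm.properties` (assuming file-based agent configuration; the same key works as a `-Delastic.apm.*` system property):

```properties
# Default is "off": bodies are not captured and appear as [REDACTED] in the UI.
# Other accepted values include "errors", "transactions", and "all".
capture_body=off
```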


Thank you for your reply. Yes, that is a typo here; this is the exact parameter on the server: "-Delastic.apm.transaction_sample_rate=0.05".

Also, thanks for clarifying the HTTP body behavior.

Sorry for the trouble. I forgot to update the parameters on most of the servers, which were still sending all the samples. After rebuilding all the servers with the latest parameters, the issue was resolved.

