Kibana version: 7.2
Elasticsearch version: 7.2
APM Server version: 7.2
APM Agent language and version: N/A
Original install method (e.g. download page, yum, deb, from source, etc.) and version: Kubernetes - official docker images
Fresh install or upgraded from other version? Fresh
Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
I've tried to enable self-instrumentation of APM Server in the config, due to a bottleneck I'm experiencing when ingesting at a high rate, but it doesn't seem to be taking effect, as per the logs:
2019-09-25T11:21:16.678Z INFO [beater] beater/beater.go:253 self instrumentation is disabled
Is there any additional configuration necessary?
I've tried setting things like this, but still with no success.
Hi, I can't see anything wrong with it. Can you share the whole config?
Otherwise, you could try executing apm-server like this:
apm-server -e -E apm-server.instrumentation.enabled=true
Can you give some details about the bottleneck you are experiencing?
If I manually specify the command as you suggested, it seems to work.
I'm not too sure about the bottleneck, hence the need for self-instrumentation. When we reach around 10,000 requests a minute to APM Server, we start seeing errors/timeouts from agents. Scaling out additional APM Server instances doesn't help, and the Elasticsearch cluster doesn't seem to be under much pressure. Just feeling around in the dark at the moment.
instrumentation should be nested inside apm-server in the config file, like this:
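Based on the working command-line override above (`apm-server.instrumentation.enabled=true`), the equivalent apm-server.yml nesting would look like this (a sketch, assuming no other instrumentation options are needed):

```yaml
# apm-server.yml
apm-server:
  # instrumentation must sit under the apm-server key,
  # not at the top level of the file
  instrumentation:
    enabled: true
```

A top-level `instrumentation.enabled: true` entry would be ignored, which matches the "self instrumentation is disabled" log line seen earlier.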
That did the trick ... thanks. I feel pretty stupid now.
This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.