Kibana version: 7.2
Elasticsearch version: 7.2
APM Server version: 7.2
APM Agent language and version: N/A
Original install method (e.g. download page, yum, deb, from source, etc.) and version: Kubernetes - official docker images
Fresh install or upgraded from other version? Fresh
Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
I've tried to enable the self instrumentation of APM Server, due to a bottleneck I'm experiencing when ingesting at a high rate.
I've set

```yaml
instrumentation:
  enabled: true
```

in the config, but it doesn't seem to be taking effect, per the logs:

```
2019-09-25T11:21:16.678Z INFO [beater] beater/beater.go:253 self instrumentation is disabled
```
Is there any additional configuration necessary?
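For reference, here is the snippet as I currently have it in apm-server.yml; the top-level placement is my assumption, and it may instead need to be nested under the `apm-server:` key (an untested guess):

```yaml
# What I have now (reported as "self instrumentation is disabled"):
instrumentation:
  enabled: true

# Untested guess: nest it under the apm-server key instead.
#apm-server:
#  instrumentation:
#    enabled: true
```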
I've tried setting things like:
If I manually specify the command as you asked, it seems to work.
I'm not sure about the cause of the bottleneck, hence the need for self instrumentation. When we reach around 10,000 requests a minute to APM Server, we start seeing errors/timeouts from agents. Scaling out additional APM Server instances doesn't help, and the Elasticsearch cluster doesn't seem to be under much pressure. I'm just feeling around in the dark at the moment.
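For scale, the reported load works out to roughly 167 requests per second across all instances; a quick back-of-envelope conversion:

```python
# Back-of-envelope: convert the reported ingest rate to per-second terms.
requests_per_minute = 10_000          # figure from the report above
requests_per_second = requests_per_minute / 60
print(f"{requests_per_second:.1f} req/s")  # → 166.7 req/s
```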