But I can't edit trace_methods with remote configuration in Kibana. Do you know why this is?
I've tried the elasticapm.properties configuration (placed in the same folder as the agent) as well as JVM arguments when starting the application.
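For reference, this is roughly what the two variants looked like; the package pattern below is just a placeholder, not our real one:

```properties
# elasticapm.properties, placed in the same directory as the agent jar
trace_methods=com.example.myapp.*
```

```
# equivalent JVM argument when starting the application
-Delastic.apm.trace_methods=com.example.myapp.*
```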
I also have a second question: is there any data available about what the overhead would be if my trace_methods config were [domain].[company].* with a trace_methods_duration_threshold of 100ms?
The documentation warns about high overhead: "The agent will create stack traces for spans which took longer than span_frames_min_duration. When tracing a large number of methods (for example by using wildcards), this may lead to high overhead. Consider increasing the threshold or disabling stack trace collection altogether." I was curious whether the overhead is still high if trace_methods_duration_threshold is set to a high value.
Unfortunately I can't use the async profiler since we use Windows network shares.
Does changing the value in elasticapm.properties work for you? Note that we reload this file every 30s.
is there any data available about what the overhead would be
It really depends on your application. If you accidentally instrument a method that usually takes a few nanoseconds and that's called millions of times per request, you'll get significant overhead. The trace_methods_duration_threshold option does not eliminate all of the overhead, as the agent still needs to record the span to see whether it's faster or slower than the threshold.
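If the package-level wildcard turns out to be too expensive, narrowing the pattern and raising the stack trace threshold usually helps. A rough sketch, using a made-up class pattern and example values rather than a recommendation for your specific app:

```properties
# Trace only the service layer instead of every method in the package
trace_methods=com.example.myapp.services.*Service#*
# Discard spans from traced methods that complete faster than this
trace_methods_duration_threshold=100ms
# Collect stack traces only for spans that took longer than this
span_frames_min_duration=100ms
```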
Unfortunately I can't use the async profiler since we use Windows network shares.
I'm responding in this topic to the bugfix for the Windows network share issue, as the previous topic is still locked for me. Unfortunately, the bugfix did not work for me; I still get:

    Caused by: java.io.IOException: Input/output error
        at java.io.FileOutputStream.close0(Native Method)
        at java.io.FileOutputStream.access$000(FileOutputStream.java:53)
        at java.io.FileOutputStream$1.close(FileOutputStream.java:356)
        at java.io.FileDescriptor.closeAll(FileDescriptor.java:212)
        at java.io.FileOutputStream.close(FileOutputStream.java:354)
I use Oracle JDK 8; maybe that is the reason (my error was also a bit different from the OpenJDK error described in the GitHub issue).