Java agent warns of improper deactivation of span

Kibana/Elasticsearch/APMServer: 6.6.1
APM Agent: Java@1.4.0
Install/setup: javaagent loaded via VMOptions:

-javaagent:/home/<>/elastic-apm-agent-1.4.0.jar
-Delastic.apm.service_name=front-end
-Delastic.apm.server_url=http://localhost:8200
-Delastic.apm.application_packages=br.com

compile 'co.elastic.apm:apm-agent-api:1.4.0'
Fresh or upgraded? Fresh
**Anything special?** Nope, all defaults.
[Bonus point] OS info
Linux 4.4.0-142-generic #168-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux

First possible issue

The applications run outside Docker, and I get these INFO logs whenever I run anything:

INFO ApmServerHealthChecker - Elastic APM server is available: {"ok":{"build_date":"...","build_sha":"...","version":"6.6.1"}}
INFO SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 11:pids:/user.slice/user-1000.slice
INFO SystemInfo - || '/proc/self/cgroup' line: 10:blkio:/user.slice
INFO SystemInfo - || '/proc/self/cgroup' line: 9:memory:/user.slice
INFO SystemInfo - || '/proc/self/cgroup' line: 8:cpuset:/
INFO SystemInfo - || '/proc/self/cgroup' line: 7:cpu,cpuacct:/user.slice
INFO SystemInfo - || '/proc/self/cgroup' line: 6:net_cls,net_prio:/
INFO SystemInfo - || '/proc/self/cgroup' line: 5:freezer:/
INFO SystemInfo - || '/proc/self/cgroup' line: 4:perf_event:/
INFO SystemInfo - || '/proc/self/cgroup' line: 3:devices:/user.slice
INFO SystemInfo - || '/proc/self/cgroup' line: 2:hugetlb:/
INFO SystemInfo - || '/proc/self/cgroup' line: 1:name=systemd:/user.slice/user-1000.slice/session-c2.scope
INFO StartupInfo - Starting Elastic APM 1.4.0 as front-end on Java 1.7.0_141 (Azul Systems, Inc.) Linux 4.4.0-142-generic

Scenario

  • two grails@2.2.4 apps (simple Java servlets for all practical purposes): front-end and jobs
  • they talk to each other through a custom REST client lib built on the Apache async HTTP client

Custom Apache HTTP Async

  • I had to implement custom span creation and trace propagation (async isn't supported yet):
final Span spanForHttpCall = ElasticApm.currentSpan().startSpan("ext", "", "")
spanForHttpCall.activate()
try {
	/*... builds an async request using Apache HTTP, and runs it on an executorService, effectively making it sync/blocking ...*/
	return result
} catch (final Throwable errorFromTask) {
	spanForHttpCall.captureException(errorFromTask)
	throw errorFromTask
} finally {
	spanForHttpCall.end()
}
  • inside the creation/setup of the request, I do the following:
final Span span = ElasticApm.currentSpan()
span.setName("method+path, basically")

/*...*/

final HeaderInjector headerInjector = { final String headerName, final String headerValue ->
	/* the agent calls this once per tracing header it needs to propagate to the downstream service */
	req.addHeader(headerName, headerValue)
} as HeaderInjector

span.injectTraceHeaders(headerInjector)

Custom transaction name

  • I also had to create a servlet filter so the transaction gets a proper name (a rough sketch is below)
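Something along these lines (a simplified sketch; the class name and the naming scheme are just illustrative, and it's registered like any other filter in web.xml):

import javax.servlet.Filter
import javax.servlet.FilterChain
import javax.servlet.FilterConfig
import javax.servlet.ServletRequest
import javax.servlet.ServletResponse
import javax.servlet.http.HttpServletRequest

import co.elastic.apm.api.ElasticApm

class ApmTransactionNamingFilter implements Filter {

	void init(final FilterConfig filterConfig) {}

	void doFilter(final ServletRequest request, final ServletResponse response, final FilterChain chain) {
		if (request instanceof HttpServletRequest) {
			final HttpServletRequest httpRequest = (HttpServletRequest) request
			/* rename the transaction the agent opened for this request,
			   e.g. "GET /tarefa/summary" instead of "SimpleGrailsController" */
			ElasticApm.currentTransaction().setName(httpRequest.method + ' ' + httpRequest.requestURI)
		}
		chain.doFilter(request, response)
	}

	void destroy() {}
}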

Current apparent situation

Everything seems to be working, as long as I don't check the logs:

  • apm-server/elastic/kibana are fine

This is an example of a "fine" transaction/trace (screenshot omitted):

Problem

The agent is spitting warnings very often (~0.5-1.0 warnings per request), telling me that:

WARN co.elastic.apm.agent.impl.ElasticApmTracer - Deactivating a span (...) which is not the currently active span (...). This can happen when not properly deactivating a previous span.

I set up a few print statements and got the following, which looks like a pattern:

/*This was a request #1 I did*/
[t8] BEFORE span creation+activation
	currentSpan=SpanImpl@3afd1618::<SpanImpl@4e4c4550 span='SimpleGrailsController' 00-906...-aac...-01>
[t8] AFTER  span creation+activation
	currentSpan=SpanImpl@4bff69db::<SpanImpl@35b23fda span='' 00-906...-3de...-01>
	spanForHttpCall=SpanImpl@497e1cc0::<SpanImpl@497e1cc0 span='' 00-906...-3de...-01>
[t8] BEFORE span.end()
	currentSpan=SpanImpl@14b1c74a::<SpanImpl@5c15b359 span='Tarefa[ GET :: tarefa/summary ]' 00-906...-3de...-01>
[t8] AFTER  span.end()
	currentSpan=SpanImpl@6f51fdcc::<SpanImpl@4220ad4e span='Tarefa[ GET :: tarefa/summary ]' 00-906...-3de...-01>

/*This was a request #2 I did, ~6 seconds after request #1*/
[t3] BEFORE span creation+activation
	currentSpan=SpanImpl@15fb8e5d::<SpanImpl@5f795213 span='SimpleGrailsController' 00-774...-e43...-01>
[t3] AFTER  span creation+activation
	currentSpan=SpanImpl@5af95eef::<SpanImpl@363f86f1 span='' 00-774...-f93...-01>
	spanForHttpCall=SpanImpl@4b4540a::<SpanImpl@4b4540a span='' 00-774...-f93...-01>
[t3] BEFORE span.end()
	currentSpan=SpanImpl@c117380::<SpanImpl@6ac09f29 span='Tarefa[ GET :: tarefa/summary ]' 00-774...-f93...-01>
[t3] AFTER  span.end()
	currentSpan=SpanImpl@5f2f1cab::<SpanImpl@4db5deb0 span='Tarefa[ GET :: tarefa/summary ]' 00-774...-f93...-01>

2019-03-14 19:18:03.151 [t3] WARN co.elastic.apm.agent.impl.ElasticApmTracer - Deactivating a span (00-0*32-aac...-00) which is not the currently active span ('' 00-0*32-e43...-00). This can happen when not properly deactivating a previous span.

Interesting detail and a question: why does ElasticApm.currentSpan() return a different instance on every invocation, even though it still refers to the same span?

Note that the warning mentions trying to deactivate/end span aac... (the parent span of the first request) while the active span was e43... (the parent span of the second request).
This looks like a delayed deactivation, one that even managed to cross threads.

The warning/issue only happens if I have the custom span for Apache HTTP async.
The warning/issue also happens if I create the custom span with ElasticApm.currentTransaction().startSpan instead of ElasticApm.currentSpan().startSpan.

The effect of this delayed deactivation (comparing the Kibana traces for request #1 and request #2; screenshots omitted) is that the first transaction doesn't know about its render span at all.
I could always disable instrumentation and create the transaction myself (a rough sketch below): I think I would be able to set up all the spans I want properly. However, I'd like to use as much automatic instrumentation as possible, and I'm trying to make the supported technologies and my frameworks coexist well.
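What I mean is something along these lines (just a rough sketch; the name/type strings are illustrative):

import co.elastic.apm.api.ElasticApm
import co.elastic.apm.api.Span
import co.elastic.apm.api.Transaction

final Transaction transaction = ElasticApm.startTransaction()
try {
	transaction.setName("Tarefa[ GET :: tarefa/summary ]")
	transaction.setType("request")

	final Span httpSpan = transaction.startSpan("ext", "", "")
	try {
		/*... the blocking HTTP call ...*/
	} catch (final Throwable error) {
		httpSpan.captureException(error)
		throw error
	} finally {
		httpSpan.end()
	}
} finally {
	transaction.end()
}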

I could provide more information about a scenario like that (like the raw docs), or more testing, if needed.
I have to say that I'm still amazed at how much quality and benefit I was able to achieve with such little effort. You guys are all amazing, and I'm glad I'm having the chance to work with Elastic APM.

Also, I had to remove a lot of stuff from the original thread description, as I found that there's a limit of 7k chars for it (logs and code were simplified a lot, but they should still be readable). Let me know if anything seems out of place; I can provide the full version of each section.

I found this issue which seems to have some relation to the main issue here: https://github.com/elastic/apm-agent-java/issues/421
Am I right in thinking so?

Also, the race condition I mentioned/observed becomes very severe if I spam the front-end application with requests: spans/transactions from all over the place end up grouped under the same parent transaction/trace. Maybe all these scenarios are related.

News:

  • Using elastic.apm.disable_instrumentations=render does not fix the issue; it just shifts the delayed deactivation to another faulty span.
  • I can't mimic all the behaviour/instrumentation I want by hand, because there's currently no way to attach request/response info to the transaction via the API. Nevertheless, I was indeed able to recreate (without that information) all the spans/transactions I wanted manually with the API, without triggering the delayed-deactivation issue.

Good(?) news:

  • Using elastic.apm.disable_instrumentations=concurrent,executor partially fixes the issue.

Spans are not deactivated with delay anymore and the warning is gone, but I still have some _render_ spans not being registered.
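For completeness, I'm passing it the same way as the other agent options, as a JVM system property (I assume the other configuration mechanisms would work just as well):

-Delastic.apm.disable_instrumentations=concurrent,executor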

Maybe those two are different issues?

  • some render spans are not being captured properly
  • concurrent,executor instrumentations are causing span references to be left around and manipulated on other threads/transactions

Something else I noticed is that every time the issue (delayed deactivation) happened, the first part of the internal ID (which I can see when I .dump() the span/transaction object in Groovy):

co.elastic.apm.api.SpanImpl@39eec638 span='SimpleGrailsController' 00-6e5df9c349ec25a7cb1f13e3f538def9-ca62deb08bf83a8a-01

which in this example is 6e5df9c349ec25a7cb1f13e3f538def9 (and corresponds to the trace.id on the raw doc in Elasticsearch), is all zeroes, like in:

co.elastic.apm.api.SpanImpl@433b89cc span='' 00-00000000000000000000000000000000-f57c7c7a7f92ad90-00

and the span name is always an empty string too.

I can reproduce this specific pattern when I press F5 on the front-end service twice (without waiting for the first request to finish). Whenever I do this, a new issue comes into play: pretty much all future transactions with the same name (or some other similar characteristic) get stuck on this ghost transaction and don't appear in Elasticsearch/Kibana at all (based on the logs they do seem to reach the apm-server, but that could be just metrics).

/*around here I did a quick double-F5, and the second one got into ghost-mode*/
/*note that the transaction.id leaks from one to the other*/
[2019-03-15T21:29:59.856-0300] currentSpan=co.elastic.apm.api.SpanImpl@6d2c961d::<co.elastic.apm.api.SpanImpl@321e736f span='SimpleGrailsController' 00-8b67581e24992dc27b0e0471a5e5c5ca-2db20472856b83d8-01>
[2019-03-15T21:30:00.399-0300] currentSpan=co.elastic.apm.api.SpanImpl@2947804c::<co.elastic.apm.api.SpanImpl@76e0ee75 span='' 00-00000000000000000000000000000000-2db20472856b83d8-00>

/*each of these log lines corresponds to one F5; 12 out of 15 are ghosts with no trace.id, and there are a lot of duplicate/leaked transaction.ids*/
[2019-03-15T21:30:04.004-0300] currentSpan=co.elastic.apm.api.SpanImpl@3de838ea::<co.elastic.apm.api.SpanImpl@3462bb14 span='SimpleGrailsController' 00-d13d0064f47715d97ab4ce737d71253e-d9c2c8baa610f97a-01>
[2019-03-15T21:30:09.480-0300] currentSpan=co.elastic.apm.api.SpanImpl@40555a5e::<co.elastic.apm.api.SpanImpl@493cf282 span='' 00-00000000000000000000000000000000-d9c2c8baa610f97a-00>
[2019-03-15T21:30:13.715-0300] currentSpan=co.elastic.apm.api.SpanImpl@61417412::<co.elastic.apm.api.SpanImpl@5216f548 span='' 00-00000000000000000000000000000000-d9c2c8baa610f97a-00>
[2019-03-15T21:30:16.085-0300] currentSpan=co.elastic.apm.api.SpanImpl@4a706651::<co.elastic.apm.api.SpanImpl@440f546f span='' 00-00000000000000000000000000000000-d9c2c8baa610f97a-00>
[2019-03-15T21:30:18.613-0300] currentSpan=co.elastic.apm.api.SpanImpl@2674300::<co.elastic.apm.api.SpanImpl@1f7425f2 span='' 00-00000000000000000000000000000000-d9c2c8baa610f97a-00>
[2019-03-15T21:30:20.714-0300] currentSpan=co.elastic.apm.api.SpanImpl@3e84a30c::<co.elastic.apm.api.SpanImpl@52b84049 span='' 00-00000000000000000000000000000000-d9c2c8baa610f97a-00>
[2019-03-15T21:30:22.301-0300] currentSpan=co.elastic.apm.api.SpanImpl@3fc080ca::<co.elastic.apm.api.SpanImpl@58e46a7 span='' 00-00000000000000000000000000000000-d9c2c8baa610f97a-00>
[2019-03-15T21:30:22.756-0300] currentSpan=co.elastic.apm.api.SpanImpl@36bb420b::<co.elastic.apm.api.SpanImpl@14e0dcba span='' 00-00000000000000000000000000000000-d9c2c8baa610f97a-00>
[2019-03-15T21:30:26.279-0300] currentSpan=co.elastic.apm.api.SpanImpl@60f0f959::<co.elastic.apm.api.SpanImpl@2caa4e27 span='' 00-00000000000000000000000000000000-d9c2c8baa610f97a-00>
[2019-03-15T21:30:26.438-0300] currentSpan=co.elastic.apm.api.SpanImpl@5a777ed5::<co.elastic.apm.api.SpanImpl@4f14e445 span='SimpleGrailsController' 00-a1da5e823e3be5b560fbe29d1e5c1a10-e2eccdd0f465fea5-01>
[2019-03-15T21:30:26.641-0300] currentSpan=co.elastic.apm.api.SpanImpl@234c8568::<co.elastic.apm.api.SpanImpl@30e9475e span='' 00-00000000000000000000000000000000-e2eccdd0f465fea5-00>
[2019-03-15T21:30:27.017-0300] currentSpan=co.elastic.apm.api.SpanImpl@126c400a::<co.elastic.apm.api.SpanImpl@760ff824 span='' 00-00000000000000000000000000000000-e2eccdd0f465fea5-00>
[2019-03-15T21:30:27.108-0300] currentSpan=co.elastic.apm.api.SpanImpl@1761cba2::<co.elastic.apm.api.SpanImpl@74b11af8 span='SimpleGrailsController' 00-21f701730133903e69d13316d8aac1e0-9af48d079a2699e1-01>
[2019-03-15T21:32:15.840-0300] currentSpan=co.elastic.apm.api.SpanImpl@28d01e71::<co.elastic.apm.api.SpanImpl@5dcea18c span='' 00-00000000000000000000000000000000-9af48d079a2699e1-00>
[2019-03-15T21:34:24.377-0300] currentSpan=co.elastic.apm.api.SpanImpl@79dffda9::<co.elastic.apm.api.SpanImpl@4573710 span='' 00-00000000000000000000000000000000-9af48d079a2699e1-00>

I have reproduced this behavior (ghost transactions stuck "forever") regardless of my custom Apache HTTP async, custom span creation or render instrumentation. Leaving only public-api and servlet-api (for the simplest shell of a transaction and custom spans) still gave me a messed up scenario from time to time.
I tried setting up log_level=TRACE and was able to see that a correct transaction has logs all over specifying when it starts/ends, but a ghost one doesn't (it only has span starts/ends).

Please let me know if there's anything else I can provide to help understand (and hopefully fix) this.

Hi and thanks a lot for the great feedback and the detailed description :slight_smile:!

I would say that many of the symptoms you describe seem to be related to incorrect management of the span/transaction lifecycle. For example, getting an empty span name and an all-zeros trace ID may be due to using an already-recycled span/transaction.
Indeed, your code seems to activate the span on the thread without ever deactivating it. You either need to keep the Scope instance returned by activate() and manually close() it, or use it within a try-with-resources statement, as explained in the relevant documentation.
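Roughly like this, applied to your snippet (just a sketch; since this is Groovy, where try-with-resources may not be available, the Scope is closed in a finally block):

import co.elastic.apm.api.ElasticApm
import co.elastic.apm.api.Scope
import co.elastic.apm.api.Span

final Span spanForHttpCall = ElasticApm.currentSpan().startSpan("ext", "", "")
final Scope scope = spanForHttpCall.activate()	// keep the Scope returned by activate()
try {
	/*... builds an async request using Apache HTTP, and runs it on an executorService ...*/
	return result
} catch (final Throwable errorFromTask) {
	spanForHttpCall.captureException(errorFromTask)
	throw errorFromTask
} finally {
	scope.close()	// deactivate on the same thread that activated the span
	spanForHttpCall.end()
}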

I would also advise that you carefully choose where you end the async Apache client spans: make sure each one is ended only once and by the correct thread.
Note that we already have an issue for properly tracing the async HTTP client, and our intention is to get to it pretty soon, so you may want to wait for it.

I hope this is helpful,
Eyal.

I might have created some confusion by touching on so many scenarios along the way in this thread, so let me try to clear some of it up.

As I tried to demonstrate in my last comment, the scenario where repeated requests make further transactions get stuck in a no-trace, no-lifecycle loop happens even with zero custom code for span creation. I'm able to reproduce it on a URI that doesn't even touch the inner Apache async HTTP client.

As for your suspicion that I may be mishandling transactions/spans in my code, I'm already dealing with that in a try/catch/finally (as per my original description). And it's not like spanForHttpCall.end() isn't getting executed: it certainly is, safely, for every request. I call .end() after I'm completely done with the inner workings of the Apache HTTP client.

Regarding the possibility that I'm closing the span on a different thread from the one that started it, I don't think that's even possible: the code between those two points is 100% blocking and synchronous. There is code that manipulates the currentSpan inside an async block, but it only sets the name and the header injectors, which shouldn't justify this behavior.

The full version of my original post has a few more details that show the threads are consistent:

/*This was a request #1 I did*/
[2019-03-14T19:17:56.318-0300] [http-bio-8051-exec-8] BEFORE span creation+activation
	currentSpan=co.elastic.apm.api.SpanImpl@3afd1618::<co.elastic.apm.api.SpanImpl@4e4c4550 span='SimpleGrailsController' 00-906a999514a1c981c30fdb2240ecaae5-aac4b1da4db886ef-01>
[2019-03-14T19:17:56.330-0300] [http-bio-8051-exec-8] AFTER  span creation+activation
	currentSpan=co.elastic.apm.api.SpanImpl@4bff69db::<co.elastic.apm.api.SpanImpl@35b23fda span='' 00-906a999514a1c981c30fdb2240ecaae5-3dee797d2195d344-01>
	spanForHttpCall=co.elastic.apm.api.SpanImpl@497e1cc0::<co.elastic.apm.api.SpanImpl@497e1cc0 span='' 00-906a999514a1c981c30fdb2240ecaae5-3dee797d2195d344-01>
[2019-03-14T19:17:56.438-0300] [http-bio-8051-exec-8] BEFORE span.end()
	currentSpan=co.elastic.apm.api.SpanImpl@14b1c74a::<co.elastic.apm.api.SpanImpl@5c15b359 span='Tarefa[ GET :: tarefa/summary ]' 00-906a999514a1c981c30fdb2240ecaae5-3dee797d2195d344-01>
	spanForHttpCall=co.elastic.apm.api.SpanImpl@497e1cc0::<co.elastic.apm.api.SpanImpl@497e1cc0 span='Tarefa[ GET :: tarefa/summary ]' 00-906a999514a1c981c30fdb2240ecaae5-3dee797d2195d344-01>
[2019-03-14T19:17:56.440-0300] [http-bio-8051-exec-8] AFTER  span.end()
	currentSpan=co.elastic.apm.api.SpanImpl@6f51fdcc::<co.elastic.apm.api.SpanImpl@4220ad4e span='Tarefa[ GET :: tarefa/summary ]' 00-906a999514a1c981c30fdb2240ecaae5-3dee797d2195d344-01>
	spanForHttpCall=co.elastic.apm.api.SpanImpl@497e1cc0::<co.elastic.apm.api.SpanImpl@497e1cc0 span='Tarefa[ GET :: tarefa/summary ]' 00-906a999514a1c981c30fdb2240ecaae5-3dee797d2195d344-01>

/*This was a request #2 I did, ~6 seconds after request #1*/
[2019-03-14T19:18:02.903-0300] [http-bio-8051-exec-3] BEFORE span creation+activation
	currentSpan=co.elastic.apm.api.SpanImpl@15fb8e5d::<co.elastic.apm.api.SpanImpl@5f795213 span='SimpleGrailsController' 00-774f35bc2c28bc055d9e791b94e80ed4-e436649c37efc8d3-01>
[2019-03-14T19:18:02.903-0300] [http-bio-8051-exec-3] AFTER  span creation+activation
	currentSpan=co.elastic.apm.api.SpanImpl@5af95eef::<co.elastic.apm.api.SpanImpl@363f86f1 span='' 00-774f35bc2c28bc055d9e791b94e80ed4-f93b641708cc639f-01>
	spanForHttpCall=co.elastic.apm.api.SpanImpl@4b4540a::<co.elastic.apm.api.SpanImpl@4b4540a span='' 00-774f35bc2c28bc055d9e791b94e80ed4-f93b641708cc639f-01>
[2019-03-14T19:18:02.954-0300] [http-bio-8051-exec-3] BEFORE span.end()
	currentSpan=co.elastic.apm.api.SpanImpl@c117380::<co.elastic.apm.api.SpanImpl@6ac09f29 span='Tarefa[ GET :: tarefa/summary ]' 00-774f35bc2c28bc055d9e791b94e80ed4-f93b641708cc639f-01>
	spanForHttpCall=co.elastic.apm.api.SpanImpl@4b4540a::<co.elastic.apm.api.SpanImpl@4b4540a span='Tarefa[ GET :: tarefa/summary ]' 00-774f35bc2c28bc055d9e791b94e80ed4-f93b641708cc639f-01>
[2019-03-14T19:18:02.955-0300] [http-bio-8051-exec-3] AFTER  span.end()
	currentSpan=co.elastic.apm.api.SpanImpl@5f2f1cab::<co.elastic.apm.api.SpanImpl@4db5deb0 span='Tarefa[ GET :: tarefa/summary ]' 00-774f35bc2c28bc055d9e791b94e80ed4-f93b641708cc639f-01>
	spanForHttpCall=co.elastic.apm.api.SpanImpl@4b4540a::<co.elastic.apm.api.SpanImpl@4b4540a span='Tarefa[ GET :: tarefa/summary ]' 00-774f35bc2c28bc055d9e791b94e80ed4-f93b641708cc639f-01>

2019-03-14 19:18:03.151 [http-bio-8051-exec-3] WARN co.elastic.apm.agent.impl.ElasticApmTracer - Deactivating a span (00-00000000000000000000000000000000-aac4b1da4db886ef-00) which is not the currently active span ('' 00-00000000000000000000000000000000-e436649c37efc8d3-00). This can happen when not properly deactivating a previous span.

Note that those two requests happen on different Tomcat threads, and each of them completes its lifecycle properly, as far as I can tell.

Regarding native async HTTP client support, I had already seen the issue (I went through all the issues on the repo, btw, hehe) and I'm indeed looking forward to it. I don't know if I'll be able to wait for it, though; that greatly depends on when it becomes available. Do you have any news regarding that?

Thank you for taking the time to read all of this, @Eyal_Koren.

Great news:

Your suggestion to save the activation Scope and make sure it's closed fixed all the issues I've had so far. I somehow got the impression from the docs that span.end() was the only thing needed to complete the lifecycle, but I can see now that I was the one at fault for not reading them properly.

Thanks again for the time and suggestion, @Eyal_Koren.

The only thing left unanswered is whether the following logs are a sign of a problem or not (I get them every time I run a Java 7 process with the agent attached outside a Docker environment):

2019-03-14 19:07:40.393 [apm-server-healthcheck] INFO co.elastic.apm.agent.report.ApmServerHealthChecker - Elastic APM server is available: {"ok":{"build_date":"2019-02-13T16:05:12Z","build_sha":"a09dc505aa8ee1348712a4ece6c354083b408ef7","version":"6.6.1"}}
2019-03-14 19:07:40.401 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 11:pids:/user.slice/user-1000.slice
2019-03-14 19:07:40.401 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 10:blkio:/user.slice
2019-03-14 19:07:40.401 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 9:memory:/user.slice
2019-03-14 19:07:40.401 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 8:cpuset:/
2019-03-14 19:07:40.402 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 7:cpu,cpuacct:/user.slice
2019-03-14 19:07:40.402 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 6:net_cls,net_prio:/
2019-03-14 19:07:40.402 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 5:freezer:/
2019-03-14 19:07:40.402 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 4:perf_event:/
2019-03-14 19:07:40.402 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 3:devices:/user.slice
2019-03-14 19:07:40.402 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 2:hugetlb:/
2019-03-14 19:07:40.403 [main] INFO co.elastic.apm.agent.impl.payload.SystemInfo - Could not parse container ID from '/proc/self/cgroup' line: 1:name=systemd:/user.slice/user-1000.slice/session-c2.scope
2019-03-14 19:07:40.414 [main] INFO co.elastic.apm.agent.configuration.StartupInfo - Starting Elastic APM 1.4.0 as front-end on Java 1.7.0_141 (Azul Systems, Inc.) Linux 4.4.0-142-generic

This is unrelated to the main issues I brought up, so let me know if I should instead create a new thread here or a new issue on GitHub.


@fredericogalvao thanks for updating, I am glad it worked out :slight_smile:

Regarding these logs: they are related to Docker container ID parsing from the /proc/self/cgroup file lines, which is why you get them when running outside Docker. I should probably log them only at DEBUG level and not INFO.

As for the Apache async client: I can't really make guarantees, but it should be quite soon. You can sign up for release notifications on our GitHub repo.

Thanks and good luck with our APM solution,
Eyal.
