Describe the bug
OTEL Collector not able to connect to elastic-apm
Steps to reproduce
Please follow the instructions here; it is a simple app.
What did you expect to see?
I'm expecting the tracing data to be sent to both Zipkin and the Elastic APM server.
What did you see instead?
otel-collector | 2022-07-29T14:39:44.207Z error exporterhelper/queued_retry.go:149 Exporting failed. Try enabling retry_on_failure config option to retry on retryable errors {"kind": "exporter", "data_type": "traces", "name": "elastic", "error": "sending event request failed: Post \"http://elastic-apm:8200/intake/v2/events\": dial tcp 192.168.32.6:8200: connect: connection refused", "errorVerbose": "Post \"http://elastic-apm:8200/intake/v2/events\": dial tcp 192.168.32.6:8200: connect: connection refused\nsending event request failed\ngo.elastic.co/apm/transport.(*HTTPTransport).sendStreamRequest\n\tgo.elastic.co/apm@v1.15.0/transport/http.go:292\ngo.elastic.co/apm/transport.(*HTTPTransport).SendStream\n\tgo.elastic.co/apm@v1.15.0/transport/http.go:282\ngithub.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticexporter.(*elasticExporter).sendEvents\n\tgithub.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticexporter@v0.56.0/exporter.go:197\ngithub.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticexporter.(*elasticExporter).ExportResourceSpans\n\tgithub.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticexporter@v0.56.0/exporter.go:149\ngithub.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticexporter.newElasticTracesExporter.func1\n\tgithub.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticexporter@v0.56.0/exporter.go:53\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*tracesRequest).export\n\tgo.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/traces.go:70\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send\n\tgo.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/common.go:225\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send\n\tgo.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/queued_retry.go:147\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send\n\tgo.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/traces.go:134\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send\n\tgo.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/queued_retry.go:83\ngo.opentelemetry.io/collector/exporter/exporterhelper.NewTracesExporter.func2\n\tgo.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/traces.go:113\ngo.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces\n\tgo.opentelemetry.io/collector@v0.56.0/consumer/traces.go:36\ngo.opentelemetry.io/collector/service/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces\n\tgo.opentelemetry.io/collector@v0.56.0/service/internal/fanoutconsumer/traces.go:75\ngo.opentelemetry.io/collector/processor/batchprocessor.(*batchTraces).export\n\tgo.opentelemetry.io/collector@v0.56.0/processor/batchprocessor/batch_processor.go:262\ngo.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).sendItems\n\tgo.opentelemetry.io/collector@v0.56.0/processor/batchprocessor/batch_processor.go:176\ngo.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).startProcessingCycle\n\tgo.opentelemetry.io/collector@v0.56.0/processor/batchprocessor/batch_processor.go:143\nruntime.goexit\n\truntime/asm_amd64.s:1571"}
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/queued_retry.go:149
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/traces.go:134
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/queued_retry.go:83
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper.NewTracesExporter.func2
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/traces.go:113
otel-collector | go.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces
otel-collector | go.opentelemetry.io/collector@v0.56.0/consumer/traces.go:36
otel-collector | go.opentelemetry.io/collector/service/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces
otel-collector | go.opentelemetry.io/collector@v0.56.0/service/internal/fanoutconsumer/traces.go:75
otel-collector | go.opentelemetry.io/collector/processor/batchprocessor.(*batchTraces).export
otel-collector | go.opentelemetry.io/collector@v0.56.0/processor/batchprocessor/batch_processor.go:262
otel-collector | go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).sendItems
otel-collector | go.opentelemetry.io/collector@v0.56.0/processor/batchprocessor/batch_processor.go:176
otel-collector | go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).startProcessingCycle
otel-collector | go.opentelemetry.io/collector@v0.56.0/processor/batchprocessor/batch_processor.go:143
What version did you use?
otel/otel-collector-contrib:latest Docker image. Please check the docker-compose.yml file in the repo mentioned above.
Environment
Docker
Additional context
What I'm trying to do is set up a small Node.js application, collect the traces via the OTEL Collector, and make sure the OTEL Collector sends the data to both Zipkin and Elastic APM.
axw (Andrew Wilkins), August 1, 2022, 1:54am
Hi @sathishsoundharajan, welcome to the community!
By default, APM Server listens for requests only on localhost, which won't work as expected within a Docker container. The Docker image ships with an apm-server.yml configuration file which overrides this default so that it listens on all network interfaces: apm-server/apm-server.docker.yml at d3858a1143e1736d6dd2ff79f153a4686bca45dd · elastic/apm-server · GitHub
As you are replacing the configuration file, this override is being lost. What you should do is modify your apm-server.yml slightly to set apm-server.host: 0.0.0.0:8200.
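For example, something along these lines in apm-server.yml should do it (the output.elasticsearch host below is an assumption based on a typical docker-compose setup; keep whatever output settings your repo already has):

apm-server:
  host: "0.0.0.0:8200"            # listen on all interfaces so other containers can reach it

output.elasticsearch:
  hosts: ["elasticsearch:9200"]   # assumed service name; keep your existing output configuration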
I also noticed that you're using the "elastic" exporter from opentelemetry-collector-contrib. This exporter is deprecated, and I would strongly recommend using the OTLP exporter. APM Server natively supports OTLP/gRPC and OTLP/HTTP. For some example configuration, see OpenTelemetry integration | APM User Guide [8.3] | Elastic
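As a rough sketch (not copied from the docs above), the collector's built-in OTLP exporters could point at APM Server like this; the elastic-apm hostname and the plain-HTTP/insecure settings are assumptions based on your docker-compose setup:

exporters:
  otlp/elastic:                         # OTLP/gRPC
    endpoint: "elastic-apm:8200"        # host:port form for gRPC
    tls:
      insecure: true                    # assumes APM Server is running without TLS
  otlphttp/elastic:                     # OTLP/HTTP alternative
    endpoint: "http://elastic-apm:8200"

Either exporter would then be referenced in the traces pipeline's exporters list.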
@axw Thank you for the solution, it works. About the "elastic" exporter: we are running Elasticsearch 6.8.x in production, and Kibana at a similar version (I will add the exact version info soon). So can I run the latest APM Server 8.3 with Elasticsearch 6.x? Also, can I run APM Server 6.8 itself and use the otlp/elastic exporter?
@axw It seems that with the configuration below, data is getting dropped because of this error:
receivers:
  zipkin:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
    timeout: 10s

exporters:
  logging:
    logLevel: debug
  zipkin:
    endpoint: "http://zipkin:9411/api/v2/spans"
  otlp/elastic:
    endpoint: http://elastic-apm:8200
    tls:
      insecure: true

extensions:
  health_check:

service:
  extensions: [health_check]
  pipelines:
    # metrics:
    #   receivers: [ otlp ]
    traces:
      receivers: [ zipkin, otlp ]
      processors: [ batch ]
      exporters: [ zipkin, otlp/elastic ]
The APM Server version remains 6.8.11.
otel-collector | 2022-08-01T06:56:15.499Z error exporterhelper/queued_retry_inmemory.go:107 Exporting failed. No more retries left. Dropping data. {"kind": "exporter", "data_type": "traces", "name": "otlp/elastic", "error": "max elapsed time expired rpc error: code = Unavailable desc = connection closed before server preface received", "dropped_items": 13}
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper.onTemporaryFailure
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/queued_retry_inmemory.go:107
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/queued_retry.go:199
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/traces.go:134
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/queued_retry_inmemory.go:119
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper/internal.consumerFunc.consume
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/internal/bounded_memory_queue.go:82
otel-collector | go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func2
otel-collector | go.opentelemetry.io/collector@v0.56.0/exporter/exporterhelper/internal/bounded_memory_queue.go:69
@axw This is the error I'm getting when I start the containers with docker-compose.yml:
otel-collector | 2022-08-01T10:04:18.540Z info service/collector.go:215 Starting otelcol-contrib... {"Version": "0.56.0", "NumCPU": 4}
otel-collector | 2022-08-01T10:04:18.540Z info service/collector.go:128 Everything is ready. Begin running and processing data.
otel-collector | 2022-08-01T10:04:19.330Z warn zapgrpc/zapgrpc.go:191 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
otel-collector | "Addr": "elastic-apm:8200",
otel-collector | "ServerName": "elastic-apm:8200",
otel-collector | "Attributes": null,
otel-collector | "BalancerAttributes": null,
otel-collector | "Type": 0,
otel-collector | "Metadata": null
otel-collector | }. Err: connection error: desc = "transport: Error while dialing dial tcp: lookup elastic-apm on 127.0.0.11:53: no such host" {"grpc_log": true}
I tried with the latest APM Server too.
axw (Andrew Wilkins), August 2, 2022, 5:50am
I see. APM Server 8.x is not compatible with Elasticsearch/Kibana 6.x, and APM Server 6.8 does not support OTLP.
This is the error I'm getting when I start the containers with docker-compose.yml
"transport: Error while dialing dial tcp: lookup elastic-apm on 127.0.0.11:53: no such host" suggests something is wrong with your docker-compose.yml rather than with APM Server. (I had a quick look but didn't spot an obvious mistake in your repo.)
Thanks for the inputs, @axw. So my only choice is to run the OTEL Collector with APM Server 6.x, Elasticsearch 6.x, and Kibana 6.x. If the transport error is fixed, then I can try to use otlp/elastic as the exporter along with the above versions. For the transport error, I'm also trying to figure out what the issue could be; it seems to occur only for the otlp/elastic exporter.
@axw Apologies! I can't figure out why the transport error is getting triggered. Any pointers on that would be very helpful.
axw (Andrew Wilkins), August 3, 2022, 1:27am
@sathishsoundharajan just reiterating:
I see. APM Server 8.x is not compatible with Elasticsearch/Kibana 6.x, and APM Server 6.8 does not support OTLP.
Because APM Server 6.8 does not support OTLP, you cannot use otlp/elastic. You will need to use the deprecated elastic exporter until you can upgrade your stack. So if the transport issue only occurs with otlp/elastic, then I suppose switching back should fix the problem.
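If it helps, a minimal sketch of that exporter's configuration (assuming the contrib exporter's apm_server_url option and the same elastic-apm hostname as above; double-check the option name against the exporter's README for your collector version):

exporters:
  elastic:
    apm_server_url: "http://elastic-apm:8200"   # assumed hostname from your docker-compose setup

You would then reference elastic instead of otlp/elastic in the traces pipeline's exporters list.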
system (system), August 23, 2022, 9:27pm
This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.