APM environment variables misconfigured

Kibana version: 7.3.2

Elasticsearch version: 7.3.2

APM Server version: 7.3.2

APM Agent language and version:
go.elastic.co/apm v1.5.0
go.elastic.co/apm/module/apmecho v1.5.0
go.elastic.co/apm/module/apmsql v1.5.0

Browser version: N/A

Original install method (e.g. download page, yum, deb, from source, etc.) and version: Elastic Cloud

Fresh install or upgraded from other version? N/A

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

I have a Kubernetes Pod that is pushing to Elastic Cloud via the Go APM echo module. It has the following settings:

⇒ kc -n pg-10 exec -it api-v3-5c77c77db6-xh7sx sh
/app # env | grep ELASTIC
ELASTIC_APM_ENVIRONMENT=pg-10
ELASTIC_APM_SECRET_TOKEN=<redacted>
ELASTIC_APM_IGNORE_URLS=/metrics
ELASTIC_APM_SERVICE_NAME=api-v3
ELASTIC_APM_SERVER_URL=<redacted>

In Kibana I'm still seeing /metrics calls; see the image I've saved at https://i.imgur.com/8ijkjHQ.png

How do I go about excluding all "/metrics" and "/" routes? I'm also interested in excluding metric data completely; I tried ELASTIC_APM_METRICS_INTERVAL=0s, but the metrics index is still receiving data.

I've also tried all of these environment variables, but ENVIRONMENT and SERVICE_NAME seem to be the only ones that affect anything:

ELASTIC_APM_METRICS_INTERVAL=0s
ELASTIC_APM_DISABLE_METRICS=*
ELASTIC_APM_CAPTURE_HEADERS=false
ELASTIC_APM_BREAKDOWN_METRICS=false
ELASTIC_APM_IGNORE_URLS=/metrics, /
ELASTIC_APM_STACK_TRACE_LIMIT=0

Appreciate any insight into how to configure this correctly.

Hi @alexclifford, welcome to the forum!

This is an odd one. I've just confirmed that the environment variables are honoured in a simple Echo application:

package main

import (
        "github.com/labstack/echo"
        "go.elastic.co/apm/module/apmecho"
)

func main() {
        e := echo.New()
        // No options passed: the middleware uses the default tracer, which is
        // configured from the ELASTIC_APM_* environment variables.
        e.Use(apmecho.Middleware())
        e.GET("/", func(c echo.Context) error { return nil })
        e.GET("/metrics", func(c echo.Context) error { return nil })
        e.Logger.Fatal(e.Start(":8080"))
}

Are you passing any options into apmecho.Middleware()?
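If you are, that could explain part of it: options passed to the middleware replace the corresponding environment-driven defaults. As an illustration only (apmecho.WithRequestIgnorer is assumed here; the exact option names depend on the module version), a custom request ignorer means ELASTIC_APM_IGNORE_URLS is no longer consulted for that middleware:

package main

import (
        "net/http"

        "github.com/labstack/echo"
        "go.elastic.co/apm/module/apmecho"
)

func main() {
        e := echo.New()
        // Hypothetical: a custom request ignorer takes over completely, so the
        // env-derived ignore list (ELASTIC_APM_IGNORE_URLS) no longer applies
        // to this middleware.
        e.Use(apmecho.Middleware(
                apmecho.WithRequestIgnorer(func(req *http.Request) bool {
                        return req.URL.Path == "/healthz"
                }),
        ))
        e.GET("/", func(c echo.Context) error { return nil })
        e.Logger.Fatal(e.Start(":8080"))
}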

Setting ELASTIC_APM_METRICS_INTERVAL=0s correctly disables the metrics, and ELASTIC_APM_IGNORE_URLS=/metrics filters out transactions with a matching URL path.

Are you defining the environment variables through the pod spec? Can you confirm that the server process has picked up the env vars, e.g. by running:

cat /proc/`pidof "your program name"`/environ | tr '\0' '\n' | grep ELASTIC

Hi @axw thanks for testing this and your input, it has helped point me in the right direction.

I can see in the Kubernetes Pod that the running process does have all the correct environment variables available to it, at least at the time of viewing the proc environ file.
But something is definitely at play here, as it still refuses to honour them... some of the time! One namespace/environment running the process managed to pick them up and is working as expected. Identical namespaces, differing only in their context.service.environment value, are still in the same predicament. Restarting Pods doesn't seem to fix them.

I've been able to test locally with the same environment variables and there is never an issue, so it seems like a race condition on the Kubernetes Pods, where Echo is loading before the environment variables are available? Quite bizarre.
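If it does turn out to be an initialisation-ordering problem, one experiment I might try is constructing the tracer explicitly in main and handing it to the middleware, so its creation point is unambiguous. A rough sketch (assuming apmecho.WithTracer is the right option name for v1.5.0):

package main

import (
        "log"

        "github.com/labstack/echo"
        "go.elastic.co/apm"
        "go.elastic.co/apm/module/apmecho"
)

func main() {
        // Create the tracer here, rather than relying on the package-level
        // default tracer that is configured during package initialisation.
        // Empty name/version fall back to ELASTIC_APM_SERVICE_NAME and
        // ELASTIC_APM_SERVICE_VERSION.
        tracer, err := apm.NewTracer("", "")
        if err != nil {
                log.Fatal(err)
        }

        e := echo.New()
        e.Use(apmecho.Middleware(apmecho.WithTracer(tracer)))
        e.GET("/", func(c echo.Context) error { return nil })
        e.Logger.Fatal(e.Start(":8080"))
}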

I will keep debugging Kubernetes to see if I can find a solution, thanks again for your help and suggestions.

Bizarre indeed!

Are you passing the environment through the Kubernetes pod spec/container definition? Or are you doing something else, like loading the environment from a file? The ELASTIC_APM_* environment variables are evaluated at program initialisation time, so if they aren't set when the process is first created, that might be the issue.
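To help rule out ordering, you could also have the application itself log what it sees as early as possible; because this runs at startup on every restart, it makes the intermittent case easier to catch than exec'ing into the pod later. A minimal sketch using only the standard library (remember to redact ELASTIC_APM_SECRET_TOKEN before sharing any output):

package main

import (
        "log"
        "os"
        "strings"
)

// logAPMEnv prints every ELASTIC_APM_* variable visible to the process.
// The environment of a process is fixed when it is created, so whatever
// appears here is also what the agent's package initialisation saw.
func logAPMEnv() {
        for _, kv := range os.Environ() {
                if strings.HasPrefix(kv, "ELASTIC_APM_") {
                        log.Println(kv)
                }
        }
}

func main() {
        logAPMEnv()
        // ... set up Echo and the APM middleware as in the earlier example ...
}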

If you find out the cause, I'll be interested to hear back.

I'm unsure what the issue was. I removed all of our application namespaces in Kubernetes and re-created them pointing at a new Elastic Cloud deployment, and the issue was no longer there. I'm guessing it was something on our Kubernetes side.

Thanks for your help.

