APM Servers respond with 503 "request timed out"

Hi,

We see a lot of the following error logs in our APM servers:

{
	"log.level": "error",
	"@timestamp": "2023-05-05T17:02:56.871Z",
	"log.logger": "request",
	"log.origin": {
		"file.name": "middleware/log_middleware.go",
		"file.line": 58
	},
	"message": "request timed out",
	"service.name": "apm-server",
	"url.original": "/intake/v2/events",
	"http.request.method": "POST",
	"user_agent.original": "apm-agent-nodejs/3.44.1 (<agent name>)",
	"source.address": "127.0.0.6",
	"http.request.id": "1a40a058-531b-4e7b-b6c5-b36afaf289f5",
	"event.duration": 5000551444,
	"http.response.status_code": 503,
	"error.message": "request timed out",
	"ecs.version": "1.6.0"
}

We run multiple APM Server instances as Kubernetes pods to handle traces in our microservice environment, on APM Server version 8.5.3. What is very strange is that resource utilization in those pods is minimal: less than 0.1 cores and about 200MB of memory used, while the container has a full core and more than 750MB assigned. There is no evidence of any resource bottleneck, yet we still get these errors.

We do not see any problem in the monitoring tools either; here is a screenshot from Stack Monitoring:

On the agents, we see the following type of error message:

APM Server transport error: APM Server response timeout (5000ms)

Thanks,

Zareh

@zvazquez can you please share the configuration for the apm-server pods, and any APM agent configuration for the Node.js application?

By default, agents may keep an HTTP request open to the server for up to 10 seconds. If that hasn't been changed, and a 5 second timeout has been specified (which the error suggests), then that may explain why this is happening.
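
As an illustration (the option names below are the Node.js agent settings as I recall them, and the values are examples only, not your settings): the agent holds each intake request open for apiRequestTime and waits up to serverTimeout for the server's response, so serverTimeout should be comfortably larger than apiRequestTime.

ELASTIC_APM_API_REQUEST_TIME : 10s
ELASTIC_APM_SERVER_TIMEOUT : 30s

If serverTimeout is lowered to 5s while apiRequestTime stays at the 10 second default, the agent can give up on the request before the server has finished reading it, and both sides will then report a timeout.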

Hi @axw,

Here is the configuration for an APM Server pod:

spec:
  volumes:
    - name: workload-socket
      emptyDir: {}
    - name: workload-certs
      emptyDir: {}
    - name: istio-envoy
      emptyDir:
        medium: Memory
    - name: istio-data
      emptyDir: {}
    - name: istio-podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
          - path: annotations
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
        defaultMode: 420
    - name: istio-token
      projected:
        sources:
          - serviceAccountToken:
              audience: istio-ca
              expirationSeconds: 43200
              path: istio-token
        defaultMode: 420
    - name: istiod-ca-cert
      configMap:
        name: istio-ca-root-cert
        defaultMode: 420
    - name: apm-cert
      secret:
        secretName: apm-cert
        defaultMode: 420
    - name: apmserver-data
      emptyDir: {}
    - name: config
      secret:
        secretName: elk-apm-jaeger-apm-config
        defaultMode: 420
        optional: false
    - name: config-volume
      emptyDir: {}
    - name: elastic-internal-http-certificates
      secret:
        secretName: elk-apm-jaeger-apm-http-certs-internal
        defaultMode: 420
        optional: false
    - name: elastic-internal-secure-settings
      secret:
        secretName: elk-apm-jaeger-apm-secure-settings
        defaultMode: 420
        optional: false
    - name: es-cert
      secret:
        secretName: es-cert
        defaultMode: 420
  initContainers:
    - name: elastic-internal-init-keystore
      image: docker.elastic.co/apm/apm-server:8.5.3
      command:
        - /usr/bin/env
        - bash
        - '-c'
        - "#!/usr/bin/env bash\n\nset -eux\n\nkeystore_initialized_flag=/usr/share/apm-server/data/elastic-internal-init-keystore.ok\n\nif [[ -f \"${keystore_initialized_flag}\" ]]; then\n    echo \"Keystore already initialized.\"\n\texit 0\nfi\n\necho \"Initializing keystore.\"\n\n# create a keystore in the default data path\n/usr/share/apm-server/apm-server keystore create --force\n\n# add all existing secret entries into it\nfor filename in  /mnt/elastic-internal/secure-settings/*; do\n\t[[ -e \"$filename\" ]] || continue # glob does not match\n\tkey=$(basename \"$filename\")\n\techo \"Adding \"$key\" to the keystore.\"\n\t/usr/share/apm-server/apm-server keystore add \"$key\" --stdin < \"$filename\"\ndone\n\ntouch /usr/share/apm-server/data/elastic-internal-init-keystore.ok\necho \"Keystore initialization successful.\"\n"
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
      resources:
        limits:
          cpu: 100m
          memory: 128Mi
        requests:
          cpu: 100m
          memory: 128Mi
      volumeMounts:
        - name: apm-cert
          readOnly: true
          mountPath: /etc/apm-cert
        - name: apmserver-data
          mountPath: /usr/share/apm-server/data
        - name: config
          readOnly: true
          mountPath: /usr/share/apm-server/config/config-secret
        - name: config-volume
          mountPath: /usr/share/apm-server/config
        - name: elastic-internal-http-certificates
          readOnly: true
          mountPath: /mnt/elastic-internal/http-certs
        - name: elastic-internal-secure-settings
          readOnly: true
          mountPath: /mnt/elastic-internal/secure-settings
        - name: es-cert
          readOnly: true
          mountPath: /etc/es-cert
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        privileged: false
    - name: istio-init
      image: docker.io/istio/proxyv2:1.14.3
      args:
        - istio-iptables
        - '-p'
        - '15001'
        - '-z'
        - '15006'
        - '-u'
        - '1337'
        - '-m'
        - REDIRECT
        - '-i'
        - '*'
        - '-x'
        - ''
        - '-b'
        - '*'
        - '-d'
        - 15090,15021,15020
      resources:
        limits:
          cpu: '2'
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 128Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        capabilities:
          add:
            - NET_ADMIN
            - NET_RAW
          drop:
            - ALL
        privileged: false
        runAsUser: 0
        runAsGroup: 0
        runAsNonRoot: false
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: false
  containers:
    - name: apm-server
      image: docker.elastic.co/apm/apm-server:8.5.3
      command:
        - apm-server
        - run
        - '-e'
        - '-c'
        - config/config-secret/apm-server.yml
      ports:
        - name: https
          containerPort: 8200
          protocol: TCP
      envFrom:
        - secretRef:
            name: elk-secrets
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: SECRET_TOKEN
          valueFrom:
            secretKeyRef:
              name: elk-apm-jaeger-apm-token
              key: secret-token
      resources:
        limits:
          cpu: 1500m
          memory: 2G
        requests:
          cpu: 1200m
          memory: 750Mi
      volumeMounts:
        - name: apm-cert
          readOnly: true
          mountPath: /etc/apm-cert
        - name: apmserver-data
          mountPath: /usr/share/apm-server/data
        - name: config
          readOnly: true
          mountPath: /usr/share/apm-server/config/config-secret
        - name: config-volume
          mountPath: /usr/share/apm-server/config
        - name: elastic-internal-http-certificates
          readOnly: true
          mountPath: /mnt/elastic-internal/http-certs
        - name: es-cert
          readOnly: true
          mountPath: /etc/es-cert
      readinessProbe:
        httpGet:
          path: /app-health/apm-server/readyz
          port: 15020
          scheme: HTTP
        timeoutSeconds: 1
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
    - name: istio-proxy
      image: docker.io/istio/proxyv2:1.14.3
      args:
        - proxy
        - sidecar
        - '--domain'
        - $(POD_NAMESPACE).svc.cluster.local
        - '--proxyLogLevel=warning'
        - '--proxyComponentLogLevel=misc:error'
        - '--log_output_level=default:info'
        - '--concurrency'
        - '2'
      ports:
        - name: http-envoy-prom
          containerPort: 15090
          protocol: TCP
      env:
        - name: JWT_POLICY
          value: third-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: PROXY_CONFIG
          value: |
            {}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"name":"https","containerPort":8200,"protocol":"TCP"}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: apm-server
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: elk-apm-jaeger-apm-server
        - name: ISTIO_META_OWNER
          value: >-
            kubernetes://apis/apps/v1/namespaces/elastic-apm/deployments/elk-apm-jaeger-apm-server
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: TRUST_DOMAIN
          value: cluster.local
        - name: ISTIO_KUBE_APP_PROBERS
          value: >-
            {"/app-health/apm-server/readyz":{"httpGet":{"path":"/","port":8200,"scheme":"HTTPS"},"timeoutSeconds":1}}
      resources:
        limits:
          cpu: '2'
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 128Mi
      volumeMounts:
        - name: workload-socket
          mountPath: /var/run/secrets/workload-spiffe-uds
        - name: workload-certs
          mountPath: /var/run/secrets/workload-spiffe-credentials
        - name: istiod-ca-cert
          mountPath: /var/run/secrets/istio
        - name: istio-data
          mountPath: /var/lib/istio/data
        - name: istio-envoy
          mountPath: /etc/istio/proxy
        - name: istio-token
          mountPath: /var/run/secrets/tokens
        - name: istio-podinfo
          mountPath: /etc/istio/pod
      readinessProbe:
        httpGet:
          path: /healthz/ready
          port: 15021
          scheme: HTTP

Here is the configuration for the APM Server:

apm-server:
  auth:
    secret_token: ${apm.token}
  host: :8200
  instrumentation:
    enabled: true
    environment: pre-prod
    hosts:
    - https://
    profiling:
      cpu:
        enable: true
      heap:
        enable: true
    secret_token: 
  jaeger:
    grpc:
      enabled: true
      host: :14250
    http:
      enabled: true
      host: :14268
  kibana:
    enabled: true
    host: ${KIBANA_HOST}:${KIBANA_PORT}
    password: ${ELASTICSEARCH_PASSWORD}
    protocol: https
    ssl:
      enabled: true
      verification_mode: none
    username: ${ELASTICSEARCH_USERNAME}
  secret_token: ${apm.token}
  ssl:
    certificate: /etc/apm-cert/tls.crt
    enabled: true
    key: /etc/apm-cert/tls.key
    key_passphrase: ${apm.key_passphrase}
logging:
  level: error
  to_stderr: true
monitoring:
  cluster_uuid: 
  enabled: true
output:
  bulk_max_size: 5120
  elasticsearch:
    hosts: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
    password: ${ELASTICSEARCH_PASSWORD}
    protocol: https
    ssl:
      verification_mode: none
    username: ${ELASTICSEARCH_USERNAME}
    worker: 20
queue:
  mem:
    events: 20480

Here is the configuration for one of the Node.js pods:

spec:
  volumes:
    - name: workload-socket
      emptyDir: {}
    - name: workload-certs
      emptyDir: {}
    - name: istio-envoy
      emptyDir:
        medium: Memory
    - name: istio-data
      emptyDir: {}
    - name: istio-podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
          - path: annotations
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
        defaultMode: 420
    - name: istio-token
      projected:
        sources:
          - serviceAccountToken:
              audience: istio-ca
              expirationSeconds: 43200
              path: istio-token
        defaultMode: 420
    - name: istiod-ca-cert
      configMap:
        name: istio-ca-root-cert
        defaultMode: 420
    - name: kube-api-access-5s5hg
      projected:
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
        defaultMode: 420
  initContainers:
    - name: istio-init
      image: docker.io/istio/proxyv2:1.14.3
      args:
        - istio-iptables
        - '-p'
        - '15001'
        - '-z'
        - '15006'
        - '-u'
        - '1337'
        - '-m'
        - REDIRECT
        - '-i'
        - '*'
        - '-x'
        - ''
        - '-b'
        - '*'
        - '-d'
        - 15090,15021,15020
      resources:
        limits:
          cpu: '2'
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 128Mi
      volumeMounts:
        - name: kube-api-access-5s5hg
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        capabilities:
          add:
            - NET_ADMIN
            - NET_RAW
          drop:
            - ALL
        privileged: false
        runAsUser: 0
        runAsGroup: 0
        runAsNonRoot: false
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: false
  containers:
    - name: pod-name
      image: 
      ports:
        - containerPort: 50051
          protocol: TCP
      env:
        - name: K8S_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
      resources:
        limits:
          cpu: '1'
          memory: 4Gi
        requests:
          cpu: 100m
          memory: 384Mi
      volumeMounts:
        - name: kube-api-access-5s5hg
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      livenessProbe:
        httpGet:
          path: 
          port: 15020
          scheme: HTTP
        initialDelaySeconds: 15
        timeoutSeconds: 10
        periodSeconds: 5
        successThreshold: 1
        failureThreshold: 5
      readinessProbe:
        httpGet:
          path: 
          port: 15020
          scheme: HTTP
        initialDelaySeconds: 5
        timeoutSeconds: 1
        periodSeconds: 2
        successThreshold: 1
        failureThreshold: 20
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: Always
    - name: istio-proxy
      image: docker.io/istio/proxyv2:1.14.3
      args:
        - proxy
        - sidecar
        - '--domain'
        - $(POD_NAMESPACE).svc.cluster.local
        - '--proxyLogLevel=warning'
        - '--proxyComponentLogLevel=misc:error'
        - '--log_output_level=default:info'
        - '--concurrency'
        - '2'
      ports:
        - name: http-envoy-prom
          containerPort: 15090
          protocol: TCP
      env:
        - name: JWT_POLICY
          value: third-party-jwt
        - name: PILOT_CERT_PROVIDER
          value: istiod
        - name: CA_ADDR
          value: istiod.istio-system.svc:15012
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        - name: PROXY_CONFIG
          value: |
            {}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
                {"containerPort":50051,"protocol":"TCP"}
            ]
        - name: ISTIO_META_APP_CONTAINERS
          value: 
        - name: ISTIO_META_CLUSTER_ID
          value: Kubernetes
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_META_WORKLOAD_NAME
          value: 
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/apps/v1/namespaces/
        - name: ISTIO_META_MESH_ID
          value: cluster.local
        - name: TRUST_DOMAIN
          value: cluster.local
        - name: ISTIO_KUBE_APP_PROBERS
          value: >-
            {"/app-health//livez":{"httpGet":{"path":"/service/health","port":8080,"scheme":"HTTP"},"timeoutSeconds":10},"/app-health//readyz":{"httpGet":{"path":"/service/health","port":8080,"scheme":"HTTP"},"timeoutSeconds":1}}
      resources:
        limits:
          cpu: '2'
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 128Mi
      volumeMounts:
        - name: workload-socket
          mountPath: /var/run/secrets/workload-spiffe-uds
        - name: workload-certs
          mountPath: /var/run/secrets/workload-spiffe-credentials
        - name: istiod-ca-cert
          mountPath: /var/run/secrets/istio
        - name: istio-data
          mountPath: /var/lib/istio/data
        - name: istio-envoy
          mountPath: /etc/istio/proxy
        - name: istio-token
          mountPath: /var/run/secrets/tokens
        - name: istio-podinfo
          mountPath: /etc/istio/pod
        - name: kube-api-access-5s5hg
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readinessProbe:
        httpGet:
          path: /healthz/ready
          port: 15021
          scheme: HTTP
        initialDelaySeconds: 1
        timeoutSeconds: 3
        periodSeconds: 2
        successThreshold: 1
        failureThreshold: 30
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        capabilities:
          drop:
            - ALL
        privileged: false
        runAsUser: 1337
        runAsGroup: 1337
        runAsNonRoot: true
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
  

Here are the environment variables used in the pod to control the APM agent's behavior:

ELASTIC_APM_APPLICATION_PACKAGES : 
ELASTIC_APM_CAPTURE_BODY : off
ELASTIC_APM_CAPTURE_HEADERS : off
ELASTIC_APM_CENTRAL_CONFIG : false
ELASTIC_APM_CIRCUIT_BREAKER_ENABLED : true
ELASTIC_APM_DISABLE_METRICS : *.cpu.*,*.memory.*,system.*
ELASTIC_APM_DISABLE_SEND : false
ELASTIC_APM_ENABLED : true
ELASTIC_APM_ENVIRONMENT : 
ELASTIC_APM_IGNORE_MESSAGE_QUEUES : *
ELASTIC_APM_LOG_FORMAT_SOUT : JSON
ELASTIC_APM_LOG_LEVEL : ERROR
ELASTIC_APM_MAX_QUEUE_SIZE : 1024
ELASTIC_APM_METRICS_INTERVAL : 60s
ELASTIC_APM_SECRET_TOKEN : 
ELASTIC_APM_SERVER_TIMEOUT : 5s
ELASTIC_APM_SERVER_URL : https://
ELASTIC_APM_SPAN_MIN_DURATION : 10ms
ELASTIC_APM_SPAN_STACK_TRACE_MIN_DURATION : 50ms
ELASTIC_APM_STACK_TRACE_LIMIT : 10
ELASTIC_APM_TRANSACTION_MAX_SPANS : 100
ELASTIC_APM_TRANSACTION_SAMPLE_RATE : 1
ELASTIC_APM_VERIFY_SERVER_CERT : false
ELASTIC_APM_DISABLE_INSTRUMENTATIONS : pg
ELASTIC_APM_INSTRUMENT_INCOMING_HTTP_REQUESTS : false
ELASTIC_APM_SERVICE_NAME : 
ELASTIC_APM_SERVICE_NODE : 
ELASTIC_APM_TRANSACTION_SAMPLE_RATE : 0.3

Thanks,

Zareh

@axw
Is this what you meant?

Sorry, I was a bit unclear. What I meant was that if a 5 second request timeout had been specified on the APM Server side, that might explain why requests from the agent were timing out. That does not appear to be the case here.
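
For reference, the request timeouts on the server side are set under apm-server in apm-server.yml; the values below are purely illustrative, not defaults or recommendations:

apm-server:
  read_timeout: 30s     # maximum duration for reading an entire request, including the body
  write_timeout: 30s    # maximum duration before timing out a response write
  idle_timeout: 45s     # how long to keep an idle keep-alive connection open

None of these are set in the configuration you shared.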

Can you share the apm-server logs? Are there any errors? It may be that the queue is filling up because Elasticsearch is not able to ingest quickly enough.
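
If that turns out to be the case, the knobs involved are the memory queue and the Elasticsearch output settings (worker, bulk_max_size). As a sketch, using the values from the config you shared (not recommendations):

queue:
  mem:
    events: 20480          # events buffered in memory before intake starts backing up
output:
  elasticsearch:
    worker: 20             # concurrent bulk indexing workers
    bulk_max_size: 5120    # maximum events per bulk request

If Elasticsearch cannot drain events as fast as the agents send them, the queue fills up and requests to /intake/v2/events can start backing up until agents hit their own response timeout.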

Hi @axw,

I looked at the last 24 hours across 6 different Kubernetes clusters, and the only other error message I found was something like this:

{"log.level":"error","@timestamp":"2023-05-17T17:02:42.156Z","log.logger":"beater.http","log.origin":{"file.name":"http/server.go","file.line":3195},"message":"http: TLS handshake error from 127.0.0.6:40685: remote error: tls: bad certificate","service.name":"apm-server","ecs.version":"1.6.0"}

What is clear is that we have a lot of these 503s, and they are only coming from Node.js services; none of the Java or Python services show these errors:

{"log.level":"error","@timestamp":"2023-05-17T16:46:04.312Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":58},"message":"request timed out","service.name":"apm-server","url.original":"/intake/v2/events","http.request.method":"POST","user_agent.original":"apm-agent-nodejs/3.45.0 *-svc 0.35.0)","source.address":"127.0.0.6","http.request.id":"1c8d50fa-fd60-4541-a9c8-4884a357831b","event.duration":5000248909,"http.response.status_code":503,"error.message":"request timed out","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-05-17T16:46:08.302Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":58},"message":"request timed out","service.name":"apm-server","url.original":"/intake/v2/events","http.request.method":"POST","user_agent.original":"apm-agent-nodejs/3.45.0 *-web-svc 0.35.0)","source.address":"127.0.0.6","http.request.id":"29114f53-0eac-456d-8b56-699be54a8554","event.duration":5001018346,"http.response.status_code":503,"error.message":"request timed out","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-05-17T16:46:10.198Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":58},"message":"request timed out","service.name":"apm-server","url.original":"/intake/v2/events","http.request.method":"POST","user_agent.original":"apm-agent-nodejs/3.45.0 (*-service-svc 0.0.1)","source.address":"127.0.0.6","http.request.id":"fe03b624-6a68-4cb5-8605-f0ebd3dcda4b","event.duration":5000814492,"http.response.status_code":503,"error.message":"request timed out","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-05-17T16:46:10.796Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":58},"message":"request timed out","service.name":"apm-server","url.original":"/intake/v2/events","http.request.method":"POST","user_agent.original":"apm-agent-nodejs/3.33.0 (*-service 1.0.0)","source.address":"127.0.0.6","http.request.id":"b170c23f-5bcf-4393-a814-5a885682dc87","event.duration":5000939136,"http.response.status_code":503,"error.message":"request timed out","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-05-17T16:46:12.098Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":58},"message":"request timed out","service.name":"apm-server","url.original":"/intake/v2/events","http.request.method":"POST","user_agent.original":"apm-agent-nodejs/3.45.0 (*-service-svc 0.0.1)","source.address":"127.0.0.6","http.request.id":"1205624f-953f-4b31-ad73-6a9835631dd4","event.duration":5000905528,"http.response.status_code":503,"error.message":"request timed out","ecs.version":"1.6.0"}

So it is definitely only happening with the Node.js agents.

Thanks for your help,

Zareh