APM - traces are not showing for transactions in the Kibana APM tab, but the data is visible in the Discover tab

Hi All,

I am trying to monitor my internal applications using Elastic APM, and I have set up APM Server using the Helm chart (https://artifacthub.io/packages/helm/elastic/apm-server#install-released-version-using-helm-repository).

Below is the helm install command I used to bring up apm-server:

helm install apm-server elastic/apm-server -f ./values.yaml -n intelligeni-core

Kibana version: 7.2.0

Elasticsearch version: 7.2.0

APM Server version: 7.2.0

APM Agent language and version:
rum.js - 5.x
node.js - 3.x

Browser version: Google Chrome Version 86.0.4240.198 (Official Build) (64-bit)

Original install method (e.g. download page, yum, deb, from source, etc.) and version: Helm chart: https://artifacthub.io/packages/helm/elastic/apm-server#install-released-version-using-helm-repository

Fresh install or upgraded from other version? Fresh install (newly set up apm-server)

Is there anything special in your setup?
Yes, we have enabled RUM and related settings in apm-server.yml. Below is the values.yaml showing the configuration I have defined:

---
# Allows you to add config files
apmConfig:
  apm-server.yml: |
    apm-server:
      host: "0.0.0.0:8200"
      max_event_size: 307200
    apm-server.rum.enabled: true
    apm-server.rum.event_rate.limit: 300
    apm-server.rum.event_rate.lru_size: 1000
    apm-server.rum.allow_origins: ['*']
    #apm-server.rum.allow_headers: ["header1", "header2"]
    apm-server.rum.library_pattern: "node_modules|bower_components|~"
    apm-server.rum.exclude_from_grouping: "^/webpack"
    apm-server.rum.source_mapping.enabled: true
    apm-server.rum.source_mapping.cache.expiration: 5m
    apm-server.rum.source_mapping.index_pattern: "apm-*-sourcemap*"  
    apm-server.kibana.enabled: true
    apm-server.kibana.host: "http://kibana-kibana:5601"
    #apm-server.dashboards.enabled: true

    queue: {}

    output.elasticsearch:
      hosts: ["http://elasticsearch-master:9200"]
      ## If you have security enabled- you'll need to add the credentials
      ## as environment variables
      # username: "${ELASTICSEARCH_USERNAME}"
      # password: "${ELASTICSEARCH_PASSWORD}"
      ## If SSL is enabled
      # protocol: https
      # ssl.certificate_authorities:
      #  - /usr/share/apm-server/config/certs/elastic-ca.pem

replicas: 1

extraContainers: ""
# - name: dummy-init
#   image: busybox
#   command: ['echo', 'hey']

extraInitContainers: ""
# - name: dummy-init
#   image: busybox
#   command: ['echo', 'hey']

# Extra environment variables to append to the DaemonSet pod spec.
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
  #  - name: 'ELASTICSEARCH_USERNAME'
  #    valueFrom:
  #      secretKeyRef:
  #        name: elastic-credentials
  #        key: username
  #  - name: 'ELASTICSEARCH_PASSWORD'
  #    valueFrom:
  #      secretKeyRef:
  #        name: elastic-credentials
  #        key: password

# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

image: "docker.elastic.co/apm/apm-server"
imageTag: "7.2.0"
imagePullPolicy: "IfNotPresent"
imagePullSecrets: []

# Whether this chart should self-manage its service account, role, and associated role binding.
managedServiceAccount: true

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# additionals labels
labels: {}

podSecurityContext:
  runAsUser: 0
  privileged: false

livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 30
  failureThreshold: 3
  periodSeconds: 10
  timeoutSeconds: 5

readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 30
  failureThreshold: 3
  periodSeconds: 10
  timeoutSeconds: 5

resources:
    requests:
      cpu: "100m"
      memory: "100Mi"
    limits:
      cpu: "1000m"
      memory: "512Mi"

# Custom service account override that the pod will use
serviceAccount: ""

# Annotations to add to the ServiceAccount that is created if the serviceAccount value isn't set.
serviceAccountAnnotations: {}
  # eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/k8s.clustername.namespace.serviceaccount

# A list of secrets and their paths to mount inside the pod
secretMounts: []
#  - name: elastic-certificate-pem
#    secretName: elastic-certificates
#    path: /usr/share/apm-server/config/certs

terminationGracePeriod: 30

tolerations: []

nodeSelector: {}

affinity: {}

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

updateStrategy:
  type: "RollingUpdate"

# Override various naming aspects of this chart
# Only edit these if you know what you're doing
nameOverride: ""
fullnameOverride: ""

autoscaling:
  enabled: false

ingress:
  enabled: true
  annotations:
     kubernetes.io/ingress.class: nginx
     nginx.ingress.kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - dev-apm.com
  tls:
    - secretName: dev-ca-cert
      hosts:
        - dev-apm.com

service:
  type: ClusterIP
  port: 8200
  nodePort: ""
  annotations: {}
    # cloud.google.com/load-balancer-type: "Internal"
    # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
    # service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

As you can see in the values.yaml, apm-server is running behind an nginx ingress, and the endpoint I specified is https://dev-apm.com. We have instrumented our code with the respective Node.js and RUM agents, and in Kibana we can see the services in the APM tab.

Now the problem is:
data is being received by apm-server and indexed into Elasticsearch, but when I try to view the traces of any transaction in Kibana, I get a 404 error in the browser console as well as in the Kibana logs, and I can't understand why the traces are not visible in Kibana.

Important note:
I tried the same setup on a Linux VM in my GCP account, running the same versions of all applications, and there I was able to visualize each trace for the transactions in Kibana. The same setup does not work in my customer's environment.

Please share your thoughts and correct me if I am doing anything wrong.

Errors in browser console (if relevant):

https://dev.apm.com/kibana/api/apm/services/ig-core-new/transaction_groups/request/post%20%2Fapi%2Fauth%2Flogin/charts?start=2020-11-18t10%3A28%3A38.944z&end=2020-11-19t10%3a26%3a43.897z&uifilterses=%255b%255d 404
GET https://dev.apm.com/kibana/api/apm/services/ig-core-new/transaction_groups/request/post%20%2Fapi%2Fauth%2Flogin/distribution?start=2020-11-18t10%3A28%3A38.944z&end=2020-11-19t10%3A26%3A43.897z&uifilterses=%5b%5d 404

Errors in the Kibana logs:

{"type":"response","@timestamp":"2020-11-19T05:47:27Z","tags":[],"pid":1,"method":"get","statusCode":404,"req":{"url":"/api/apm/services/ig-core-1/transaction_groups/request/GET%20/api/scenario/metadata/compute/distribution?start=2020-11-18T05%3A47%3A18.029Z&end=2020-11-19T05%3A47%3A18.030Z&transactionId=9003701201dbada7&traceId=2a9111885e6b7e63c3acc39768b1f443&uiFiltersES=%255B%255D","method":"get","headers":{"host":"dev.apm.com","x-request-id":"6cfd444dbca624bf144b4ed2dcbbd233","x-real-ip":"10.160.15.225","x-forwarded-for":"10.160.15.225","x-forwarded-proto":"https","x-forwarded-host":"dev.apm.com","x-forwarded-port":"443","x-scheme":"https","x-original-forwarded-for":"202.164.132.191:9478","x-original-url":"/kibana/api/apm/services/ig-core-1/transaction_groups/request/GET%20%2Fapi%2Fscenario%2Fmetadata%2Fcompute/distribution?start=2020-11-18T05%3A47%3A18.029Z&end=2020-11-19T05%3A47%3A18.030Z&transactionId=9003701201dbada7&traceId=2a9111885e6b7e63c3acc39768b1f443&uiFiltersES=%255B%255D","x-appgw-trace-id":"4ebb8a9b7c687740b1437525e03d5348","x-original-host":"dev.apm.com","kbn-version":"7.2.0","user-agent":"Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36","content-type":"application/json","accept":"*/*","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://dev.apm.com/kibana/app/apm","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9"},"remoteAddress":"10.24.20.9","userAgent":"10.24.20.9","referer":"https://dev.apm.com/kibana/app/apm"},"res":{"statusCode":404,"responseTime":29,"contentLength":9},"message":"GET /api/apm/services/ig-core-1/transaction_groups/request/GET%20/api/scenario/metadata/compute/distribution?start=2020-11-18T05%3A47%3A18.029Z&end=2020-11-19T05%3A47%3A18.030Z&transactionId=9003701201dbada7&traceId=2a9111885e6b7e63c3acc39768b1f443&uiFiltersES=%255B%255D 404 29ms - 9.0B"}
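One detail I noticed when comparing these logs: in the Kibana log entry above, req.url shows the slashes inside the transaction name already decoded (GET%20/api/scenario/...), while x-original-url still has them encoded (GET%20%2Fapi%2Fscenario%2F...). If it helps, here is a minimal sketch of the percent-encoding the APM UI relies on for these URLs (the transaction names below are just the ones from my error messages, used for illustration):

```javascript
// The APM UI puts the whole transaction name (method + path) into a
// single URL path segment, so the inner slashes must stay encoded.
const transactionName = 'POST /api/auth/login';
const segment = encodeURIComponent(transactionName);
console.log(segment); // POST%20%2Fapi%2Fauth%2Flogin

// If a proxy decodes %2F back to "/" before the request reaches Kibana,
// the path gains extra segments and the route lookup can return 404.
console.log(decodeURIComponent(segment)); // POST /api/auth/login

// The uiFiltersES parameter is encoded twice: [] -> %5B%5D -> %255B%255D
console.log(encodeURIComponent(encodeURIComponent('[]'))); // %255B%255D
```

So my suspicion is that something in the proxy chain is decoding the encoded slashes, but I am not sure if that is the actual cause of the 404s.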

Provide logs and/or server output (if relevant):

2020-11-19T10:44:43.080Z INFO [request] beater/common_handler.go:184 handled request {"request_id": "bda8f8dc-72f0-4692-ba02-07c454f5e8ae", "method": "POST", "URL": "/intake/v2/events", "content_length": 1490, "remote_address": "10.160.0.14", "user-agent": "elasticapm-node/3.8.0 elastic-apm-http-client/9.4.1 node/10.17.0", "response_code": 202}
2020-11-19T10:45:02.941Z INFO [request] beater/common_handler.go:184 handled request {"request_id": "5499eda7-33d3-45fe-ab51-ffd6921cbf36", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "10.24.21.1", "user-agent": "kube-probe/1.15+", "response_code": 200}
2020-11-19T10:45:10.020Z INFO [request] beater/common_handler.go:184 handled request {"request_id": "7716f8be-ad0d-4da4-9a20-6a2f37b332ba", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "10.24.21.1", "user-agent": "kube-probe/1.15+", "response_code": 200}
2020-11-19T10:45:12.941Z INFO [request] beater/common_handler.go:184 handled request {"request_id": "92aa0a38-5a70-4768-97c2-45cdbc17af48", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "10.24.21.1", "user-agent": "kube-probe/1.15+", "response_code": 200}
2020-11-19T10:45:13.082Z INFO [request] beater/common_handler.go:184 handled request {"request_id": "afda8315-41c7-41df-a3fd-ee499c3aa872", "method": "POST", "URL": "/intake/v2/events", "content_length": 1488, "remote_address": "10.160.0.14", "user-agent": "elasticapm-node/3.8.0 elastic-apm-http-client/9.4.1 node/10.17.0", "response_code": 202}
2020-11-19T10:45:20.020Z INFO [request] beater/common_handler.go:184 handled request {"request_id": "34c486b4-24d5-474c-a432-eead5954018c", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "10.24.21.1", "user-agent": "kube-probe/1.15+", "response_code": 200}

Thanks,
Ganeshbabu R