Getting RUM to connect to APM server in k8s

Kibana version:
7.13.2
Elasticsearch version:
7.13.2
APM Server version:
7.13.2
APM Agent language and version:
1.3.0 (React / TypeScript)
Browser version:
Chrome Version 91.0.4472.114
Original install method (e.g. download page, yum, deb, from source, etc.) and version:

I have Minikube running on Linux. The Elastic Stack is running in the logging namespace using the following setup:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: logging
spec:
  version: 7.13.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: logging
spec:
  version: 7.13.2
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      spec:
        type: ClusterIP
---
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm-server
  namespace: logging
spec:
  version: 7.13.2
  count: 1
  elasticsearchRef:
    name: "elasticsearch"
  config:
    apm-server:
      rum.enabled: true
      rum.allow-origin: ['*']
      rum.allow-methods: ["OPTIONS", "HEAD", "GET", "POST", "PUT", "DELETE"]
      rum.allow-headers: ["Authorization", "X-Requested-With","X-Auth-Token","Content-Type", "Content-Length"]      
      ilm.enabled: true
  http:
    service:
      spec:
        type: ClusterIP
    tls:
      selfSignedCertificate:
        disabled: true
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-kibana
    kubernetes.io/ingress.class: nginx
    nginx.org/proxy-connect-timeout: "30s"
    nginx.org/proxy-read-timeout: "20s"
    nginx.org/proxy-send-timeout: "60s"
    nginx.org/client-max-body-size: "4m"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  labels:
    app: kibana
  name: kibana-kibana
  namespace: logging
spec:
  rules:
  - host: kibana.NOTDISCLOSED.com
    http:
      paths:
      - backend:
          service:
            name: kibana-kb-http
            port:
              number: 5601
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - kibana.NOTDISCLOSED.com
    secretName: kibana-NOTDISCLOSED-com-tls2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-kibana
    kubernetes.io/ingress.class: nginx
    nginx.org/proxy-connect-timeout: "30s"
    nginx.org/proxy-read-timeout: "20s"
    nginx.org/proxy-send-timeout: "60s"
    nginx.org/client-max-body-size: "4m"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, OPTIONS, DELETE"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "Access-Control-Allow-Headers, Origin,Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers"
  labels:
    app: apm-server
  name: apm-server-ingress
  namespace: logging
spec:
  rules:
  - host: apm.NOTDISCLOSED.com
    http:
      paths:
      - backend:
          service:
            name: apm-server-apm-http
            port:
              number: 8200
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - apm.NOTDISCLOSED.com
    secretName: apm-NOTDISCLOSED-com-tls

Minikube is accessed through an nginx reverse proxy that works for Kibana. Relevant config:

stream {

    map $ssl_preread_server_name $name {
        kibana.NONDISCLOSED.com  https_backend;
        apm.NONDISCLOSED.com     https_backend;
        default                  https_default_backend;
    }

    upstream https_backend {
        server 192.168.49.2:443;
    }

    upstream https_default_backend {
        server 192.168.49.2:31560;
    }

    server {
        listen 443;
        proxy_pass $name;
        ssl_preread on;
    }
}

Fresh install or upgraded from other version?

Fresh install

Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed the index pattern, generated custom templates, changed agent configuration, etc.?

The microservices (Java backends, React FE) are running in other namespaces (stage, prod) with Istio; the Elastic Stack is not running with Istio since I cannot get that working. The Java service connects correctly from inside the Minikube cluster, after copying some secrets between namespaces, using:

      - name: ELASTIC_APM_SERVER_URL 
        value: "http://apm-server-apm-http.logging:8200" 

Since the React part runs in the browser, my assumption is that I need HTTPS to reach the APM server. The routing config looks like this:

import React, { Component} from 'react';
import { init as initApm } from '@elastic/apm-rum'
import { ApmRoute } from '@elastic/apm-rum-react'
import {
  BrowserRouter as Router,
  Redirect
} from 'react-router-dom'

import LandingPage from './Landingpage';
import Login from './Login';
import MainMenu from './MainMenu';
import PrivateRoute from "./Privateroute";
import Signup from './Signup';

export const apm = initApm({
    serviceName: 'web-client',
    serverUrl: 'https://apm.NOTDISCLOSED.com',
    environment: 'stage',
    debug: true
  });
  

class RoutePage extends Component {
    render() {
        return (
            <Router>
                <Redirect to={'/landingpage'}/> 
                <div>
                    <ApmRoute path="/landingpage" component={LandingPage}/>
                    <ApmRoute path="/getstarted" component={Signup}/>
                    <ApmRoute path="/login" component={Login}/>
                    <PrivateRoute path="/logedin" >
                        <MainMenu/>                            
                    </PrivateRoute>
                </div>
            </Router>
        );
    }
}

export default RoutePage;

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):

When going to the React FE there are CORS errors. If I run Chrome using google-chrome --disable-web-security --user-data-dir=/tmp/, everything works and the React FE appears in Kibana under APM.

Steps to reproduce:

  1. Start Minikube.
  2. Create the Elastic stack according to the above in the logging namespace (using URLs you control).
  3. Create a React FE application in the stage namespace and access it in Chrome.

Errors in browser console (if relevant):

Access to XMLHttpRequest at 'https://apm.NONDISCLOSED.com/intake/v2/rum/events' from origin 'https://staging.NONDISCLOSED.com' has been blocked by CORS policy: Request header field content-encoding is not allowed by Access-Control-Allow-Headers in preflight response.
logging-service.js:50 [Elastic APM] Failed sending events! Error: https://apm.NONDISCLOSED.com/intake/v2/rum/events HTTP status: 0
at e.t._constructError (apm-server.js:108)
at apm-server.js:37
levels.forEach.e. @ logging-service.js:50
apm.NONDISCLOSED.com/intake/v2/rum/events:1 Failed to load resource: net::ERR_FAILED

Provide logs and/or server output (if relevant):

Hi @Maguno,

Thanks for creating the issue and also providing the detailed information.

APM Server already has the Content-Encoding header in its allowed header list - see Configure Real User Monitoring (RUM) | APM Server Reference [7.13] | Elastic.

So I think it's a problem with your configuration: the apm-server-ingress does not include the Content-Encoding header in its allowed CORS list.

 nginx.ingress.kubernetes.io/cors-allow-headers: "Access-Control-Allow-Headers, Origin,Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers"

Could you try adding the header to the allowed list here?
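
For example, something like this (a sketch that keeps your existing headers and simply appends Content-Encoding):

    nginx.ingress.kubernetes.io/cors-allow-headers: "Access-Control-Allow-Headers, Origin, Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers, Content-Encoding"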

Thanks,
Vignesh

Excellent, thanks for a great solution. Now it works.
