Why does apm-server not display frontend information in Kibana when the APM JavaScript agent's requests go through an ingress?

request route

RUM config

If I filter on dotnet (the backend), it's empty.

Did I miss something?

Hi @wajika

Thanks for creating the issue. Could you please provide more details on what information you are currently not able to see in the UI? Is it Transactions or Errors?

Also, from your config, the RUM agent's instrumentation is set to false, which disables auto-instrumentation; distributed tracing will also not work as expected unless the trace header is propagated manually.
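
For example, the agent would normally be initialized with instrumentation left enabled, roughly like this (the service name and URLs below are placeholders, not taken from your setup):

import { init as initApm } from '@elastic/apm-rum';

const apm = initApm({
  serviceName: 'frontend-app',                          // placeholder service name
  serverUrl: 'https://apm.xxxx.com',                    // the ingress host in front of apm-server
  instrument: true,                                     // default; false disables auto-instrumentation
  distributedTracingOrigins: ['https://api.xxxx.com'],  // placeholder backend origin for trace header propagation
});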

I am a bit lost on the second picture. Are you filtering for documents from the RUM agent or from the dotnet agent?

Thanks,
Vignesh

@vigneshshanmugam Thank you for your reply.

1. There are no transactions or errors in Kibana.
2. Regarding RUM agent instrumentation: isn't it true by default?
3. If the ingress is not used, then APM works normally (frontend transactions and errors are generated).
4. Regarding the second picture, I want to exclude dotnet information.

Am I missing some parameters?

Thanks for the details.

Yes, instrument is set to true by default. But the first picture with the RUM config sets this flag to false, which would mean the application is not instrumented.

If the ingress is not used, then APM works normally (frontend transactions and errors are generated).

Interesting. Could the ingress by any chance be dropping the requests from the client/RUM agent? Can you check the RUM agent debug logs (logLevel: debug) to see whether transactions are being created, and also check the browser console for errors when sending those events to the ingress?
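
For example, something like this (same placeholder values as in the earlier sketch):

import { init as initApm } from '@elastic/apm-rum';

const apm = initApm({
  serviceName: 'frontend-app',        // placeholder
  serverUrl: 'https://apm.xxxx.com',  // the ingress host
  logLevel: 'debug',                  // print agent activity and send failures to the browser console
});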

Regarding the second picture, I want to exclude dotnet information.

Thanks, got it now.

Thanks,
Vignesh

I did not find any errors in the browser console.




I think everything seems normal.

Regarding instrument: if instrument: true, transactions will be generated every time the webpage is clicked.
Doesn't this produce too much information?

Requests to a single API endpoint.


APM server status is normal

I found that using nginx proxy_pass can also produce transactions.
upstream apm {
    server 192.168.10.250:8200;
}

server {
    server_name apmserver.xxx.com;
    location / {
        proxy_pass http://apm;
    }
}

I still don't know what happened.

@vigneshshanmugam hello. can you provide some ideas?

@wajika do I understand your previous message correctly, that the monitoring and ingestion worked fine with the nginx proxy in between? If so, there might be a connection issue without the proxy.
Can you please check the APM Server log outputs for the setup where no events are indexed in ES and see if any errors are logged.

Sorry, I didn't understand what you meant. Do you think it's the proxy's problem?
I didn't find any errors in the apm-server pod. @simitt

I interpreted your recent comment as saying that you did manage to get data into Elasticsearch through the APM Server when using the nginx setup. Is that right?
If yes, my assumption is indeed that the issues you see might be related to a connectivity issue in your setup.

I analyzed the apm-server logs and found that there are only event logs. If apm-server receives events, does that mean the frontend data has been received?

2020-04-06T07:05:49.491Z INFO [request] middleware/log_middleware.go:97 request accepted {"request_id": "947b65f4-4302-4abf-9858-ea138c81c4d1", "method": "POST", "URL": "/intake/v2/rum/events", "content_length": 1554, "remote_address": "192.168.51.210", "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4100.0 Safari/537.36", "response_code": 202}
2020-04-06T07:05:49.966Z INFO [request] middleware/log_middleware.go:97 request accepted {"request_id": "f3b3778d-44f5-4422-81b2-2fa66b2b98e3", "method": "POST", "URL": "/intake/v2/rum/events", "content_length": 3182, "remote_address": "192.168.51.210", "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36", "response_code": 202}
2020-04-06T07:05:51.458Z INFO [request] middleware/log_middleware.go:97 request accepted {"request_id": "4043ea75-c4f5-430f-803c-2a02a7f657c8", "method": "POST", "URL": "/intake/v2/rum/events", "content_length": 13633, "remote_address": "192.168.51.210", "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4100.0 Safari/537.36", "response_code": 202}
2020-04-06T07:05:51.461Z INFO [request] middleware/log_middleware.go:97 request accepted {"request_id": "bfe9771e-a826-47fd-9da3-765db076c8ff", "method": "POST", "URL": "/intake/v2/rum/events", "content_length": 13042, "remote_address": "192.168.51.210", "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4100.0 Safari/537.36", "response_code": 202}
2020-04-06T07:05:57.928Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "471e9a17-6795-4530-bb8a-ebf840d2f495", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:05:59.263Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "d3f154f8-9277-4f06-9e79-b9bdcdffdcd6", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:06:07.928Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "f0c8ce8a-37dd-4592-a86f-7d4cfaf29f66", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:06:09.263Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "befb02dc-5afb-4473-99e6-64ab94fc4595", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T06:41:42.782Z INFO [request] middleware/log_middleware.go:97 request accepted {"request_id": "a209092b-b05a-4be4-9328-a3bb89559a0d", "method": "POST", "URL": "/intake/v2/rum/events", "content_length": 2572, "remote_address": "192.168.51.210", "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4100.0 Safari/537.36", "response_code": 202}
2020-04-06T07:06:17.928Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "3212e3aa-af51-4d46-a844-98f23b988f25", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:06:19.263Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "da13de83-3e0e-4d2c-9cfe-af024685a9f8", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:06:27.928Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "28cc0b18-ae89-42cd-b179-3379241da983", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:06:29.263Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "2803caa3-2cc4-46b9-9b63-79d6c2b7ccbd", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:06:37.928Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "4cbb69f5-cd53-4c6a-a556-698bb9c38906", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:06:39.263Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "1a36348f-22f0-47be-8c52-203434bd00e0", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}
2020-04-06T07:06:47.928Z INFO [request] middleware/log_middleware.go:97 request ok {"request_id": "0781fb4c-173d-4834-9da1-333844e37bab", "method": "GET", "URL": "/", "content_length": 0, "remote_address": "192.168.51.212", "user-agent": "kube-probe/1.17", "response_code": 200}

@simitt

request -> ingress controller -> apm-server
Only backend transactions are generated.

nginx-ingress-rule

kubectl get ing apm-server-es-output -oyaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/cors-allow-headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS, DELETE
    nginx.ingress.kubernetes.io/cors-allow-origin: '*'
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  backend:
    serviceName: apm-server-es-output
    servicePort: 8200
  rules:
  - host: apm.xxxx.com
    http:
      paths:
      - backend:
          serviceName: apm-server-es-output
          servicePort: 8200
        path: /

The above logs do indeed indicate that the APM Server successfully received events from the RUM agent. The APM Server then sends the data asynchronously to Elasticsearch. If you do not see any error messages in the APM Server logs, it suggests that the data has been successfully ingested into Elasticsearch.

@simitt

I deleted the old data and found no new data in Elasticsearch.

When looking at your APM Server logs now, do you see incoming requests since you deleted the data (similar to what we discussed above)? If so, do you see any errors?

Please go to Kibana/dev tools and run following queries:

GET apm*/_search

Does this return any documents?

GET _cat/indices/apm*

Does this return APM indices?

@simitt
Elasticsearch did not receive any data from the APM Server.

So to summarize: you ensured that the connection between the specific agent and the APM Server works as expected, by seeing incoming data in the APM Server logs (log lines that show "URL": "/intake/v2/rum/events"). You ensured that the incoming data you see is in fact sent from the agent whose data is missing in Elasticsearch. Yet although no errors are logged by the APM Server, the data is not indexed into Elasticsearch.

This sounds quite uncommon, as the APM Server would usually log an error if data cannot be ingested. Which modifications have you made to your apm-server.yml file? (If you post details here, please make sure to remove all sensitive information.)
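
For reference, a minimal apm-server.yml with RUM enabled and the Elasticsearch output would look roughly like this (the Elasticsearch host below is a placeholder, not taken from your setup):

apm-server:
  host: "0.0.0.0:8200"
  rum:
    enabled: true                          # required for the RUM/JavaScript agent

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]     # placeholder; point this at your cluster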

I prepared a new environment.

request >> ingress >> nginx-controller >> apm-server-clusterip >> apm-server

curl http://ingress:port
{
  "build_date": "2020-02-28T22:18:38Z",
  "build_sha": "3b7823bb329e0e5bfe25e106a3b93e8f61d0451f",
  "version": "7.6.1"
}

I see "request accepted" in the apm-server log, but the data does not show up in Elasticsearch.

https://paste.ubuntu.com/p/XCYJxSpmr7/

What I find even stranger is that apm-server does not log any Elasticsearch cluster connection information, such as which Elasticsearch address it is connected to.

According to the log files, you have enabled the file output (output.file.enabled: true) rather than the Elasticsearch output; see the log line fileout/file.go:98 Initialized file output. path=/usr/share/apm-server/data/apm-server max_size_bytes=10240000 max_backups=5 permissions=-rw-------.
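
If the goal is to ship data to Elasticsearch, the output section of apm-server.yml would need to look roughly like this instead (the Elasticsearch host below is a placeholder); note that only one output can be enabled at a time:

output.file:
  enabled: false                           # disable (or remove) the file output

output.elasticsearch:
  enabled: true
  hosts: ["http://elasticsearch:9200"]     # placeholder; point this at your cluster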

@simitt
You said "Initialized file output". Does that mean all data is written to disk? Does it also mean that apm-server did not load apm-server.yml correctly?


I found a problem with the apm-server deployed using Helm.
I wrote my own YAML, apm-server ran successfully, and I saw new data in Kibana.

But there is a new problem: apm index data appears in Elasticsearch, but the Kibana APM app page is empty.


apm-server.yml
https://paste.ubuntu.com/p/dsrdSRXRct/