SSL certificate error with APM using Fleet

Kibana version: 8.2
Elasticsearch version: 8.2

APM Server version: 8.2

APM Agent language and version: nodejs

Original install method (e.g. download page, yum, deb, from source, etc.) and version:
Installed APM using Fleet Server and Elastic Agent

Fresh install or upgraded from other version?
New install

Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed index pattern, generated custom templates, changed agent configuration etc.
Not using a load balancer
Did not change the index pattern

Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
I have configured the APM setup on the server side with the following details:

./elastic-agent install --url=https://gnbsx20637.xx.yy.com:8220 --fleet-server-es=https://gnbsx20637.xx.yy.com:9200  --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2NjAwNDQ5MzMwNzA6X1ZjcmVoOGNTUGVuVkdPZG8tdFRYZw --fleet-server-policy=fsp3 --certificate-authorities=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/ca/ca.crt --fleet-server-es-ca=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/elasticsearch-ca.pem --fleet-server-cert=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/fleet-server/fleet-server.crt --fleet-server-cert-key=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/fleet-server/fleet-server.key
Elastic Agent will be installed at /opt/Elastic/Agent and will run as a service. Do you want to continue? [Y/n]:Y
{"log.level":"info","@timestamp":"2022-08-09T13:56:51.653+0200","log.origin":{"file.name":"cmd/enroll_cmd.go","file.line":783},"message":"Fleet Server - Starting","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-09T13:56:53.654+0200","log.origin":{"file.name":"cmd/enroll_cmd.go","file.line":764},"message":"Fleet Server - Running on policy with Fleet Server integration: fsp3; missing config fleet.agent.id (expected during bootstrap process)","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-09T13:56:54.226+0200","log.origin":{"file.name":"cmd/enroll_cmd.go","file.line":461},"message":"Starting enrollment to URL: https://gnbsx20637.xx.yy.com:8220/","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-09T13:56:55.006+0200","log.origin":{"file.name":"cmd/enroll_cmd.go","file.line":261},"message":"Successfully triggered restart on running Elastic Agent.","ecs.version":"1.6.0"}
Successfully enrolled the Elastic Agent.
Elastic Agent has been successfully installed.

However, the APM Server log file /opt/Elastic/Agent/data/elastic-agent-b9a28a/logs/default/apm-server-20220810-5.ndjson shows the following error:

*{"log.level":"error","@timestamp":"2022-08-10T05:57:26.444+0200","log.logger":"beater","log.origin":{"file.name":"beater/waitready.go","file.line":64},"message":"precondition failed: x509: certificate signed b*
*y unknown authority","service.name":"apm-server","ecs.version":"1.6.0"}*

The fleet.yml is as follows:

agent:
  id: 2be2c3e4-6c5c-4233-80a5-d0c27649ed19
  monitoring.http:
    enabled: false
    host: ""
    port: 6791
    buffer: null
fleet:
  enabled: true
  access_api_key: Ry1Pamc0SUJGV0YzanFETzNYelI6UkNDNjVUOUVUYzJxakNDRVlOTXMtdw==
  protocol: https
  host: gnbsx20637.xx.yy.com:8220
  ssl:
    verification_mode: full
    certificate_authorities:
    - /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/ca/ca.crt
    renegotiation: never
  timeout: 10m0s
  proxy_disable: true
  reporting:
    threshold: 10000
    check_frequency_sec: 30
  agent:
    id: ""
  server:
    policy:
      id: fsp3
    output:
      elasticsearch:
        protocol: https
        hosts:
        - gnbsx20637.xx.yy.com:9200
        service_token: AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2NjAwMzg4NjU1Mjc6YnRjOHQ2Mm1RVUNINE9WcjlIcFRFZw
        ssl:
          verification_mode: full
          certificate_authorities:
          - /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/elasticsearch-ca.pem
          renegotiation: never
        proxy_disable: false
        proxy_headers: {}
    host: 0.0.0.0
    port: 8220
    internal_port: 8221
    ssl:
      verification_mode: full
      certificate: /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/fleet-server/fleet-server.crt
      key: /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/fleet-server/fleet-server.key
      renegotiation: never

I tried connecting to the Elasticsearch server directly (without APM) using the certificate, and it does not work without a username and password:

curl --cacert /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/elasticsearch-ca.pem https://gnbsx20637.xx.yy.com:9200
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}},"status":401}

When curl is run with a username and password, it works:

-bash-4.2$ curl --user elastic --pass xxxx --cacert /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/elasticsearch-ca.pem https://gnbsx20637.xx.yy.com:9200
Enter host password for user 'elastic':
{
  "name" : "node-3-gnbsx20637",
  "cluster_name" : "cad-elasticsearch-qa",
  "cluster_uuid" : "HW8-XdM1Rgig3Mxa1b85Mw",
  "version" : {
    "number" : "8.2.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "b174af62e8dd9f4ac4d25875e9381ffe2b9282c5",
    "build_date" : "2022-04-20T10:35:10.180408517Z",
    "build_snapshot" : false,
    "lucene_version" : "9.1.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
-bash-4.2$

Can you please help me understand where the issue is?

The elasticsearch-ca.pem certificate is the one my Kibana uses to connect to Elasticsearch, and it was generated using: ./bin/elasticsearch-certutil http

unzip ../elasticsearch-ssl-http.zip
Archive:  ../elasticsearch-ssl-http.zip
   creating: elasticsearch/
   creating: elasticsearch/node-1-gnbsx20635/
  inflating: elasticsearch/node-1-gnbsx20635/README.txt
  inflating: elasticsearch/node-1-gnbsx20635/http.p12
  inflating: elasticsearch/node-1-gnbsx20635/sample-elasticsearch.yml
   creating: elasticsearch/node-2-gnbsx20636/
  inflating: elasticsearch/node-2-gnbsx20636/README.txt
  inflating: elasticsearch/node-2-gnbsx20636/http.p12
  inflating: elasticsearch/node-2-gnbsx20636/sample-elasticsearch.yml
   creating: elasticsearch/node-3-gnbsx20637/
  inflating: elasticsearch/node-3-gnbsx20637/README.txt
  inflating: elasticsearch/node-3-gnbsx20637/http.p12
  inflating: elasticsearch/node-3-gnbsx20637/sample-elasticsearch.yml
   creating: kibana/
  inflating: kibana/README.txt
  inflating: kibana/elasticsearch-ca.pem
  inflating: kibana/sample-kibana.yml

I am using self-signed certificates for this on-prem setup.
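Since the APM Server error complains about a certificate "signed by unknown authority", one sanity check (a sketch, not something done in this thread) is to compare the CA bundled in the node's http.p12 with the elasticsearch-ca.pem that is being handed to the agent (openssl will prompt for the keystore password):

openssl pkcs12 -in elasticsearch/node-3-gnbsx20637/http.p12 -nokeys -cacerts
openssl x509 -in kibana/elasticsearch-ca.pem -noout -subject -fingerprint

If the subjects and fingerprints do not line up, the wrong CA file is being distributed.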

The fleet.server.output.elasticsearch configuration in fleet.yml is only used by Fleet Server, and not by other Elastic Agent integrations such as APM Server.

To configure APM Server's Elasticsearch TLS CA certs, please follow these docs:
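In this case that boils down to giving Fleet's Elasticsearch output the CA file, so that every integration Elastic Agent runs (including APM Server) trusts it. As a sketch of the value that ends up being used further down in this thread, added via the Fleet output settings in Kibana (the exact UI location varies slightly by version):

ssl.certificate_authorities: ["/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/elasticsearch-ca.pem"]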

Hello Andrew,

Thanks for your response.

I am a bit confused here. My objective is to set up APM for my application with the Node.js integration, and then use an OpenTelemetry collector in my final integration, but I am stuck at the first stage itself, within Elastic APM.
I started by following the documentation on the architecture at this link:

https://www.elastic.co/guide/en/apm/guide/8.2/apm-components.html

Then I used the following steps to configure APM with Fleet, using the self-managed option:

https://www.elastic.co/guide/en/apm/guide/8.2/apm-quick-start.html

Then, on the APM agent side, I used the Node.js configuration from step 4 of apm-quick-start.html with the following config:

const apm = require('elastic-apm-node').start({
  serviceName: 'apm-server',
  secretToken: '',
  apiKey: '',
  serverUrl: 'https://gnbsx20637.xx.yy.com:8220',
  serverCaCertFile: 'ca.crt',
})

and I am getting this error:

APM Server transport error: unexpected status from APM Server information endpoint: 404"}

That means that, in the first hyperlink of the documentation, the Fleet Server is what I was missing in the architecture. And now, from your response, my understanding is that data flows from:
apm-agent => elastic-agent => fleet-server (which also includes an elastic-agent) => elasticsearch

Is that right?

Now, coming to the point you highlighted about fleet.yml: the fleet.yml was generated automatically by the command below, while following the steps in the doc "Quick start | APM User Guide [8.2] | Elastic":

./elastic-agent install --url=https://gnbsx20637.xx.yy.com:8220 --fleet-server-es=https://gnbsx20637.xx.yy.com:9200  --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2NjAwNDQ5MzMwNzA6X1ZjcmVoOGNTUGVuVkdPZG8tdFRYZw --fleet-server-policy=fsp3 --certificate-authorities=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/ca/ca.crt --fleet-server-es-ca=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/elasticsearch-ca.pem --fleet-server-cert=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/fleet-server/fleet-server.crt --fleet-server-cert-key=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/fleet-server/fleet-server.key

Please help me connect these scattered dots!

That means that, in the first hyperlink of the documentation, the Fleet Server is what I was missing in the architecture. And now, from your response, my understanding is that data flows from:
apm-agent => elastic-agent => fleet-server (which also includes an elastic-agent) => elasticsearch

Apologies for the confusing docs! I'll try to clarify what each of the relevant components is and does -- with the exception of Kibana and Elasticsearch, as I expect that's clear.

  • APM Agents are libraries that you add to your application to instrument them. They send data to APM Server.
  • APM Server receives data from APM Agents over port 8200, and writes them directly to Elasticsearch.
  • Elastic Agent is a sort of supervisor process, which runs APM Server. It connects to Fleet Server to receive configuration defined in Kibana (in the Fleet app) and passes it on to APM Server (or whatever integration it is running). It also does things like feeding agent status back to Fleet Server, for display in Kibana.
  • Fleet Server connects to Elasticsearch to receive configuration defined in Kibana, and makes it available to Elastic Agents over port 8220 (not 8200!).

You can run APM Server without Elastic Agent and Fleet Server -- this is what is referred to as "standalone (legacy) mode" at Components and documentation | APM User Guide [8.2] | Elastic. I'll assume you do want to run it with Elastic Agent though.

If you're running Fleet Server and APM Server on the same host, you would just have one Elastic Agent, installed like you did. You might alternatively run Fleet Server on a dedicated host, and another Elastic Agent dedicated to the APM integration on another host. Either way, the --fleet-server-* flags you pass into elastic-agent install are only relevant to Fleet Server, and not to any other components run by Elastic Agent. For any other integration run by Elastic Agent, you will need to set TLS CA certs etc. as described at Configure SSL/TLS for self-managed Fleet Servers | Fleet and Elastic Agent Guide [8.11] | Elastic
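For example, enrolling a second, APM-only Elastic Agent into Fleet would look roughly like this (a sketch; the enrollment token is a placeholder you would generate in the Fleet app):

elastic-agent install \
  --url=https://gnbsx20637.xx.yy.com:8220 \
  --enrollment-token=<enrollment-token-from-fleet> \
  --certificate-authorities=/data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/ca/ca.crt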

Ultimately, the flow of data will be: apm-agent => apm-server => elasticsearch.
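Given that flow, the Node.js agent should point at APM Server on port 8200 rather than at Fleet Server on 8220, which would also explain the 404 from the information endpoint earlier. A sketch of the agent config under that assumption, with placeholder values:

const apm = require('elastic-apm-node').start({
  serviceName: 'my-node-service',   // the name of your own application
  secretToken: '<secret token, if one is configured in the APM integration>',
  serverUrl: 'http://gnbsx20637.xx.yy.com:8200',   // APM Server, not Fleet Server (8220)
})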

Let me know if that helps.

Many thanks, Andrew, for those component clarifications! It's now taking good shape for me. Since Elastic's documentation warns about the deprecation of legacy APM, I prefer to stay on the Elastic Agent + APM setup rather than legacy APM.

From the link :

I was missing the step below in the Fleet config, and after adding it the error disappeared:

ssl.certificate_authorities: ["/path/to/your/elasticsearch-ca.crt"]

However, my apm-server under elastic-agent is still not working. The nmap command shows the port as closed:

nmap -p 8200 gnbsx20637.xx.yy.com

Starting Nmap 5.51 ( http://nmap.org ) at 2022-08-12 13:58 IST
Nmap scan report for gnbsx20637.xx.yy.com (xxxx.xxx.xxx.xxx)
Host is up (0.13s latency).
PORT STATE SERVICE
8200/tcp closed trivnet1

Nmap done: 1 IP address (1 host up) scanned in 0.55 seconds

The apm-server and heartbeat logs are showing errors.
/opt/Elastic/Agent/data/elastic-agent-b9a28a/logs/default/apm-server-20220812-5.ndjson

{"log.level":"info","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"beater/server.go","file.line":233},"message":"Starting apm-server [14db2276c75623e94735f1993d5b1e
b4c5ed2036 built 2022-04-20 06:21:50 +0000 UTC]. Hit CTRL-C to stop it.","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"beater/http.go","file.line":112},"message":"Stop listening on: localhost:8200","service.name":"ap
m-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"beater/http.go","file.line":88},"message":"RUM endpoints enabled!","service.name":"apm-server","e
cs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"beater/http.go","file.line":91},"message":"CORS related setting `apm-server.rum.allow_origins` al
lows all origins. Consider more restrictive setting for production use.","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"beater/http.go","file.line":106},"message":"SSL disabled.","service.name":"apm-server","ecs.versi
on":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"beater/server.go","file.line":249},"message":"Server stopped","service.name":"apm-server","ecs.ve
rsion":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"apm-server/main.go","file.line":188},"message":"transaction metrics aggregation stopped","service
.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"apm-server/main.go","file.line":188},"message":"service destinations aggregation stopped","servic
e.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"beater","log.origin":{"file.name":"beater/beater.go","file.line":329},"message":"context canceled","service.name":"apm-server","ecs
.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.863+0200","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":105},"message":"elasticsearch url: https://10.129.211
.113:9200","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"beater","log.origin":{"file.name":"apm-server/main.go","file.line":77},"message":"creating service destinations aggregation with con
fig: {Interval:1m0s MaxGroups:10000}","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":130},"message":"Path / added to request handler","service.name":"apm-ser
ver","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":130},"message":"Path /config/v1/agents added to request handler","servic
e.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":130},"message":"Path /config/v1/rum/agents added to request handler","se
rvice.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":130},"message":"Path /intake/v2/rum/events added to request handler","se
rvice.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":130},"message":"Path /intake/v3/rum/events added to request handler","se
rvice.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":130},"message":"Path /intake/v2/events added to request handler","servic
e.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":130},"message":"Path /intake/v2/profile added to request handler","servi
ce.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"handler","log.origin":{"file.name":"api/mux.go","file.line":130},"message":"Path /firehose added to request handler","service.name":
"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"beater","log.origin":{"file.name":"beater/server.go","file.line":233},"message":"Starting apm-server [14db2276c75623e94735f1993d5b1e
b4c5ed2036 built 2022-04-20 06:21:50 +0000 UTC]. Hit CTRL-C to stop it.","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"beater","log.origin":{"file.name":"beater/http.go","file.line":88},"message":"RUM endpoints enabled!","service.name":"apm-server","e
cs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"beater","log.origin":{"file.name":"beater/http.go","file.line":91},"message":"CORS related setting `apm-server.rum.allow_origins` al
lows all origins. Consider more restrictive setting for production use.","service.name":"apm-server","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:05:13.865+0200","log.logger":"beater","log.origin":{"file.name":"beater/http.go","file.line":106},"message":"SSL disabled.","service.name":"apm-server","ecs.versi
on":"1.6.0"}

The heartbeat log is also showing the errors below:

{"log.level":"error","@timestamp":"2022-08-12T10:34:19.260+0200","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to b
ackoff(elasticsearch(http://localhost:9200)): Get \"http://localhost:9200\": dial tcp 127.0.0.1:9200: connect: connection refused","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:34:19.260+0200","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":141},"message":"Attempting to reconnect
 to backoff(elasticsearch(http://localhost:9200)) with 42 reconnect attempt(s)","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-08-12T10:34:19.260+0200","log.logger":"esclientleg","log.origin":{"file.name":"transport/logging.go","file.line":37},"message":"Error dialing dial tcp 127.0.0.1:9200: connect: connection refused","service.name":"heartbeat","network":"tcp","address":"localhost:9200","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:34:43.049+0200","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":184},"message":"Non-zero metrics in the last 30s","service.name":"hea
rtbeat","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":790,"time":{"ms":20}},"total":{"ticks":1650,"time":{"ms":40},"value":0},"user":{"ticks":860,"time":{"ms":20}}},"handles":{"limit":{"hard":4096,"
soft":1024},"open":14},"info":{"ephemeral_id":"43b6cc9b-b62d-4b42-ac27-fa2aae68ccea","uptime":{"ms":1770022},"version":"8.2.0"},"memstats":{"gc_next":11631920,"memory_alloc":6976520,"memory_total":154071784,"rs
s":97738752},"runtime":{"goroutines":35}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":10,"retry":1}}},"system":{"load":{"1":0.07,"15"
:0.09,"5":0.07,"norm":{"1":0.0117,"15":0.015,"5":0.0117}}}},"ecs.version":"1.6.0"}}
{"log.level":"error","@timestamp":"2022-08-12T10:34:57.551+0200","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to b
ackoff(elasticsearch(http://localhost:9200)): Get \"http://localhost:9200\": dial tcp 127.0.0.1:9200: connect: connection refused","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:34:57.551+0200","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":141},"message":"Attempting to reconnect
 to backoff(elasticsearch(http://localhost:9200)) with 43 reconnect attempt(s)","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2022-08-12T10:34:57.551+0200","log.logger":"esclientleg","log.origin":{"file.name":"transport/logging.go","file.line":37},"message":"Error dialing dial tcp 127.0.0.1:9200: con
nect: connection refused","service.name":"heartbeat","network":"tcp","address":"localhost:9200","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-12T10:35:13.049+0200","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":184},"message":"Non-zero metrics in the last 30s","service.name":"hea
rtbeat","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":800,"time":{"ms":10}},"total":{"ticks":1670,"time":{"ms":20},"value":0},"user":{"ticks":870,"time":{"ms":10}}},"handles":{"limit":{"hard":4096,"
soft":1024},"open":14},"info":{"ephemeral_id":"43b6cc9b-b62d-4b42-ac27-fa2aae68ccea","uptime":{"ms":1800022},"version":"8.2.0"},"memstats":{"gc_next":11631920,"memory_alloc":9217992,"memory_total":156313256,"rs
s":97738752},"runtime":{"goroutines":35}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":10,"retry":1}}},"system":{"load":{"1":0.04,"15"
:0.09,"5":0.06,"norm":{"1":0.0067,"15":0.015,"5":0.01}}}},"ecs.version":"1.6.0"}}

In my setup there is a 3-node Elasticsearch cluster (gnbsx20635, gnbsx20636, gnbsx20637), and on the 3rd node, gnbsx20637, I have set up Kibana as well as APM + Fleet, all on the same host. Also, each node has 2 IP addresses (private and public): the private one is used for internode communication and the public one for external connections.

Can you please suggest what I am missing here?

Great, progress! :tada:

Can you please suggest what i am missing here?

Looking at the APM Server log output, it appears to be listening only on localhost -- this is the default configuration in the APM integration policy.

You should go into the Fleet app in Kibana and edit the APM integration policy, setting the Host configuration to :8200. That will listen on all network interfaces. Alternatively, set it to private_ip:8200 if you only want to receive data on the private network.
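Once the updated policy has rolled out to the agent, a quick way to confirm the listener is open would be something like the following sketch (any HTTP response at all, even an empty or unauthorized one, means the port is now listening):

nmap -p 8200 gnbsx20637.xx.yy.com
curl -i http://gnbsx20637.xx.yy.com:8200/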

I am unable to open the Fleet APM integration policy; it shows this error:

Error loading data
There was an error loading this integration information

I tried to change the apm_server.yml at /opt/Elastic/Agent/data/elastic-agent-b9a28a/install/apm-server-8.2.0-linux-x86_64/apm_server.yml, changing localhost to the IP address of the Elasticsearch host, and did the same in the heartbeat install, but this did not help.

As you can see, the apm-server log that I pasted above says "SSL disabled". However, I have used SSL. There seems to be a gap here.

Here is my elastic-agent status:

 /usr/bin/elastic-agent status
Status: HEALTHY
Message: (no message)
Applications:
  * apm-server             (HEALTHY)
                           Running
  * fleet-server           (HEALTHY)
                           Running on policy with Fleet Server integration: fsp3
  * heartbeat              (CONFIGURING)
                           Updating configuration
  * filebeat_monitoring    (HEALTHY)
                           Running
  * metricbeat_monitoring  (HEALTHY)
                           Running

My current fleet.yml

agent:
  id: 88499264-0f06-4a5f-987f-162ee4affaf6
  headers: {}
  logging.level: info
  monitoring.http:
    enabled: false
    host: ""
    port: 6791
    buffer: null
fleet:
  access_api_key: emxVUWtZSUI0YmVoLTl1M2NZUXQ6c2tWQTBUbVhUZk9TTTBxLXRGR2ZFQQ==
  agent:
    id: ""
  enabled: true
  host: 10.129.211.115:8220
  protocol: https
  proxy_disable: true
  reporting:
    check_frequency_sec: 30
    threshold: 10000
  server:
    host: 0.0.0.0
    internal_port: 8221
    output:
      elasticsearch:
        hosts:
        - 10.129.211.113:9200
        protocol: https
        proxy_disable: false
        proxy_headers: null
        service_token: AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2NjAwNDQ5MzMwNzA6X1ZjcmVoOGNTUGVuVkdPZG8tdFRYZw
        ssl:
          certificate_authorities:
          - /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/elasticsearch-ca.pem
          renegotiation: never
          verification_mode: full
    policy:
      id: fsp3
    port: 8220
    ssl:
      certificate: /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/fleet-server/fleet-server.crt
      key: /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/fleet-server/fleet-server.key
      renegotiation: never
      verification_mode: full
  ssl:
    certificate_authorities:
    - /data/essw/elastic-agent-8.2.0-linux-x86_64/fleet-secure/ca/ca.crt
    renegotiation: never
    verification_mode: full
  timeout: 10m0s

apm_server.yml

apm-server:
  host: "10.129.211.115:8200"

output.elasticsearch:
  hosts: ["10.129.211.113:9200"]

heartbeat.yml

heartbeat.config.monitors:
  path: ${path.config}/monitors.d/*.yml
  reload.enabled: false
  reload.period: 5s

heartbeat.monitors:
- type: http
  enabled: false
  id: my-monitor
  name: My Monitor
  urls: ["http://10.129.211.113:9200"]
  schedule: '@every 10s'

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

setup.kibana:

output.elasticsearch:
  hosts: ["10.129.211.113:9200"]

I am unable to open the Fleet APM integration policy; it shows an error

I haven't seen that error before. Not being able to edit the APM policy is a bit of a showstopper, so we'll need to get someone with more Fleet knowledge to chime in.

I tried to change the apm_server.yml at /opt/Elastic/Agent/data/elastic-agent-b9a28a/install/apm-server-8.2.0-linux-x86_64/apm_server.yml, changing localhost to the IP address of the Elasticsearch host

This won't work, as APM Server receives its configuration from Fleet.

As you can see, the apm-server log that I pasted above says "SSL disabled". However, I have used SSL. There seems to be a gap here.

With the exception of the certificate files themselves, APM Server's TLS configuration is also received from Fleet. This needs to be configured in the APM policy:

Thanks, Andrew, for your response. Indeed, I don't see much information about APM + Fleet configurations in the discussion forums; most of it is about the legacy APM config rather than Fleet. Since Fleet is the future, I adopted this approach, but I am stuck here. Getting an expert on Fleet involved would certainly be a plus.

I missed mentioning one point related to the screenshot of the Kibana integrations page that you pasted above. My setup is behind an internet proxy, and I used the following in kibana.yml to bypass the package download, as suggested on the Elastic GitHub for cases where downloading packages from epr.elastic.co is not possible despite using http_proxy and other proxy bypass methods.

I suppose this should not be blocking?

xpack.fleet.packages:
  - name: apm
    version: latest
  - name: synthetics
    version: latest

xpack.fleet.agentPolicies:
  - name: fsp3
    description: A preconfigured policy that contains all bundled packages - for testing!
    id: fsp3

    is_default: true
    is_default_fleet_server: true

    namespace: default

    package_policies:
      - name: apm-1
        id: apm-1
        package:
          name: apm
      - name: synthetics-1
        id: synthetics-1
        package:
          name: synthetics
      - name: elastic_agent-1
        id: elastic_agent-1
        package:
          name: elastic_agent
      - name: fleet_server-1
        id: fleet_server-1
        package:
          name: fleet_server
 

@msk_76 ah, that might explain it. There are some known issues around the Fleet UI when the package registry is inaccessible.

Given that you're preconfiguring integration packages, you could also set the required configuration there. Try modifying the "apm-1" bit under package_policies to look like this:

- name: apm-1
  id: apm-1
  package:
    name: apm
  inputs:
    - type: apm
      keep_enabled: true
      vars:
        - name: host
          value: "0.0.0.0:8200"
        - name: tls_enabled
          value: true
        - name: tls_certificate
          value: /path/to/apm-server.crt
        - name: tls_key
          value: /path/to/apm-server.key
        - name: secret_token
          value: <your_secret_token>
        - name: api_key_enabled
          value: true

Thanks again. With your new inputs in kibana.yml, it has moved a bit forward, but it is still throwing package errors while browsing integrations:

Some errors in the Kibana log:

@msk_76 sorry, to be clear: the changes I suggested above won't help with the issues with Fleet not working well when EPR is inaccessible. They should allow you to get APM Server configured though.

If you haven't already seen this, you might also want to configure Kibana to connect to EPR through your proxy: Air-gapped environments | Fleet and Elastic Agent Guide [8.3] | Elastic
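For reference, the Kibana setting that doc points at is, if I remember the name correctly, xpack.fleet.registryProxyUrl; a sketch for kibana.yml, with a placeholder proxy address:

xpack.fleet.registryProxyUrl: "http://your-proxy.example.com:3128"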

Thanks again. I did study the air-gapped environment approach for downloads, but unfortunately it does not support a proxy gateway with username/password authentication, and our proxy requires a user and password to pass through it. That was the reason I was looking for alternatives.

Thanks Andrew,
I am back again, as I managed to bypass my internet proxy and can now see the integrations inside Kibana. With TLS kept disabled, my Node.js APM client is unable to reach the APM Server. To avoid TLS complexity, I have kept the anonymous settings in the APM integration in Kibana.

Did I miss something here?

error (ECONNREFUSED): connect ECONNREFUSED gnbsx20637.gnb.st.com:8200

@msk_76 by default the APM integration listens only on localhost, as there's no authentication enabled by default. If you change the Host configuration to "0.0.0.0:8200", then APM Server will start listening on all network interfaces. You may then want to configure either secret token or API Key authentication, particularly if your APM Server host is accessible over untrusted networks.
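Once a secret token is set in the integration, agents have to send it as a bearer token; a quick manual check would be something like this sketch (the token value is a placeholder):

curl -i -H "Authorization: Bearer <your_secret_token>" http://gnbsx20637.xx.yy.com:8200/

With a valid token, the information endpoint returns the server's build and version details.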

Thanks for your response. I changed the Host configuration to 0.0.0.0:8200.

On the APM agent (Node.js config) side, it gives me the following. My APM agent is on a different host than the Elastic Agent running APM Server, so I suppose I need to replace localhost with the IP address of the server?

serverUrl http://localhost:8200 

(screenshot of the agent configuration snippet shown in Kibana)

You suppose correctly :slight_smile:

I think that configuration snippet is taking the localhost value from the "URL" setting in the APM integration policy. It's only used for informational purposes like this - only the "Host" setting has a functional purpose.

This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.