Expired Certificate (agent / fleet)

First of all - this is all about a private, protected, yet insecure test environment!

I installed the agents with the --insecure option, and for a few days now no data has been arriving at the server.
Fleet is showing the agents as degraded, but they seem to be able to communicate with the server (judging by the timestamp of the last communication).
As far as I can see in the agent logs, there is an outdated certificate:

{"log.level":"error","@timestamp":"2025-03-26T04:39:35.126Z","message":"Failed to connect to backoff(elasticsearch(https://192.168.0.97:9200)): Get \"https://192.168.0.97:9200\": x509: certificate has expired or is not yet valid: current time 2025-03-26T05:38:51+01:00 is after 2025-03-23T16:00:16Z","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"system/metrics-default","type":"system/metrics"},"log":{"source":"system/metrics-default"},"log.logger":"publisher_pipeline_output","log.origin":{"file.line":149,"file.name":"pipeline/client_worker.go","function":"github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*netClientWorker).run"},"service.name":"metricbeat","ecs.version":"1.6.0","ecs.version":"1.6.0"}

Renewing the certificate does not seem to be that simple; I found some related issues on GitHub, but no real solution.

Also, I wonder: why was there no info or warning in the UI about the upcoming certificate expiration?

Is there a simple solution to renew the default certificate without the need to re-enroll all clients?

Are the agents able to communicate with the Fleet Server?

The --insecure applies only to the Fleet Server communications, not to the outputs.

If the agent is able to communicate with the Fleet Server, you may solve this by changing the certificate for the output and adding it to the policy, or maybe even by telling the agent to ignore the certificate, which is also possible.
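For the "ignore the certificate" route, a sketch (test environment only, since it disables TLS verification for the output; it assumes the output's Advanced YAML configuration under Fleet → Settings → Outputs accepts Beats-style SSL options):

ssl.verification_mode: none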

Do you need to renew the CA as well? If so, then no, there is no way around it: you will need to re-enroll all agents.

@leandrojmp, thank you.
The best option would be to ignore the certificate - how is this done?

What really confuses me: I checked the "notAfter" date for three certificates, and none of them is invalid?

http.p12
notAfter=Mar 23 16:00:08 2026 GMT

transport.p12
notAfter=Feb 28 16:00:00 2122 GMT

http_ca.crt
notAfter=Mar 23 16:00:08 2026 GMT
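For reference, those dates can be read with openssl (a sketch; it assumes the default /etc/elasticsearch/certs paths, the .p12 command prompts for the keystore password, and openssl x509 only reads the first certificate of the output):

openssl x509 -in /etc/elasticsearch/certs/http_ca.crt -noout -enddate
openssl pkcs12 -in /etc/elasticsearch/certs/http.p12 -nokeys -clcerts | openssl x509 -noout -enddate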

I thought it might be a good idea to update and reboot the server.
Well, as a result, the server does not seem to start anymore.
I cannot access it any longer.

So now I will start to investigate the log files...

Well, from the Kibana log I can see that the certificate is expired, so Kibana does not start:

|||
|---|---|
|path|certs/http.p12|
|format|PKCS12|
|alias|http|
|subject_dn|CN=kali-xxxxx.xxxxxx.local|
|serial_number|8a8eabc03xxxxxxxxxxxxxb3535de27144fbb40|
|has_private_key|true|
|expiry|**2025-03-23**T16:00:16.000Z|
|issuer|CN=Elasticsearch security auto-configuration HTTP CA|

Now I am really confused.

/etc/elasticsearch/certs/http.p12 shows an expiry of

notAfter=Mar 23 16:00:08 2026 GMT

But from GET _ssl/certificates I can see that there is more than one certificate inside http.p12.
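One way to list every certificate in the keystore together with its validity dates (a sketch; keytool comes with the JDK that Elasticsearch bundles, the path is an assumption, and the command asks for the keystore password):

/usr/share/elasticsearch/jdk/bin/keytool -list -v -keystore /etc/elasticsearch/certs/http.p12 -storetype PKCS12 | grep -E 'Alias name|Owner:|Valid from'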

So I will now investigate this further.

Why does this all have to be so complicated?

The agents will use the certificates on Elasticsearch, not the certificates in Kibana.

What are the dates of the http certificate in Elasticsearch? Both the start date and the notAfter date?

The error you shared is related to Elasticsearch, not Fleet.

Can you run this in Kibana Dev Tools and share the results?

GET _ssl/certificates

Sorry @leandrojmp. I guess I am about to go crazy with this, as I am missing a lot of information here.
In the mentioned _ssl/certificates output, I found one certificate that seems to be expired:

|||
|---|---|
|path|certs/http.p12|
|format|PKCS12|
|alias|http|
|subject_dn|CN=xxxxxxx.xxxxxxx.local|
|serial_number|8a8eaxxxxxxxxxxxxxxxxx6a7b3535de27144fbb40|
|has_private_key|true|
|expiry|2025-03-23T16:00:16.000Z|
|issuer|CN=Elasticsearch security auto-configuration HTTP CA|

So now I have identified the file to be

/etc/elasticsearch/certs/http.p12

I have now followed the steps from Update certificates with the same CA | Elasticsearch Guide [7.17] | Elastic to see whether this fixes the issue.
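In short, the key step from that guide (a sketch; on a package install the tool lives under /usr/share/elasticsearch/bin, it runs interactively, and the idea is to answer "no" to generating a CSR, "yes" to using an existing CA, and then point it at the current CA so the new http certificate keeps the same trust anchor):

/usr/share/elasticsearch/bin/elasticsearch-certutil http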

This seems to be the certificate used by the http layer in Elasticsearch.

You can confirm it by looking at elasticsearch.yml and checking whether xpack.security.http.ssl.certificate points to the same file.
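A quick way to check (a sketch; auto-configured installs typically reference a keystore via xpack.security.http.ssl.keystore.path rather than a plain certificate file):

grep -A5 'xpack.security.http.ssl' /etc/elasticsearch/elasticsearch.yml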

If the CA has expired, you will need to renew the CA as well.

Moving forward in small steps...

The Kibana problem could be solved with "elasticsearch.ssl.verificationMode: none".
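For reference, the setting goes into kibana.yml (a sketch; /etc/kibana/kibana.yml on package installs, and it disables TLS verification towards Elasticsearch, so it is only acceptable in a test environment):

elasticsearch.ssl.verificationMode: none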

This made the Kibana GUI available again.

But there is still a mismatch between the certificates:

/opt/Elastic/Agent/elastic-agent status
┌─ fleet
│  └─ status: (STARTING)
└─ elastic-agent
   ├─ status: (DEGRADED) 1 or more components/units in a failed state
   └─ fleet-server-default
      ├─ status: (HEALTHY) Healthy: communicating with pid '1360'
      ├─ fleet-server-default
      │  └─ status: (FAILED) Error - failed version compatibility check with elasticsearch: x509: certificate signed by unknown authority
      └─ fleet-server-default-fleet-server-fleet_server-bad6ee92-babd-4f47-a612-eb78cb0f27ea
         └─ status: (FAILED) Error - failed version compatibility check with elasticsearch: x509: certificate signed by unknown authority
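One way to check from the shell which certificate chain the host actually gets and whether it is trusted (a sketch; the IP is the output host from the logs above, and the CA path is an assumption):

# a 401 response still proves that the TLS chain itself is trusted
curl --cacert /etc/elasticsearch/certs/http_ca.crt https://192.168.0.97:9200
openssl s_client -connect 192.168.0.97:9200 -CAfile /etc/elasticsearch/certs/http_ca.crt </dev/null 2>/dev/null | grep 'Verify return code'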

It's late now and my head is overloaded.

I used

./elasticsearch-certutil ca
./elasticsearch-certutil http

to generate a fresh set of certificates, but right now I do not know which part of the result belongs in which location.
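For orientation: the first command writes elastic-stack-ca.p12 (the CA certificate plus its key), the second writes elasticsearch-ssl-http.zip. A sketch of where the contents typically go on a package install (paths are assumptions):

unzip elasticsearch-ssl-http.zip
# elasticsearch/http.p12 is the new keystore for the HTTP layer
cp elasticsearch/http.p12 /etc/elasticsearch/certs/
# kibana/elasticsearch-ca.pem is the CA in PEM form for clients such as Kibana
cp kibana/elasticsearch-ca.pem /etc/kibana/
# if the new http.p12 got a password, store it first:
#   /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
systemctl restart elasticsearch kibana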

Guess it's best to start again tomorrow and...

I rolled back all certificates from the backup and only created a new http certificate from the existing CA. All certificates now have valid dates, and the fingerprint matches as well.

Still, the system is not coming back, as the Fleet Server has a problem:

fleet-server-default
      ├─ status: (HEALTHY) Healthy: communicating with pid '3825824'
      ├─ fleet-server-default
      │  └─ status: (FAILED) Error - failed version compatibility check with elasticsearch: x509: certificate signed by unknown authority
      └─ fleet-server-default-fleet-server-fleet_server-bad6ee92-babd-4f47-a612-eb78cb0f27ea
         └─ status: (FAILED) Error - failed version compatibility check with elasticsearch: x509: certificate signed by unknown authority

I am wondering about this, as the agent had been installed with --insecure?

It seems like that setting got lost.

Well - I tested some of the xpack config settings in elasticsearch.yml.

Which one is relevant for the "agent security"?

So - finally I was able to fix it myself.

# extract the CA certificate (without the private key) from the certutil-generated CA bundle
openssl pkcs12 -in elastic-stack-ca.p12 -clcerts -nokeys -out elastic-cert.crt
# add it to the host's trusted CA store and rebuild the bundle
cp elastic-cert.crt /usr/local/share/ca-certificates
update-ca-certificates
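If the running agent does not pick up the refreshed trust store on its own, a restart of the service should do it (a guess at the usual systemd unit name):

systemctl restart elastic-agent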

Now agent and Fleet look much better:

 /opt/Elastic/Agent/elastic-agent status
┌─ fleet
│  └─ status: (HEALTHY) Connected
└─ elastic-agent
   └─ status: (HEALTHY) Running

Thanks to everyone who tried to help.

ADDENDUM

Just to complete the solution: I still had to make the certificate known to the agents, as my Elastic Defend integration did not work. So I exported the fingerprint and published it with a setting in kibana.yml. In the end, I did not even have to add the cert to the local certificate store.

# print the SHA-256 fingerprint of the CA certificate
openssl x509 -in elastic-cert.crt -noout -fingerprint -sha256
# kibana.yml: preconfigure the default Fleet output and pin the CA by its fingerprint (the hex digest from above, without the colons)
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://192.168.0.97:9200'], ca_trusted_fingerprint: cc5f74xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxbcfa49}]
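One follow-up note: since xpack.fleet.outputs is a kibana.yml setting, Kibana needs a restart to apply the preconfigured output:

systemctl restart kibana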