Connection between Elastic Agent (on local machine) and Elasticsearch (on GCP)

I am working on a study project to collect logs from many machines. I have two VMs on GCP: one is an Elasticsearch/Kibana server, the other is a Fleet Server. Both are in the same VPC. I also have one Windows Server VM on my local machine. Elastic Agents are installed on the Windows Server and on the Fleet Server. The GCP VMs don't manage their own external IPs; they are bound to their internal IPs (running 'ip a' doesn't show the external IP). The external IP is managed by GCP (that is what I learned).

The problem is: I have to use the Elasticsearch internal IP in configuration files. For example, in elasticsearch.yml, network.host is set to 10.128.0.3 (the internal IP). When setting up the Fleet Server and adding Elastic Agents to other VMs, the data output points at the internal IP of Elasticsearch. For the Fleet Server that is fine, because it is within the same VPC as the Elasticsearch server, but the Windows Server VM is not.
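For reference, the relevant part of my config (a minimal sketch):

# elasticsearch.yml on the GCP VM
network.host: 10.128.0.3   # internal IP; the external IP never appears on the VM itself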

What I have tried: I changed the data output to use the public IP of the Elasticsearch server, but the SSL cert doesn't include this IP, causing the agents to refuse to send data. I was thinking of creating a cert that includes this IP, but it seemed like too much work.
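The IPs a cert actually covers can be checked from any machine with openssl, for example against my server's address:

openssl s_client -connect 34.172.16.38:9200 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"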

Hi @Nghia_Vu, welcome to the community!

Well that is the correct approach :slight_smile:

Since this is just a study project, you could enroll the Agent with the --insecure flag so certificate validation is not performed. Perhaps try that.

--insecure

Allow the Elastic Agent to connect to Fleet Server over insecure connections. This setting is required in the following situations:

  • When connecting to an HTTP server. The API keys are sent in clear text.
  • When connecting to an HTTPS server and the certificate chain cannot be verified. The content is encrypted, but the certificate is not verified.
  • When using self-signed certificates generated by Elastic Agent.

We strongly recommend that you use a secure connection.
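A minimal enrollment along those lines would look something like this (a sketch with placeholder host and token; on Windows, run .\elastic-agent.exe install from an elevated PowerShell instead):

sudo ./elastic-agent install \
  --url=https://<fleet-server-host>:8220 \
  --enrollment-token=<enrollment-token> \
  --insecure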

You could also look through this thread.

So the steps to make this work are:

  • step 1: install Elasticsearch and Kibana as normal
  • step 2: generate a token using the elasticsearch-create-enrollment-token binary, then set up Elastic via the Kibana UI (see the sketch after this list)
  • step 3: generate a new certificate that includes both the internal and external IPs of Elasticsearch (using the elasticsearch-certutil binary)
  • step 4: modify the elasticsearch/kibana configuration YAML files to use the new certificate
  • step 5: add the Fleet Server and install the Elastic Agent on the Windows machine (the enrollment should use the --insecure flag)
  • step 6: modify the kibana config file under the output section to use the Elasticsearch external IP
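For step 2 I mean something like this (run from the Elasticsearch install directory):

bin/elasticsearch-create-enrollment-token -s kibana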

Is this correct?

If possible, please help me with generating a new cert that includes the external IP, and with where to make changes to use this new cert.

Hi @Nghia_Vu

There is a good example here: you use an instances.yml file and put what you need in there to generate the certs.
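Roughly like this, as a sketch using the two IPs from your posts (it assumes you already created a PEM CA with bin/elasticsearch-certutil ca --pem and unzipped it):

# instances.yml -- list every name/IP the cert must cover
instances:
  - name: "elasticsearch"
    ip:
      - "10.128.0.3"     # internal IP
      - "34.172.16.38"   # external IP

# generate PEM certs for those instances, signed by the existing CA
bin/elasticsearch-certutil cert --ca-cert ca/ca.crt --ca-key ca/ca.key \
  --in instances.yml --out certs.zip --pem

Then point elasticsearch.yml (and the Kibana config) at the new cert and key from certs.zip.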

I have managed to set up both Elasticsearch and Kibana with self-signed certs by following these:
Different CA | Elastic Docs
elasticsearch-certutil | Reference

With Elasticsearch/Kibana up and running, I then created a new cert for the Fleet Server. I transferred the fleet cert and the CA cert to the Fleet Server VM, and added/enrolled the Fleet Server with these commands:

curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.0.3-linux-x86_64.tar.gz
tar xzvf elastic-agent-9.0.3-linux-x86_64.tar.gz
cd elastic-agent-9.0.3-linux-x86_64
sudo ./elastic-agent install --url=https://34.55.72.62:8220 \
  --fleet-server-es=https://34.172.16.38:9200 \
  --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE3NTI4NjM3ODIwMDQ6YkVLak9oNEVSZkdYVlBWbFlKMnoyQQ \
  --fleet-server-policy=fleet-server-policy \
  --certificate-authorities=/etc/myfleet/certs/ca/ca.crt \
  --fleet-server-es-ca=/etc/myfleet/certs/ca/ca.crt \
  --fleet-server-cert=/etc/myfleet/certs/fleet/fleet.crt \
  --fleet-server-cert-key=/etc/myfleet/certs/fleet/fleet.key \
  --fleet-server-port=8220 \
  --install-servers \
  --insecure

The Fleet Server is now successfully enrolled, and the agent on the Fleet Server is also running healthily.
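Health can be double-checked locally on the Fleet Server VM with, for example:

sudo elastic-agent status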


However, no data was sent to Elasticsearch. Checking the logs of the Agent on the Fleet Server shows these messages:

{"log.level":"error","@timestamp":"2025-07-19T07:36:09.064Z","message":"Failed to connect to backoff(elasticsearch(https://34.172.16.38:9200)): Get \"https://34.172.16.38:9200\": x509: certificate signed by unknown authority","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"http/metrics-monitoring","type":"http/metrics"},"log":{"source":"http/metrics-monitoring"},"ecs.version":"1.6.0","log.logger":"publisher_pipeline_output","log.origin":{"file.line":149,"file.name":"pipeline/client_worker.go","function":"github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*netClientWorker).run"},"service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2025-07-19T07:36:09.064Z","message":"Attempting to reconnect to backoff(elasticsearch(https://34.172.16.38:9200)) with 894 reconnect attempt(s)","component":{"binary":"metricbeat","dataset":"elastic_agent.metricbeat","id":"http/metrics-monitoring","type":"http/metrics"},"log":{"source":"http/metrics-monitoring"},"service.name":"metricbeat","ecs.version":"1.6.0","log.logger":"publisher_pipeline_output","log.origin":{"file.line":140,"file.name":"pipeline/client_worker.go","function":"github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*netClientWorker).run"},"ecs.version":"1.6.0"}

Why does it still say 'certificate signed by unknown authority', even though I included the --insecure flag in the agent installation?

The --insecure flag only applies to the connection with the Fleet Server. Elastic Agent has two communication flows: one to the Fleet Server, to report health and get configuration changes, and one to the configured output, in this case Elasticsearch, to send the data.

In this case you will need to edit the output in Fleet Settings and add the certificate authority, so the agents can trust the certificate.
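For example, under Fleet → Settings → Outputs, edit the Elasticsearch output and add something like this to the advanced YAML configuration (path taken from your Fleet Server command above; the file must exist at that path on every Agent host, so on the Windows VM you would copy the CA to a local path and use that instead):

ssl.certificate_authorities: ["/etc/myfleet/certs/ca/ca.crt"]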

Depending on the goals of your research project, you might want to check out Elasticsearch Serverless. With the Security solution in Serverless we charge per GB of data ingested and retained; there are no other charges, and there is a two-week trial available: Elastic Cloud Serverless Pricing for Elastic Security | Elastic

It may be far less expensive for you to run your project on Serverless than running VMs on GCP. In Serverless we manage everything for you, including Fleet, so you can just deploy Agents and begin investigating.

Something to consider :slight_smile:


I am learning cybersecurity, so I am picking up a project to get some experience. The goal of the project is to get familiar with Elastic services and with how to set up the system so that log data is collected and forwarded to Elasticsearch. Then I can analyze logs and create alerts and tickets.

I really did not expect that things would get so complicated just because GCP VMs don't manage their own external IPs.

Besides, I am using GCP free credit, so currently it doesn't cost anything, probably for the next 2 months.