Hi, I am having trouble getting my Fleet Server to communicate with my Elasticsearch cluster. My current stack has 2 EC2 instances for Elasticsearch (es01, es02), 1 EC2 instance for Kibana, and 1 EC2 instance for Fleet Server.
Here is what I did to set up my stack:
1: I installed the archive version of Elasticsearch on my EC2 instances following this guide: Install Elasticsearch from archive on Linux or MacOS | Elasticsearch Guide [8.6] | Elastic
2: I created a “certs” folder in ~/elasticsearch-8.6.2/config on the es01 EC2 instance
3: I created the instances.yml file, which looks like this:
instances:
  - name: es01
    dns:
      - es01
      - localhost
    ip:
      - 127.0.0.1
      - ec2-private-ip-address
  - name: es02
    dns:
      - es02
      - localhost
    ip:
      - 127.0.0.1
      - ec2-private-ip-address
  - name: fleet-server
    dns:
      - fleet-server
      - localhost
    ip:
      - 127.0.0.1
      - ec2-private-ip-address
      - ec2-public-ip-address
Step 4:
Then I used the commands below to create the CA certificate:
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip
unzip config/certs/ca.zip -d config/certs
Once I had ca.crt and ca.key, I created certificates for my Elasticsearch nodes and the Fleet Server with these commands:
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
I now had these files: es01.crt, es01.key, es02.crt, es02.key, fleet-server.crt, fleet-server.key
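As a sanity check on the certificates, they can be verified against the CA with openssl. Below is a self-contained sketch using a throwaway CA and leaf certificate in a temp directory (the CN values, temp paths, and SAN entries are stand-ins I made up to mirror my es01 entry in instances.yml; they are not my real files):

```shell
#!/usr/bin/env bash
set -e
tmp=$(mktemp -d)

# Throwaway CA, standing in for the output of `elasticsearch-certutil ca`.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -subj "/CN=demo-ca"

# Leaf certificate with the same kind of SANs as my es01 instances.yml entry.
cat > "$tmp/san.cnf" <<'EOF'
subjectAltName = DNS:es01, DNS:localhost, IP:127.0.0.1
EOF
openssl req -newkey rsa:2048 -nodes -keyout "$tmp/es01.key" \
  -out "$tmp/es01.csr" -subj "/CN=es01"
openssl x509 -req -in "$tmp/es01.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -extfile "$tmp/san.cnf" -out "$tmp/es01.crt"

# The two checks I would run against the real files:
# 1) the certificate chains to the CA,
verify_out=$(openssl verify -CAfile "$tmp/ca.crt" "$tmp/es01.crt")
echo "$verify_out"
# 2) the SANs cover every name/IP that clients will use to reach the node.
san_out=$(openssl x509 -in "$tmp/es01.crt" -noout -ext subjectAltName)
echo "$san_out"
```

Against the real stack, the same two commands would point at config/certs/ca/ca.crt and config/certs/es01/es01.crt; any hostname or IP that Kibana, curl, or the agents use to reach a node has to appear in that node's SAN list.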
At this point, I installed the AWS EC2 discovery plugin. I also created the IAM role it suggested and attached it to my Elasticsearch EC2 instances so they can be discovered by the plugin.
Here is my elasticsearch.yml file for es01:
#elasticsearch.yml for es01
node.name: es01
cluster.name: docker-cluster
node.roles: [data, master]
cluster.initial_master_nodes: [es01, es02]
discovery.seed_providers: ec2
discovery.ec2.any_group: true
discovery.ec2.endpoint: ec2.ca-central-1.amazonaws.com
cloud.node.auto_attributes: true
bootstrap.memory_lock: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /home/ubuntu/elasticsearch-8.6.2/config/certs/es01/es01.key
xpack.security.http.ssl.certificate: /home/ubuntu/elasticsearch-8.6.2/config/certs/es01/es01.crt
xpack.security.http.ssl.certificate_authorities: [/home/ubuntu/elasticsearch-8.6.2/config/certs/ca/ca.crt]
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: /home/ubuntu/elasticsearch-8.6.2/config/certs/es01/es01.key
xpack.security.transport.ssl.certificate: /home/ubuntu/elasticsearch-8.6.2/config/certs/es01/es01.crt
xpack.security.transport.ssl.certificate_authorities: [/home/ubuntu/elasticsearch-8.6.2/config/certs/ca/ca.crt]
xpack.security.transport.ssl.verification_mode: certificate
xpack.license.self_generated.type: basic
network.host: [_ec2:privateIp_, _local_]
My elasticsearch.yml file for es02 is similar.
Step 5.
Once I had generated the certificates for the Fleet Server, es01, and es02:
I copied ca.crt, ca.key, es02.crt, and es02.key to my es02 EC2 instance.
I also copied ca.crt, ca.key, fleet-server.crt, and fleet-server.key to my Fleet Server EC2 instance.
I copied ca.crt to my Kibana EC2 instance.
Step 6.
I ran Elasticsearch on both nodes and they formed a cluster. I used this command to check the number of nodes in the cluster (it was 2):
curl -u elastic:elastic --cacert /home/ubuntu/elasticsearch-8.6.2/config/certs/ca/ca.crt -X GET https://localhost:9200/_cluster/health?pretty
I also set the password for the kibana_system user to “kibana” so Kibana could authenticate with Elasticsearch.
Step 7. I then started the local version of Kibana. Here is my kibana.yml file:
elasticsearch.hosts: https://es01-private-ip:9200 # private IP address of an Elasticsearch node
elasticsearch.username: kibana_system
elasticsearch.password: kibana
elasticsearch.ssl.certificateAuthorities: /home/ubuntu/certs/ca/ca.crt
server.host: 0.0.0.0
# APM Settings
xpack.fleet.agents.fleet_server.hosts: [https://fleet-server-private-ip:8220]
xpack.fleet.outputs:
  - id: fleet-default-output
    name: default
    type: elasticsearch
    hosts: [https://es01-private-ip:9200]
    is_default: true
    is_default_monitoring: true
xpack.fleet.packages:
  - name: fleet_server
    version: latest
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
  - name: apm
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet Server Policy
    id: fleet-server-policy
    description: Static agent policy for Fleet Server
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: fleet_server-1
        package:
          name: fleet_server
      - name: system-1
        package:
          name: system
      - name: elastic_agent-1
        package:
          name: elastic_agent
  - name: Agent Policy APM Server
    id: agent-policy-apm-server
    description: Static agent policy for the APM Server integration
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: system-1
        package:
          name: system
      - name: elastic_agent-1
        package:
          name: elastic_agent
      - name: apm-1
        package:
          name: apm
        # See the APM package manifest for a list of possible inputs.
        # https://github.com/elastic/apm-server/blob/v8.5.0/apmpackage/apm/manifest.yml#L41-L168
        inputs:
          - type: apm
            vars:
              - name: host
                value: 0.0.0.0:8200
              - name: url
                value: http://localhost:8200
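One thing I am wondering about (an untested assumption on my part, not something I have confirmed in the Fleet docs): whether the preconfigured output also needs to tell the agents which CA to trust when they ship data to Elasticsearch, along these lines:

```yaml
xpack.fleet.outputs:
  - id: fleet-default-output
    name: default
    type: elasticsearch
    hosts: [https://es01-private-ip:9200]
    is_default: true
    is_default_monitoring: true
    # Assumption: CA path as it exists on the machines running Elastic Agent
    config_yaml: |
      ssl.certificate_authorities: ["/home/ubuntu/certs/ca/ca.crt"]
```

I would be glad to hear whether this (or the ca_trusted_fingerprint setting) is the right way to wire the CA into the default output.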
Kibana is able to connect to my Elasticsearch cluster, and I can log in using the username “elastic” and the password for the “elastic” user.
Step 8.
I enrolled an Elastic Agent with the fleet-server-policy to create a Fleet Server. I used these commands, which were generated by the Kibana UI:
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.6.2-linux-x86_64.tar.gz
tar xzvf elastic-agent-8.6.2-linux-x86_64.tar.gz
cd elastic-agent-8.6.2-linux-x86_64
sudo ./elastic-agent install --url=https://fleet-server-privateIpAddress:8220 \
--fleet-server-es=https://es01-private-Ip:9200 \
--fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2NzkyMTEzNzcyODU6amx2WmFLOGhSYy1hbkkyS25wWnJidw \
--fleet-server-policy=fleet-server-policy \
--certificate-authorities=/home/ubuntu/certs/ca/ca.crt \
--fleet-server-es-ca=/home/ubuntu/certs/ca/ca.crt \
--fleet-server-cert=/home/ubuntu/certs/fleet-server/fleet-server.crt \
--fleet-server-cert-key=/home/ubuntu/certs/fleet-server/fleet-server.key
This is where I get errors from my Elasticsearch cluster. The agent enrolls successfully, but I keep getting the error message below from Elasticsearch. I believe it is the agent trying to talk to Elasticsearch and Elasticsearch rejecting the request; I don’t have any other tool trying to send data to Elasticsearch.
[2023-03-20T03:29:07,418][WARN ][o.e.h.AbstractHttpServerTransport] [es01] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/172.31.11.253:9200, remoteAddress=/172.31.7.98:52272}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at io.netty.codec@4.1.84.Final/io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489)
at io.netty.codec@4.1.84.Final/io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
at io.netty.transport@4.1.84.Final/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at io.netty.transport@4.1.84.Final/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.transport@4.1.84.Final/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
See logs for more details.
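To convince myself that this alert pattern is what a trust (CA) mismatch looks like, I reproduced it locally with throwaway openssl certificates. Everything below is made up for the demo (the port 14433, the CN values, and the temp paths are arbitrary; nothing touches my real stack):

```shell
#!/usr/bin/env bash
set -e
tmp=$(mktemp -d)

# Two unrelated throwaway CAs: "right" signs the server cert, "wrong" does not.
for ca in right wrong; do
  openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$tmp/$ca-ca.key" -out "$tmp/$ca-ca.crt" -subj "/CN=$ca-ca"
done

# Server certificate signed by the "right" CA.
openssl req -newkey rsa:2048 -nodes -keyout "$tmp/srv.key" \
  -out "$tmp/srv.csr" -subj "/CN=localhost"
openssl x509 -req -in "$tmp/srv.csr" -CA "$tmp/right-ca.crt" \
  -CAkey "$tmp/right-ca.key" -CAcreateserial -days 1 -out "$tmp/srv.crt"

# Minimal TLS server, playing the role of Elasticsearch's HTTPS endpoint.
openssl s_server -accept 14433 -cert "$tmp/srv.crt" -key "$tmp/srv.key" -quiet &
server_pid=$!
sleep 1

# Client trusting the right CA: the handshake verifies cleanly.
ok_line=$(echo Q | openssl s_client -connect 127.0.0.1:14433 \
  -CAfile "$tmp/right-ca.crt" 2>/dev/null | grep "Verify return code")
echo "$ok_line"

# Client trusting the wrong CA: verification fails, which is my guess at
# what the agent is hitting when it cannot verify es01's certificate.
bad_line=$(echo Q | openssl s_client -connect 127.0.0.1:14433 \
  -CAfile "$tmp/wrong-ca.crt" 2>/dev/null | grep "Verify return code")
echo "$bad_line"

kill "$server_pid"
```

The first connection reports a clean verify; the second reports a non-zero verify code. That matches my reading that some component on the Fleet Server box is not using (or not finding) the ca.crt that signed es01's certificate.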
I am not quite sure what to do. I feel like I am trying random things without much result. I would be very grateful with help in this matter.