I've installed ECK on a self-hosted cluster, but I can't get any data back from the agent. Below is the command I'm running in my LXC Debian container. I don't know if I'm using the right fingerprint, and I'm losing my freaking mind here. I've genuinely tried to understand this, but I'm obviously not grokking what's happening. I didn't generate any certificates because I essentially followed the quickstart. I've spent hours and hours on this and I don't even know if I'm asking the right question anymore. THANK YOU to anyone who reads this.
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-fleet-quickstart.html
./elastic-agent install --url=https://192.168.0.177:8220 --enrollment-token=QVg4M2xKQUJKTWxiMkJfcjJPTmY6Y1pvUWgwYWtUcHUxa2VzbjRSeVRzQQ== --insecure --fleet-server-es-ca-trusted-fingerprint=AAxxxxxxxxxxxxxxxxxxxxxxxxxxxxx6F
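In case it matters, here's how I understand the fingerprint flag: it should be the SHA-256 fingerprint of the Elasticsearch HTTP CA. A sketch of pulling it out of the ECK-managed secret (assuming the cluster is named `elasticsearch` in `elastic-system`, matching the kubectl output below, and that the secret follows ECK's `<name>-es-http-certs-public` naming — verify with `kubectl get secrets -n elastic-system`):

```shell
# Extract the HTTP CA certificate from the ECK-managed secret and print its
# SHA-256 fingerprint; elastic-agent wants the hex digest with colons removed.
kubectl get secret elasticsearch-es-http-certs-public -n elastic-system \
  -o go-template='{{index .data "ca.crt" | base64decode}}' \
  | openssl x509 -noout -fingerprint -sha256 \
  | cut -d= -f2 | tr -d ':'
```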
{"@timestamp":"2024-07-09T05:33:35.289Z", "log.level": "WARN", "message":"caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/10.244.2.53:9200, remoteAddress=/192.168.0.76:57042}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[elasticsearch-es-default-0][transport_worker][T#1]","log.logger":"org.elasticsearch.http.AbstractHttpServerTransport","elasticsearch.cluster.uuid":"edjV8SnJRIiZHrLlqR6yhw","elasticsearch.node.id":"a3gQbxOUTkiITDbrFiGv5w","elasticsearch.node.name":"elasticsearch-es-default-0","elasticsearch.cluster.name":"elasticsearch","error.type":"io.netty.handler.codec.DecoderException","error.message":"javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate","error.stack_trace":"io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate\n\tat io.netty.codec@4.1.107.Final/io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499)\n\tat io.netty.codec@4.1.107.Final/io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)\n\tat 
io.netty.transport@4.1.107.Final/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652)\n\tat io.netty.transport@4.1.107.Final/io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)\n\tat io.netty.common@4.1.107.Final/io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)\n\tat io.netty.common@4.1.107.Final/io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat java.base/java.lang.Thread.run(Thread.java:1570)\nCaused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate\n\tat java.base/sun.security.ssl.Alert.createSSLException(Alert.java:130)\n\tat java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)\n\tat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:365)\n\tat java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:287)\n\tat java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:204)\n\tat java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)\n\tat java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736)\n\tat java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691)\n\tat java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506)\n\tat 
java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482)\n\tat java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679)\n\tat io.netty.handler@4.1.107.Final/io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:310)\n\tat io.netty.handler@4.1.107.Final/io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1445)\n\tat io.netty.handler@4.1.107.Final/io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1338)\n\tat io.netty.handler@4.1.107.Final/io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1387)\n\tat io.netty.codec@4.1.107.Final/io.netty.handler
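The `bad_certificate` alert above is Elasticsearch (the 10.244.x pod) rejecting a TLS handshake from 192.168.0.76. One way to check which certificate chain is actually being served on the LoadBalancer endpoints — a sketch, assuming `openssl` is available in the container:

```shell
# Print the subject, issuer, and SHA-256 fingerprint of the certificate that
# Elasticsearch presents on its LoadBalancer endpoint; Fleet Server on
# 192.168.0.177:8220 can be inspected the same way.
openssl s_client -connect 192.168.0.178:9200 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -fingerprint -sha256
```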
kubectl get all -n elastic-system
NAME                                      READY   STATUS    RESTARTS        AGE
pod/dnsutils                              1/1     Running   0               3d23h
pod/elastic-agent-agent-f826h             1/1     Running   0               9h
pod/elastic-agent-agent-gnzn9             1/1     Running   0               9h
pod/elastic-agent-agent-qkjcr             1/1     Running   0               9h
pod/elastic-operator-0                    1/1     Running   2 (5d22h ago)   16d
pod/elasticsearch-es-default-0            1/1     Running   0               9h
pod/elasticsearch-es-default-1            1/1     Running   0               9h
pod/elasticsearch-es-default-2            1/1     Running   0               9h
pod/fleet-server-agent-65f89468dc-t6p8b   1/1     Running   0               8h
pod/kibana-kb-5496499b58-4chtw            1/1     Running   0               36m

NAME                                     TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
service/elastic-webhook-server           ClusterIP      10.101.125.225   <none>          443/TCP          16d
service/elasticsearch-es-default         ClusterIP      None             <none>          9200/TCP         6d22h
service/elasticsearch-es-http            LoadBalancer   10.111.75.161    192.168.0.178   9200:30998/TCP   11h
service/elasticsearch-es-internal-http   ClusterIP      10.109.220.93    <none>          9200/TCP         6d22h
service/elasticsearch-es-transport       ClusterIP      None             <none>          9300/TCP         6d22h
service/fleet-server-agent-http          LoadBalancer   10.97.154.32     192.168.0.177   8220:31194/TCP   22h
service/kibana-kb-http                   LoadBalancer   10.96.88.71      192.168.0.176   5601:30842/TCP   6d22h

NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/elastic-agent-agent    3         3         3       3            3           <none>          6d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/fleet-server-agent    1/1     1            1           6d22h
deployment.apps/kibana-kb             1/1     1            1           6d22h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/fleet-server-agent-5dbd7b7f8d   0         0         0       6d22h
replicaset.apps/fleet-server-agent-65f89468dc   1         1         1       8h
replicaset.apps/fleet-server-agent-75fcbb8c4c   0         0         0       3d23h
replicaset.apps/fleet-server-agent-86849cc5ff   0         0         0       22h
replicaset.apps/kibana-kb-5496499b58            1         1         1       36m
replicaset.apps/kibana-kb-5977cb9678            0         0         0       9h
replicaset.apps/kibana-kb-5f9dbb76b             0         0         0       6d21h
replicaset.apps/kibana-kb-778986d7dd            0         0         0       3d23h
replicaset.apps/kibana-kb-966f4cc79             0         0         0       6d22h
replicaset.apps/kibana-kb-c5b96c647             0         0         0       9h
replicaset.apps/kibana-kb-f778fb866             0         0         0       7h30m

NAME                                        READY   AGE
statefulset.apps/elastic-operator           1/1     16d
statefulset.apps/elasticsearch-es-default   3/3     6d22h
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic-system
spec:
  version: 8.14.2
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      spec:
        type: LoadBalancer
  config:
    xpack.fleet.agents.elasticsearch.hosts: ["https://192.168.0.178:9200"]
    xpack.fleet.agents.fleet_server.hosts: ["https://192.168.0.177:8220"]
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
      - name: kubernetes
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        # namespace: elastic-system
        monitoring_enabled:
          - logs
          - metrics
        unenroll_timeout: 900
        package_policies:
          - name: fleet_server-1
            id: fleet_server-1
            package:
              name: fleet_server
      - name: Elastic Agent on ECK policy
        id: eck-agent
        # namespace: elastic-system
        monitoring_enabled:
          - logs
          - metrics
        unenroll_timeout: 900
        package_policies:
          - name: system-1
            id: system-1
            package:
              name: system
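One thing I'm unsure about is whether the CA fingerprint also needs to be pinned on the Kibana side. If I'm reading Kibana's Fleet settings docs right, there's an `xpack.fleet.agents.elasticsearch.ca_trusted_fingerprint` setting; a sketch of adding it under the `config:` section of the manifest above (the value is a placeholder, not a real digest):

```yaml
config:
  xpack.fleet.agents.elasticsearch.hosts: ["https://192.168.0.178:9200"]
  # Pin the self-signed HTTP CA by its SHA-256 fingerprint so agents enrolled
  # via Fleet trust it (placeholder value — substitute the real hex digest)
  xpack.fleet.agents.elasticsearch.ca_trusted_fingerprint: "<sha256-hex-of-ca>"
```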