I am adding Fleet-managed agents, but they are not sending data due to an incorrect Elastic Output Host. I cannot change this setting in the Fleet UI, since it says "This output is managed outside of Fleet".
I tried changing it outside of Fleet by editing elasticsearch.hosts etc. in kibana.yml, but this just results in the Kibana UI stating that it's not ready.
I tried changing network.host in elasticsearch.yml, but there is still no change to the Fleet Output Host setting. Elastic Agent is still trying to post data to the incorrect (private) IP of the Elasticsearch/Kibana machine, i.e. (from the remote agent):
Failed to connect to backoff(Elasticsearch(https://172.23.80.1:9200)): Get "https://172.23.80.1:9200": dial tcp 172.23.80.1:9200: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.","service.name":"metricbeat","ecs.version":"1.6.0"}
That IP is on a private network; it should be listening on the other interface's IP, 192.168.x.x.
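For reference, this is roughly what I'd tried so far, with 192.168.x.x standing in as a placeholder for my LAN interface IP:

```yaml
# elasticsearch.yml -- attempted change (placeholder IP)
network.host: 192.168.x.x

# kibana.yml -- attempted change (placeholder IP)
elasticsearch.hosts: ["https://192.168.x.x:9200"]
```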
Nice, thanks for the info. I've set xpack.fleet.outputs, and after a restart of Kibana I can see it showing under Output host. I can also see the updated outputs.defaults.hosts setting in state.yml on the remote agent in question, so that's all good.
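In case it helps anyone else, this is roughly the shape of what went into kibana.yml — the id and name are just labels I chose, and the hostname is from my environment, so adjust to suit:

```yaml
# kibana.yml -- preconfigured Fleet output (a sketch; values are mine, not canonical)
xpack.fleet.outputs:
  - id: fleet-default-output
    name: default
    type: elasticsearch
    hosts: ["https://dev.home.local:9200"]
    is_default: true
    is_default_monitoring: true
```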
Now that I have changed the Output host URI, I'll need to create a new certificate on the Kibana instance, as the agent is putting this in its log:
{"log.level":"error","@timestamp":"2022-05-20T11:34:23.006Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(elasticsearch(https://dev.home.local:9200)): Get \"https://dev.home.local:9200\": x509: certificate is valid for localhost, DEV, not dev.home.local","service.name":"metricbeat","ecs.version":"1.6.0"}
I initially set up with the generated certificates rather than my own certs. I guess I should pull my finger out and generate my own.
Incidentally, where can I find xpack.fleet.outputs in the docs? I'm sure I searched everywhere.
I'm not sure I found that page while I was searching. I found the Kibana and Fleet settings pages, but didn't realise the xpack.fleet settings were there. Thanks!
So they are Preconfiguration Settings. Does that mean that if I had set xpack.fleet.outputs before running Kibana for the first time, it would have picked up the hostname from there, used it as the default agent output, and created the certificates with the appropriate hostname?
I don't believe we use the hostname for generating certificates, as that step is done by Elasticsearch. The flow today is:
1. Start up a new ES node; certificates are generated.
2. Paste the enrollment token into Kibana's web UI or init script.
3. Kibana auto-configures itself and Fleet using the provided enrollment token and the certs generated by ES.
Changing the xpack.fleet.outputs settings after initial setup will update the Fleet settings, but it will not regenerate the certificates created by ES. To fix this, you need to revisit the ES documentation on generating certificates for your hostname, and then update the xpack.fleet settings (or use the web UI).
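As a rough sketch of that certificate step — assuming you have the CA available as a PKCS#12 file; the paths are placeholders and the hostname is the one from your logs:

```sh
# On the ES host: issue a new certificate signed by your CA,
# with a SAN matching the hostname the agents connect to
bin/elasticsearch-certutil cert \
  --ca /path/to/elastic-stack-ca.p12 \
  --dns dev.home.local,localhost \
  --out /path/to/dev-http.p12
```

If agents connect by IP rather than hostname, --ip can be added as well.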
I initially changed both elasticsearch.hosts and the xpack.fleet.outputs string in kibana.yml, but that just resulted in "Kibana server is not ready yet.". Updating just the xpack.fleet.outputs string did the trick, although the remote agent (I had only one) reported certificate issues and stopped reporting. Following the steps above and creating a new cert should sort that.
However, I hadn't set up the Windows service yet, and I rebooted the VM (after a RAM and core increase), and now it won't start, with Kibana reporting it's not ready. ES is logging this Java error:
[2022-05-21T23:14:21,210][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [DEV] Authentication using apikey failed - apikey authentication for id V95F4YABbc8FFwQww42b encountered a failure
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.security][V95F4YABbc8FFwQww42b]: routing [null]]
And then again for a 2nd apikey authentication id, right before updating the GeoIP database. The last log line says "current.health changed from Red to Yellow reason: shards started". This happened after also installing Docker to run Linux images, but I don't see there being a correlation. Could this be the warning from ES about it now being in production mode after changing the host?
Incidentally, Kibana takes just over 5 minutes to show any log output after starting the Node server; I'd say that's because ES is logging those errors.
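In case it's useful, a quick sanity check on whether the .security shards have actually recovered (standard ES APIs, run from Kibana Dev Tools or via curl):

```
GET _cluster/health
GET _cat/shards/.security*?v
```

Given the "Red to Yellow ... shards started" line, those apikey failures look like they happened while the .security index shards were still unassigned during startup.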