I am automating the deployment of an ELK stack on Docker and am having issues getting the Elasticsearch logs to appear in Kibana monitoring.
I was having an issue where using "monitoring.*" settings would create a "Standalone Cluster" entry in the Monitoring UI. Following the suggestions in "Filebeat creates a 'Standalone Cluster' in Kibana Monitoring", I changed my config to use "xpack.monitoring.*", and the Filebeat instance now appears in the UI, but it is not sending logs.
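The monitoring section of my filebeat.yml now looks roughly like this (hosts and credentials here are placeholders, not my real values):

```yaml
# filebeat.yml - legacy-style monitoring settings (placeholders for hosts/credentials)
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch:
  hosts: ["http://es01:9200"]
  username: "beats_system"
  password: "changeme"
```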
I have two questions:
The pull request notes that the "monitoring.*" settings should have been fixed in 7.5.0, but we still need to use "xpack.monitoring.*". Is this still expected?
In addition to configuring autodiscover in the config (see below) and adding the "co.elastic.logs/enabled" label to my containers, is there any additional configuration required to get the Elasticsearch logs to appear in the Monitoring UI?
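The autodiscover part of the config is along these lines (simplified sketch; the label is set on each of my stack containers):

```yaml
# filebeat.yml - hints-based autodiscover, only collecting from opted-in containers
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      hints.default_config.enabled: false

# docker-compose.yml - label added to each ELK stack container (illustrative)
# labels:
#   co.elastic.logs/enabled: "true"
```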
You can't use both. xpack.monitoring.* settings are used when sending monitoring data through the production cluster, which forwards it along to the monitoring cluster. In this setup, the production ES cluster needs to have monitoring exporters configured so it knows where to send the monitoring data. The production cluster does a little more than just forward the data - it actually appends some data, including its own cluster_uuid. This enables the Stack Monitoring UI to know which cluster to associate this monitoring data with.
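As a rough sketch of that exporter setup (the exporter name and host are illustrative; if there is no separate monitoring cluster, the default local exporter is used instead):

```yaml
# elasticsearch.yml on the production cluster - collect and export monitoring data
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters:
  my_monitoring_cluster:          # illustrative exporter id
    type: http
    host: ["http://monitoring-es:9200"]
```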
monitoring.* settings are used when sending monitoring data to the monitoring cluster directly. However, because the data is not routed through the production cluster, the beat does not know which cluster to associate this monitoring data with. There are two ways in which the beat can know which cluster_uuid to use:
It uses the one from the monitoring.cluster_uuid
It uses the one from the configured output
If monitoring.cluster_uuid is specified, it doesn't matter which output is configured - it will always use monitoring.cluster_uuid.
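A rough sketch of the direct setup, with placeholder hosts and UUID (the cluster_uuid is the one the production cluster reports, e.g. in the response to GET / against Elasticsearch):

```yaml
# filebeat.yml - ship monitoring data straight to the monitoring cluster
monitoring.enabled: true
monitoring.cluster_uuid: "PRODUCTION_CLUSTER_UUID"   # ties this beat to the production cluster in the UI
monitoring.elasticsearch:
  hosts: ["http://monitoring-es:9200"]
```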
My best guess from your config is that you are using unstructured logs.
Thank you for the clarification on xpack.monitoring vs monitoring.
This specific Filebeat instance is just meant to capture my ELK stack's own Docker container logs on a single ELK container host. That host runs an Elasticsearch x3 cluster, Logstash x1, Kibana x1, and Filebeat x1.
The only JSON files I see in the traditional Docker log location '/var/lib/docker/containers/' are 'config.v2.json' and 'hostconfig.json' for each container.
Do the official images intentionally not log to this location (using the default json-file driver)? Running 'docker inspect <container>', I see that no LogPath is set.
Would I be able to use path.logs to set a path to a log directory?
I tried to grab the logs from stdout/stderr using something like this:
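(Roughly the standard container input pointed at the default Docker JSON log path; this is a simplified sketch, not my exact config.)

```yaml
# filebeat.yml - read container stdout/stderr from the Docker JSON log files
filebeat.inputs:
  - type: container
    paths:
      - '/var/lib/docker/containers/*/*.log'
```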