7.5.1 Filebeat container still creating "Standalone Cluster"

I am automating the deployment of an ELK stack on Docker and having issues getting the Elasticsearch logs to appear in Kibana Monitoring.

I was initially having an issue where using "monitoring.*" settings would create a "Standalone Cluster" in the Monitoring UI. Following the suggestions in Filebeat creates a "Standalone Cluster" in Kibana Monitoring, I changed my config to use "xpack.monitoring.*", and the Filebeat beat now appears in the UI, but it is not sending logs.

I have two questions:

  1. The pull request notes that the "monitoring.*" format should have been fixed in 7.5.0, but we still need to use "xpack.monitoring". Is this still expected?

  2. In addition to configuring autodiscover in the config (see below) and adding the "co.elastic.logs/enabled" label to my containers (a compose excerpt follows the config), is there additional configuration required to get the Elasticsearch logs to appear in the Monitoring UI?


    filebeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false

    filebeat.autodiscover:
      providers:
        - type: docker
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/lib/docker/containers/*/*.log

    processors:
      - add_docker_metadata: ~

    output.elasticsearch:
      hosts: [ "http://elasticsearch_1:9200" ]
      username: beats_system
      password: "{{ vault_es_password_beats_system }}"

    setup.kibana:
      host: [ "http://kibana:5601" ]
      username: elastic
      password: "{{ vault_es_password_elastic }}"

    xpack.monitoring:
      enabled: true
      elasticsearch:
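
For reference, the hint label is applied to the stack's containers roughly like this (the service name and image tag here are just illustrative, not my exact compose file):

    # Illustrative docker-compose excerpt: the service name and image tag
    # are placeholders; the point is the co.elastic.logs/enabled label.
    services:
      elasticsearch_1:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
        labels:
          co.elastic.logs/enabled: "true"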

You can't use both. xpack.monitoring.* settings are used when sending monitoring data through the production cluster, which forwards it along to the monitoring cluster. In this setup, the production ES cluster needs to have monitoring exporters configured so it knows where to send the monitoring data. The production cluster does a little more than just forward the data - it actually appends some data, including its own cluster_uuid. This enables the Stack Monitoring UI to know which cluster to associate this monitoring data with.
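
As a rough sketch of that setup (the exporter name and host below are placeholders, not taken from your environment), the production cluster's elasticsearch.yml would carry something like:

    # Sketch: production cluster elasticsearch.yml when monitoring data is
    # routed through it to a separate monitoring cluster. The exporter name
    # and host are placeholders.
    xpack.monitoring.collection.enabled: true
    xpack.monitoring.exporters:
      to_monitoring:
        type: http
        host: [ "http://monitoring-es:9200" ]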

monitoring.* settings are used when sending monitoring data to the monitoring cluster directly. However, because it's not routing through the production cluster, it does not know which cluster to associate with this monitoring data. There are two ways in which the beat can know which cluster_uuid to use:

  1. It uses the one from the monitoring.cluster_uuid
  2. It uses the one from the configured output

If monitoring.cluster_uuid is specified, it doesn't matter which output is configured - it will always use monitoring.cluster_uuid.
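
A minimal sketch of that direct variant, assuming a separate monitoring cluster (the host, credentials, and UUID value are placeholders):

    # Sketch: ship Filebeat's monitoring data directly to a monitoring
    # cluster. cluster_uuid is the production cluster's UUID; the value,
    # host, and credentials here are placeholders.
    monitoring:
      enabled: true
      cluster_uuid: "PRODUCTION-CLUSTER-UUID"
      elasticsearch:
        hosts: [ "http://monitoring-es:9200" ]
        username: beats_system
        password: "${BEATS_SYSTEM_PASSWORD}"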

My best guess from your config is that you are picking up unstructured logs:

You need to use the .json logs for the logs to show up in the Stack Monitoring UI. We actually just merged in code that will tell you about this problem -> [Monitoring] Logs UI in stack monitoring doesn't handle unstructured logs · Issue #53298 · elastic/kibana · GitHub
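
As a sketch of that direction, the elasticsearch module can be pointed at the structured logs; the path below assumes the Elasticsearch log directory is reachable inside the Filebeat container at /usr/share/elasticsearch/logs, which is an assumption on my part rather than something from your config:

    # Sketch: modules.d/elasticsearch.yml reading the structured JSON
    # server logs from a mounted Elasticsearch log directory.
    - module: elasticsearch
      server:
        enabled: true
        var.paths:
          - /usr/share/elasticsearch/logs/*_server.json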

Hi Chris,

Thank you for the clarification on xpack.monitoring vs monitoring.

This specific Filebeat instance is just meant to capture my ELK stack's own Docker container logs on a single ELK container host. This host runs an Elasticsearch x3 cluster, Logstash x1, Kibana x1, and Filebeat x1.

The only JSON files I see in the traditional Docker log location '/var/lib/docker/containers/' are 'config.v2.json' and 'hostconfig.json' for each container.

Do the official images deliberately not log (using the default json-file driver) to this location? Running 'docker inspect <container>', I see that no LogPath is set.

Would I be able to use path.logs to set a path to a log directory?
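
For example, would something along these lines work, sharing the Elasticsearch log directory with the Filebeat container? The volume layout below is just my guess, not something I have running:

    # Guess at a compose excerpt: share the ES log directory with Filebeat
    # so it can read the *_server.json files.
    services:
      elasticsearch_1:
        volumes:
          - es_logs:/usr/share/elasticsearch/logs
      filebeat:
        volumes:
          - es_logs:/usr/share/elasticsearch/logs:ro
    volumes:
      es_logs: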

I tried to grab the logs from stdout/stderr using something like:

    filebeat.inputs:
      - type: container
        stream: all

But I am unsure what the path should be.
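
i.e. something along these lines, where the path is only my guess at the default json-file driver location:

    filebeat.inputs:
      - type: container
        stream: all
        paths:
          - /var/lib/docker/containers/*/*.log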
