How to collect containerized Elasticsearch logs with Filebeat

Hey, I am setting up an observability use case to test with Docker, and I want to collect Elasticsearch logs (gc, audit, etc.) using Filebeat.

I have Elasticsearch running in a Docker container and Filebeat running in another container. What configuration do I need to collect the logs?

The Collecting Elasticsearch log data with Filebeat doc says that I have to install Filebeat on the same host or VM where Elasticsearch is running, but I am in a Docker context: should I build my own Dockerfile that runs Elasticsearch and Filebeat in the same container? I can't find any related information in the official documentation; I found some webinars, but they don't cover the steps.

One more thing: the following lines in filebeat.yml:

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

retrieve whatever is printed to the console when I run docker-compose up, and in the Logs app I have this:

How can I replace unknown with a proper name like docker-logging?

Thanks!

Hi @marone !

You do not need a custom Dockerfile; running Filebeat as a separate container should be enough.

I have Elasticsearch running in a Docker container and Filebeat running in another container. What configuration do I need to collect the logs?

Did you check this doc: Run Filebeat on Docker | Filebeat Reference [8.11] | Elastic? It seems quite close to what you are trying to achieve. Did you add the needed labels to the elasticsearch container to be able to use hints-based autodiscover (Hints based autodiscover | Filebeat Reference [8.11] | Elastic)?
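
For example, something like this on the elasticsearch service is usually enough for the hints to kick in (just a sketch; pick the filesets you actually need):

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    labels:
      # hints read by the Filebeat docker autodiscover provider
      co.elastic.logs/module: elasticsearch
      co.elastic.logs/fileset: server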

Thank you @Tetiana_Kravchenko for answering. To be honest, I had already seen the labels for autodiscover but didn't understand how to make them work. Here is what I did based on the doc:

  • I added labels to the Elasticsearch service in docker-compose.yml:
# ...
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    deploy:
      labels:
        co.elastic.logs/module: elasticsearch
        co.elastic.logs/fileset.stdout: access
        co.elastic.logs/fileset.stderr: error
    environment:
    - bootstrap.memory_lock=true
    - cluster.name=docker-cluster
    - cluster.routing.allocation.disk.threshold_enabled=false
    - discovery.type=single-node
    - ES_JAVA_OPTS=-XX:UseAVX=2 -Xms1g -Xmx1g
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
    - esdata:/usr/share/elasticsearch/data
    ports:
    - 9200:9200
    networks:
    - elastic
    healthcheck:
      interval: 20s
      retries: 10
      test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"'

  filebeat:
      container_name: filebeat
      hostname: "filebeat"
      image: docker.elastic.co/beats/filebeat:7.15.2
      user: root
      volumes:
        - /var/lib/docker/containers:/var/lib/docker/containers:ro
        - /var/run/docker.sock:/var/run/docker.sock:ro
        - ./config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      command: ["--strict.perms=false", "-system.hostfs=/hostfs"]
      networks:
        - elastic
      depends_on:
        - elasticsearch
        - kibana
      restart: always

# ...

and in filebeat.yml:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false


filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log  # CRI path

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'

setup.dashboards.enabled: true

setup.kibana:
  host: kibana:5601

In the APM UI I didn't get any logs :confused:

What's wrong with the config, please?

EDIT

I forgot to mention that I am using Docker Desktop on Windows 10. I also enabled logging for Filebeat and see the following error: ERROR metrics/metrics.go:297 cgroups data collection disabled: error finding subsystems: cgroups not found or unsupported by os

Hi @marone !

Sorry for the late reply and thank you for the detailed explanation!

Did you check whether the logs are actually ingested? You can check Discover in Kibana; you may also need to change the index pattern to the one the logs are ingested into (for example filebeat-*).

At first glance, I think you are using the wrong hints.default_config.paths: /var/log/containers/*-${data.container.id}.log is mainly used in Kubernetes environments. Could you try /var/lib/docker/containers/*/*.log?

Thank you a lot @Tetiana_Kravchenko, you saved me a lot of time. I made the changes you suggested, and here is the resulting filebeat.yml for people who find this discussion in the future:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/lib/docker/containers/*/*.log  # Docker container log path

# processors:
# - add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'

logging.level: error
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0640

setup.dashboards.enabled: true

setup.kibana:
  host: kibana:5601

One last question: in the Autodiscover doc, the hints.default_config.paths example for docker is the same as the Kubernetes one :grinning_face_with_smiling_eyes: I guess it's a typo, isn't it? Please confirm so I can open a pull request to fix the doc.

I think it is a typo; for docker, /var/lib/docker/containers/*/*.log should be used. And if I am not mistaken, /var/lib/docker/containers/*/*.log is the default value, so

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

might work without the default config defined.

One more thing to mention: even with the new configuration, I get logs, but they are not recognized as Elasticsearch logs; I still see 'unknown' (see image below). How can I add that information so that, for example, instead of unknown I see elasticsearch-gc logs?
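
In case it helps future readers, two things I plan to try next (not verified yet): moving the hint labels out of deploy: (as far as I know, plain docker-compose up ignores the deploy: section, so those labels may never reach the container) and pointing the fileset hint at one of the elasticsearch module's own filesets (server, gc, audit, ...), roughly like this:

  elasticsearch:
    labels:                                   # service-level labels, not deploy.labels
      co.elastic.logs/module: elasticsearch
      co.elastic.logs/fileset: server         # or gc / audit, depending on the log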
