How to resolve internal errors in the Metricbeat Docker container?

Hello,

I am new to this forum but I am glad it exists.

I recently started using the ELK stack and am still new to it. I am currently trying to set up a docker-compose file with my microservices plus Elasticsearch, Kibana, and Metricbeat. Eventually I will add Logstash and Filebeat, but for now I want to get Metricbeat working correctly first.

So far I have set up Metricbeat, and it is able to pick up some metrics about my containers and send them to Elasticsearch. The problem is that I see multiple errors in the Metricbeat Docker container's logs; I looked for a solution but could not find anything.

So first, here is my docker-compose file:

version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.3
    container_name: elasticsearch
    ports:
      - 9200:9200
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    networks:
      - elk-stack

  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.3
    container_name: kibana
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      - elk-stack
      
  metricbeat:
    image: docker.elastic.co/beats/metricbeat:7.16.3
    container_name: metricbeat
    user: root
    volumes:
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - "/proc:/hostfs/proc:ro"
      - "/:/hostfs:ro"
      - ./metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
    command: metricbeat -e -system.hostfs=/hostfs -strict.perms=false
    networks:
      - elk-stack

  acs:
    image: image
    container_name: acs
    labels:
      - "co.elastic.metrics/module=acs"
      - "co.elastic.metrics/beat=metricbeat"
    ports:
      - "5006:5006"
    environment:
      ACS_PORT: 5006
    networks:
      - elk-stack

networks:
  elk-stack:
    driver: bridge
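
For context, this is how I bring the stack up and follow the Metricbeat logs (just the standard Docker commands, run from the directory that contains the compose file):

docker-compose up -d
docker logs -f metricbeat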

Here is my metricbeat.yml configuration:

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Reload module configs as they change:
  reload.enabled: false

metricbeat.modules:
- module: system
  metricsets:
    - cpu             # CPU usage
    - load            # CPU load averages
    - memory          # Memory usage
    - network         # Network IO
    - process         # Per process metrics
    - uptime          # System Uptime
    - socket_summary  # Socket summary
    - core           # Per CPU core usage
    - diskio         # Disk IO
    - filesystem     # File system usage for each mountpoint
  enabled: true
  period: 10s
  processes: ['.*']
  cpu.metrics:  ["percentages","normalized_percentages"]  # The other available option is ticks.
  core.metrics: ["percentages"]

- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "event"
    - "healthcheck"
    - "info"
    - "image"
    - "memory"
    - "network"
    - "network_summary"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

monitoring.enabled: false

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "metricb-%{+YYYY.MM.dd}"
  # username: "elastic"
  # password: "changeme"

setup.template.name: metricb
setup.template.pattern: metricb-*
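
To confirm that data is actually arriving, I check the indices from the host (Elasticsearch is exposed on localhost:9200 in the compose file above, and security is not enabled in this setup):

curl "http://localhost:9200/_cat/indices/metricb-*?v"

The metricb-* index does show up and the document count keeps growing, so ingestion itself seems to work.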

Here are the errors I see in the Metricbeat Docker container's logs:

2023-03-14 17:03:17 2023-03-14T16:03:17.300Z    ERROR   module/wrapper.go:266   Error fetching data for metricset docker.network_summary: error fetching namespace for PID 12612: error reading network namespace link: readlink /proc/12612/ns/net: no such file or directory

2023-03-14 17:03:06 2023-03-14T16:03:06.849Z    ERROR   metrics/metrics.go:304  error determining cgroups version: error reading /proc/12928/cgroup: open /proc/12928/cgroup: no such file or directory

As I said, the metrics are being pushed to the local Elasticsearch instance, but these errors keep showing up nevertheless.
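
In case it helps to narrow things down, the only workaround I have thought of so far is to drop the network_summary metricset from the docker module (that is the metricset the first error points at), roughly like this, though I have not verified whether this actually fixes anything or just hides the problem:

- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "event"
    - "healthcheck"
    - "info"
    - "image"
    - "memory"
    - "network"
    # - "network_summary"   # commented out while testing, since the first error refers to it
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true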

Thank you in advance.