Is metricbeat preventing a container from being deleted? Some locking issue?

I have the Elastic Stack (Elasticsearch, Logstash, Kibana) plus Filebeat and Metricbeat running on an Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-115-generic x86_64) host.

Each piece of Elastic software runs in its own Docker container, so I have five containers for the pieces mentioned above.
When I try to update one of the containers (using docker-compose), I get a complaint that the container's internal directory can't be removed.

For example, when I try to update Kibana, which means removing the Kibana container and then recreating it:

docker-compose up -d kibana
elk_elasticsearch_1 is up-to-date
Recreating elk_kibana_1

ERROR: for kibana  Unable to remove filesystem for 
77e6efe9cf26c25af63037be3f77013e49a529e4ae6fe93ef75b1b246124c546: remove /var/lib/docker/containers/77e6efe9cf26c25af63037be3f77013e49a529e4ae6fe93ef75b1b246124c546/shm: device or resource busy
ERROR: Encountered errors while bringing up the project.

The following command confirms that this container ID belongs to the Kibana container:
docker ps -a | grep 77e6e

At first I suspected Filebeat of locking the container directory, as it mounts the Docker host directory /var/lib/docker/containers into the Filebeat container. But stopping Filebeat didn't resolve the issue.
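For context, the Filebeat service mounts that directory roughly like this (a simplified sketch, not my exact service definition; the build path and the :ro flag are just illustrative):

  filebeat:
    build: ./filebeat
    volumes:
      # this is the mount that made me suspect Filebeat in the first place
      - /var/lib/docker/containers:/var/lib/docker/containers:ro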

So next I stopped Metricbeat, and found that this allowed the container to be deleted:

docker-compose stop metricbeat
Stopping elk_metricbeat_1 ... done

docker-compose rm kibana      
Going to remove elk_kibana_1
Are you sure? [yN] y
Removing elk_kibana_1 ... done

Here is my metricbeat configuration:

#==========================  Modules configuration ============================
metricbeat.modules:

#------------------------------- Docker Module -------------------------------
- module: docker
  metricsets: ["container", "cpu", "diskio", "healthcheck", "info", "memory", "network"]
  hosts: ["unix:///var/run/docker.sock"]
  enabled: true
  period: 10s

#-------------------------- Elasticsearch output -------------------------------
output.elasticsearch:
  # Boolean flag to enable or disable the output module.
  enabled: true

  hosts: ["elasticsearch:9200"]

  username: "${ELASTICSEARCH_USERNAME:elastic}"
  password: "${ELASTICSEARCH_PASSWORD:changeme}"

And here is the relevant part of the docker-compose configuration (the metricbeat service):

version: "2.1"

services:
  metricbeat:
    build: ./metricbeat
    group_add: ['root', 'adm']
    user: root
    volumes:
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /:/hostfs:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: metricbeat -e -system.hostfs=/hostfs
    depends_on: {elasticsearch: {condition: service_healthy}}
    stop_grace_period: 1m0s
    restart: always

I have also tried this on macOS (Sierra) with Docker for Mac version 17.06.0-ce-mac18 (18433), and the issue does not show up there. So it might be something related to the OS platform; just guessing.

Hmm, this is weird behavior; I cannot think of anything in Metricbeat that could cause it. @andrewkroh does this ring a bell to you?

@MartinAhrer can you reproduce this consistently? It would be useful to dump the output of lsof /var/lib/docker/containers/<container id>/shm while this is happening.
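Something along these lines, run right after the failed docker-compose up, should show whether any process still has files open under that path (the ID below is just the one from your error output):

# container ID taken from the error message above
CID=77e6efe9cf26c25af63037be3f77013e49a529e4ae6fe93ef75b1b246124c546
sudo lsof /var/lib/docker/containers/$CID/shm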

I have tested this on another system (Red Hat 4.8.5-1, kernel 3.10.0-514.el7.x86_64). The effect is reproducible there as well, but the container can only be removed after stopping both Filebeat and Metricbeat.

So now I suspect this is a side effect that is not directly related to the way Filebeat/Metricbeat are implemented. lsof on the Ubuntu 14.x system does not report any open files.

I have found https://github.com/moby/moby/issues/22260 and https://github.com/moby/moby/issues/17902 which describe similar problems.
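Since lsof shows no open files, the "device or resource busy" presumably comes from the container's shm mount leaking into another container's mount namespace, which is what those issues describe. A check along these lines (just a sketch, reusing the container ID from the error above) should list the processes whose mount namespace still holds that shm mount:

# list processes whose mount namespace still contains the container's shm mount
CID=77e6efe9cf26c25af63037be3f77013e49a529e4ae6fe93ef75b1b246124c546
sudo grep -l "containers/$CID/shm" /proc/*/mountinfo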

I think we can close this discussion. I'm fairly confident that this is an OS/kernel/Docker-core related issue.


Thank you for reporting it anyway! You never know :slight_smile:
