Importing dashboards into Elasticsearch

Hello everyone,

I'm looking for a bit of help with importing my dashboards through Filebeat into an ELK stack running with Docker on my computer. When using a normal ELK setup (without Docker), I imported my dashboards with the following command in the terminal:

./filebeat -c filebeat_dashboards.yml

This is what the yml file looks like:
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the
# website.
setup.dashboards.directory: ./h_dashboard/

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  space.id: "hmm3"

Now, this is what my docker-compose file looks like:
version: "2.1"
# The environment variable "ELASTIC_VERSION" is used throughout this file to
# specify the version of the images to run. The default is set in the
# '.env' file in this folder. It can be overridden with any normal
# technique for setting environment variables, for example:
#   ELASTIC_VERSION=5.5.1 docker-compose up
# Additionally, the user can control:
#   * the total memory assigned to the ES container through the variable ES_MEM_LIMIT e.g. ES_MEM_LIMIT=2g
#   * the memory assigned to the ES JVM through the variable ES_JVM_HEAP e.g. ES_JVM_HEAP=1024m
#   * the password used for the elastic, logstash_system and kibana accounts through the variable ES_PASSWORD
#   * the mysql root password through the var MYSQL_ROOT_PASSWORD
#   * the default index pattern used in kibana via the var DEFAULT_INDEX_PATTERN
#   * the ES heap size through tt
# REF:
services:
  #Elasticsearch container
  elasticsearch:
    build:
      context: config/elasticsearch/
    container_name: elasticsearch
    hostname: elasticsearch
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms${ES_JVM_HEAP} -Xmx${ES_JVM_HEAP}"
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/elasticsearch.yml
      - esdata:/usr/share/elasticsearch/data
    #Port 9200 is available on the host
    ports: ['9200:9200']
    #Healthcheck to confirm availability of ES. Other containers wait on this.
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "-u", "elastic:${ES_PASSWORD}", "http://localhost:9200/_cat/health"]
    #Internal network for the containers
    networks: ['stack']

  #Kibana container
  kibana:
    build:
      context: config/kibana/
    container_name: kibana
    hostname: kibana
    volumes:
      - ./config/kibana/kibana.yml:/usr/share/kibana/kibana.yml
    #Port 5601 accessible on the host
    ports: ['5601:5601']
    networks: ['stack']
    #We don't start Kibana until the ES instance is ready
    depends_on: ['elasticsearch']
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/login"]
      retries: 6

  #Filebeat container
  filebeat:
    build:
      context: config/beats/filebeat
    container_name: filebeat
    hostname: filebeat
    user: root
    volumes:
      #Mount the filebeat configuration so users can make edits
      - ./config/beats/filebeat/dashboards/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      #Mount the prospectors directory. Users can in turn add prospectors to this directory and they will be dynamically loaded
      #- ./config/beats/filebeat/prospectors.d/:/usr/share/filebeat/prospectors.d/
      #Mount the test volume
      #- ./.kitchen/logs/:/usr/share/filebeat/logs/
      #Named volume fbdata. This is used to persist the registry file between restarts, so as to avoid data duplication
      - fbdata:/usr/share/filebeat/data/
    networks: ['stack']
    command: filebeat -e -strict.perms=false
    restart: on-failure
    depends_on: ['elasticsearch','kibana']

  #Configure Stack container. This short-lived container configures the stack once Elasticsearch is available.
  #More specifically, using a script it sets passwords and sets a default index pattern.
  configure_stack:
    container_name: configure_stack
    volumes: ['./init/']
    command: ['/bin/bash', '-c', 'cat /usr/local/bin/ | tr -d "\r" | bash']
    networks: ['stack']
    depends_on: ['elasticsearch','kibana']

volumes:
  #Es data
  esdata:
    driver: local
  #Filebeat data i.e. registry file
  fbdata:
    driver: local

networks: {stack: {}}

Well, I would like to do something similar when running ELK inside Docker containers.

Hope somebody can help me,

Hey @nb03briceno,

Assuming that you want to run everything with Docker Compose, I see you have a short-lived container to configure the stack. You could use a similar pattern to run `filebeat setup`.

Something like this:

  install_dashboards:
    container_name: install_dashboards
    volumes:
      - './init/'
      - './filebeat_dashboards.yml:/usr/share/filebeat/filebeat.yml'
    command: ['/bin/bash', '-c', 'cat /usr/local/bin/ | tr -d "\r" | bash']
    networks: ['stack']
    depends_on: ['elasticsearch','kibana']
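
One detail to watch: inside the compose network, Kibana is reachable through its service name rather than localhost, so the mounted filebeat_dashboards.yml would need something along these lines (the `kibana` hostname comes from the compose services above; the rest mirrors your existing config):

```yaml
setup.dashboards.enabled: true
setup.dashboards.directory: ./h_dashboard/
setup.kibana:
  host: "kibana:5601"
```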

The script should check for the availability of Kibana and Elasticsearch, and then run `filebeat setup --dashboards -c /usr/share/filebeat/filebeat.yml`.
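
A minimal sketch of what that script could look like (the endpoints, retry count, and config path are assumptions based on the compose file and the command above):

```shell
#!/bin/bash
# Sketch of an init script for the install_dashboards container.
# Hostnames, retry counts and paths are assumptions, not tested values.
set -euo pipefail

# Poll an HTTP endpoint until it answers, or give up after N attempts.
wait_for() {
  local url=$1 retries=${2:-30}
  until curl -s -f -o /dev/null "$url"; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
      echo "Timed out waiting for $url" >&2
      return 1
    fi
    sleep 2
  done
}

main() {
  # Service names resolve on the 'stack' network defined in the compose file.
  wait_for "http://elasticsearch:9200/_cat/health"
  wait_for "http://kibana:5601/login"
  filebeat setup --dashboards -c /usr/share/filebeat/filebeat.yml
}

# main "$@"   # uncomment when running inside the container
```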