Another Kibana instance appears to be migrating the index - Docker environment

I have three Elasticsearch nodes running with docker-compose. When I start the additional Kibana service, it gets stuck on:

Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana.

I already tried deleting the index, but the message doesn't change, or Elasticsearch responds with a timeout:

Failed to connect to 9200 port 80: Connection timed out
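
Side note: that curl error means the URL was given as just 9200, so curl treated it as a hostname and defaulted to port 80. The delete request needs the full scheme, host, and port, e.g. (assuming Elasticsearch is published on localhost:9200, as in the compose file below):

curl -XDELETE 'http://localhost:9200/.kibana_1'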

Does anyone know what the issue is here?

docker-compose file:

version: '3.4'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: es01
    environment:
      #- discovery.type=single-node
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic

  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic

  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic

  kib01:
    image: docker.elastic.co/kibana/kibana:7.8.1
    container_name: kib01
    depends_on:
      - es01
      - es02
      - es03
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: http://es01:9200
    networks:
      - elastic

  client:
    image: appropriate/curl:latest
    depends_on:
      - es01
      - es02
      - es03
    networks:
      - elastic
    command: sh -c "curl es01:9200 && curl kib01:5601"
    restart: on-failure

  dash_app:
    build: .
    ports:
      - 0.0.0.0:8050:8050
    depends_on:
      - es01
      - es02
      - es03
      - kib01
    networks:
      - elastic

  # mapping:
  #   image: appropriate/curl:latest
  #   depends_on:
  #     - es01
  #     - es02
  #     - es03
  #   networks:
  #     - elastic
  #   # JSON keys need double quotes; they are escaped inside the shell's double-quoted payload
  #   command: |
  #     sh -c 'curl -v -XPUT es01:9200/urteile -H "Content-Type: application/json" -d "{\"mappings\": {\"properties\": {\"date\": {\"type\": \"date\"}}}}"'

  # web:
  #   build: .
  #   ports:
  #     - 8000:8000
  #   depends_on:
  #     - es01
  #     - es02
  #     - es03
  #   networks:
  #     - elastic


volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  elastic:
    driver: bridge

Quick Update: the problem was that the indices had been set to read-only and no longer permitted writes, because the flood-stage disk watermark had been exceeded (the host was low on disk space). See my answer here: Elasticsearch Docker: flood stage disk watermark [95%] exceeded
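
For reference, the settings update looks roughly like this; it clears the read_only_allow_delete block that Elasticsearch puts on indices once the flood-stage watermark is hit (assuming the cluster is reachable on localhost:9200):

curl -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'

Free up disk space first, though, otherwise Elasticsearch will simply re-apply the block.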

After manually updating the settings, the indices .kibana_1 and .kibana_task_manager_1 can be deleted with

curl -XDELETE 'http://localhost:9200/.kibana_1' -H 'Content-Type: application/json' -u elastic
curl -XDELETE 'http://localhost:9200/.kibana_task_manager_1' -H 'Content-Type: application/json' -u elastic

(With -u elastic and no password given, curl prompts for it.)

After that, Kibana starts up and completes its migration normally.
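
One way to double-check that the cluster is writable again and that Kibana has recreated its indices (assuming the same localhost:9200 setup):

curl 'http://localhost:9200/_cluster/health?pretty'
curl 'http://localhost:9200/_cat/indices/.kibana*?v'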
