I have been running ELK in Docker for several years with no issues until now. I am currently running a couple of entirely separate Dockerized ELK stacks, on different servers in different environments, and I am experiencing the same issue on both.
I'm using Kibana to access logs from various pieces of software, with each day getting its own index. Generally it's useful to keep these indices for a month and then remove them. Unfortunately, each morning when I log on, the previous day's index is gone and only the index created on the current day is present. This applies to every index I have. Everything shows as healthy and there is plenty of disk space and free memory, but all past indices are gone without a trace.
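To illustrate, simply listing the indices over the REST API each morning (assuming Elasticsearch is reachable on localhost:9200, as in the compose file further down) confirms it; only the current day's index is returned:

# list all indices, sorted by name
curl -s 'http://localhost:9200/_cat/indices?v&s=index'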
I have looked into this extensively without success. No index lifecycle policy was ever set up on either of the ELK stacks, so it would appear to be either something that happens by default in Kibana 7.4.0 or something caused by the docker-compose file.
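For reference, this can be verified with checks along these lines (standard Elasticsearch 7.x APIs, again assuming localhost:9200 as above):

# list every ILM policy defined in the cluster
curl -s 'http://localhost:9200/_ilm/policy?pretty'
# show any lifecycle settings attached to existing indices
curl -s 'http://localhost:9200/_all/_settings/index.lifecycle*?pretty'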
The docker-compose.yml file I'm using is as follows:
version: '3.7'
services:
  # Elasticsearch Docker Images: https://www.docker.elastic.co/
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    restart: always
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    restart: always

volumes:
  esdata1:
    driver: local
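For completeness, the stack is brought up and restarted with the standard Compose commands shown below; as far as I understand, the esdata1 named volume persists across all of these unless it is removed explicitly with -v, which I never do:

# start or recreate the stack in the background
docker-compose up -d
# stop and remove the containers and network; the named volume esdata1 is kept
docker-compose down
# this variant would also delete the esdata1 volume (not something I run)
docker-compose down -v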