My indices are being deleted automatically

Dear Community,

I'm new to ELK and need your assistance in finding out why my indices are getting deleted. If you could let me know what the issue is, I would be very grateful. Thank you so much!

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: US-App
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

My ILM policy file:

{
    "policy": {
        "phases": {
            "hot": {
                "min_age": "0ms",
                "actions": {
                    "rollover": {
                        "max_age": "1d",
                        "max_size": "15gb"
                    }
                }
            },
            "warm": {
                "min_age": "30d",
                "actions": {}
            },
            "cold": {
                "min_age": "60d",
                "actions": {}
            },
            "delete": {
                "min_age": "90d",
                "actions": {
                    "delete": {}
                }
            }
        }
    }
}

Log file:

[2021-10-03T01:30:00,000][INFO ][o.e.x.s.SnapshotRetentionTask] [node-1] starting SLM retention snapshot cleanup task
[2021-10-03T01:30:00,019][INFO ][o.e.x.s.SnapshotRetentionTask] [node-1] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2021-10-03T01:52:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [node-1] triggering scheduled [ML] maintenance tasks
[2021-10-03T01:52:00,001][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Deleting expired data
[2021-10-03T01:52:00,005][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [node-1] Successfully deleted [0] unused stats documents
[2021-10-03T01:52:00,005][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-1] Completed deletion of expired ML data
[2021-10-03T01:52:00,005][INFO ][o.e.x.m.MlDailyMaintenanceService] [node-1] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2021-10-03T02:43:26,407][INFO ][o.e.x.i.IndexLifecycleTransition] [node-1] moving index [.kibana-event-log-7.14.1-000001] from [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] to [{"phase":"hot","action":"rollover","name":"attempt-rollover"}] in policy [kibana-event-log-policy]
[2021-10-03T16:04:05,801][INFO ][o.e.i.g.GeoIpDownloader  ] [node-1] updated geoip database [GeoLite2-Country.mmdb]
[2021-10-03T16:04:05,932][INFO ][o.e.i.g.DatabaseRegistry ] [node-1] database file changed [/tmp/elasticsearch-6707443377040494765/geoip-databases/7DyZs6-mSvGOFQGxL2pQmw/GeoLite2-Country.mmdb], reload database...
[2021-10-03T16:04:05,932][INFO ][o.e.i.g.DatabaseReaderLazyLoader] [node-1] evicted [0] entries from cache after reloading database [/tmp/elasticsearch-6707443377040494765/geoip-databases/7DyZs6-mSvGOFQGxL2pQmw/GeoLite2-Country.mmdb]
[2021-10-03T16:04:06,379][INFO ][o.e.i.g.DatabaseRegistry ] [node-1] database file changed [/tmp/elasticsearch-6707443377040494765/geoip-databases/7DyZs6-mSvGOFQGxL2pQmw/GeoLite2-City.mmdb], reload database...
[2021-10-03T16:04:06,379][INFO ][o.e.i.g.DatabaseReaderLazyLoader] [node-1] evicted [0] entries from cache after reloading database [/tmp/elasticsearch-6707443377040494765/geoip-databases/7DyZs6-mSvGOFQGxL2pQmw/GeoLite2-City.mmdb]
[2021-10-03T16:23:26,409][INFO ][o.e.x.i.IndexLifecycleTransition] [node-1] moving index [us-prod-2021.10.02-000011] from [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] to [{"phase":"hot","action":"rollover","name":"attempt-rollover"}] in policy [usprod-ilm-policy]
[2021-10-03T16:23:26,534][INFO ][o.e.c.m.MetadataCreateIndexService] [node-1] [us-prod-2021.10.03-000012] creating index, cause [rollover_index], templates [us-prod], shards [1]/[1]
[2021-10-03T16:23:26,640][INFO ][o.e.x.i.IndexLifecycleTransition] [node-1] moving index [us-prod-2021.10.03-000012] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [usprod-ilm-policy]
[2021-10-03T16:23:26,684][INFO ][o.e.x.i.IndexLifecycleTransition] [node-1] moving index [us-prod-2021.10.02-000011] from [{"phase":"hot","action":"rollover","name":"attempt-rollover"}] to [{"phase":"hot","action":"rollover","name":"wait-for-active-shards"}] in policy [usprod-ilm-policy]
[2021-10-04T00:49:42,670][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [.kibana_task_manager_7.14.1_001/2jwyY2FzT624J6C90ep34g] deleting index
[2021-10-04T00:49:43,010][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [.apm-agent-configuration/pKpW1LGoSYOSEQF6fLWKgQ] deleting index
[2021-10-04T00:49:43,155][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [.kibana_7.14.1_001/CketnYHrTpe_AHxwT50NFw] deleting index
[2021-10-04T00:49:43,297][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [.tasks/7I4MdADSSNShVJfBMVMonA] deleting index
[2021-10-04T00:49:43,436][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [us-prod-2021.09.25-000002/CsVUY9SISuSw4sfqFV7_0g] deleting index
[2021-10-04T00:49:43,639][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [us-prod-2021.09.24-000001/vPc3HJQLTH6LCftGfwLHjA] deleting index
[2021-10-04T00:49:43,975][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [filebeat-7.13.4-2021.09.30-000001/hKCehBHXQyCENTqPoIYxsA] deleting index
[2021-10-04T00:49:44,123][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [us-prod-2021.09.26-000003/qOEVT6e7QxKUDlljbqh19g] deleting index
[2021-10-04T00:49:44,314][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [.apm-custom-link/NJpvNJEnRg-xWj0Ygc8Y6w] deleting index
[2021-10-04T00:49:44,457][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [us-prod-2021.09.29-000007/PFuc2Hg3Q1iwducEJ_70Vw] deleting index
[2021-10-04T00:49:44,935][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [us-prod-2021.09.28-000005/dC72MROLSMincLc1Et7stw] deleting index
[2021-10-04T00:49:45,307][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [us-prod-2021.09.27-000004/5emakJUdRAu7nWdJrhT0fA] deleting index
[2021-10-04T00:49:47,643][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [us-prod-2021.09.30-000008/OXYwH9UUSHGEtv0sZ9zqCQ] deleting index
[2021-10-04T00:49:48,065][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [.kibana-event-log-7.14.1-000001/0Bv6m8izR3O6kj4uZwrgYQ] deleting index
[2021-10-04T00:49:48,205][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [us-prod-2021.09.30-000009/U8yz2_0LSpGnVe5l19BDfw] deleting index
[2021-10-04T00:49:48,652][INFO ][o.e.c.m.MetadataDeleteIndexService] [node-1] [.kibana-event-log-7.14.1-000002/v77tdw-8TLyJbiGtkTB1Ng] deleting index
[2021-10-04T00:49:48,804][INFO ][o.e.c.m.MetadataCreateIndexService] [node-1] [beat-awseb] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2021-10-04T00:49:48,911][INFO ][o.e.c.m.MetadataMappingService] [node-1] [beat-awseb/qGfk4ZoMSwOq-MsrdC37uQ] create_mapping [_doc]
[2021-10-04T00:49:48,947][INFO ][o.e.c.m.MetadataCreateIndexService] [node-1] [us-prod] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2021-10-04T00:49:49,059][INFO ][o.e.c.m.MetadataMappingService] [node-1] [us-prod/iJLQx2s5QDGdaRK8grVwLA] create_mapping [_doc]
[2021-10-04T00:49:49,704][INFO ][o.e.c.m.MetadataMappingService] [node-1] [us-prod/iJLQx2s5QDGdaRK8grVwLA] update_mapping [_doc]

It looks like someone mistakenly used the Delete Index API with a wildcard. I'm not sure how you could confirm this through audit logs, but it seems likely, because all of the data was deleted at the same time.
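
Until the cluster is secured, one thing you can do right away is require explicit index names for destructive operations, using the action.destructive_requires_name setting that already appears (commented out) at the bottom of your elasticsearch.yml. A minimal sketch, assuming a 7.x node:

# elasticsearch.yml -- reject wildcard and _all deletes; every index must be
# deleted by its full name, so a single DELETE /us-prod-* can no longer wipe everything.
action.destructive_requires_name: true

This is also a dynamic cluster setting, so it can be applied at runtime through the cluster settings API without restarting the node.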

It looks like you have made your cluster available to external hosts (network.host: 0.0.0.0) without enabling security. This is very dangerous, as anyone can reach your cluster and do anything with it, e.g. delete all your data. You should secure your cluster immediately.

There have been reports of bots scanning the Internet for unsecured Elasticsearch clusters and hijacking or deleting data.
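
As a starting point, here is a minimal sketch of what turning on the bundled security features might look like in elasticsearch.yml. This assumes a self-managed 7.x node and that you have already generated certificates with elasticsearch-certutil; adjust paths and hosts to your environment:

# Enable authentication and authorization (off by default on self-managed 7.x):
xpack.security.enabled: true

# Encrypt node-to-node traffic (required once security is enabled on a multi-node cluster):
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

# Bind to site-local (private) addresses instead of all interfaces, unless the node
# really has to be reachable from the public Internet:
network.host: _site_

After restarting, set passwords for the built-in users (for example with the elasticsearch-setup-passwords tool) and update Kibana and Beats to authenticate against the cluster.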

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.