Kibana redirects to the "Select your space" page after a day and all data is reset (ver: 7.8.0)

I have set up Kibana with Elasticsearch and Filebeat, and it has been running well with the default Filebeat index and my own dashboard.

After one day, Kibana redirects to the "Select your space" page with "no spaces match search criteria".
This can be fixed by restarting Kibana and Elasticsearch, but all the data is gone.
(screenshot: Kibana error page)

I did some research and people said the .kibana index had been deleted, but I checked and it is still there on the server. After restarting the services, Kibana created the indices .kibana_2 and .kibana_3 in order to work.
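For anyone checking the same thing, a minimal way to list the saved-objects indices and see which one the .kibana alias currently points to (assuming the default index names) is:

GET _cat/indices/.kibana*?v
GET _alias/.kibana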

ILM: "max_age" : "30d" should not make effect to this issue.

[.kibana_task_manager_1] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]
[.kibana_2] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]
[.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]

My full log (it's cleared every day):

[2020-08-24T01:30:00,006][INFO ][o.e.x.s.SnapshotRetentionTask] [kibana] starting SLM retention snapshot cleanup task
[2020-08-24T01:30:00,009][INFO ][o.e.x.s.SnapshotRetentionTask] [kibana] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2020-08-24T02:10:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [kibana] triggering scheduled [ML] maintenance tasks
[2020-08-24T02:10:00,001][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [kibana] Deleting expired data
[2020-08-24T02:10:00,001][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [kibana] Completed deletion of expired ML data
[2020-08-24T02:10:00,002][INFO ][o.e.x.m.MlDailyMaintenanceService] [kibana] Successfully completed [ML] maintenance tasks
[2020-08-24T02:53:35,221][INFO ][o.e.c.m.MetadataCreateIndexService] [kibana] [.kibana] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []
[2020-08-24T02:53:35,427][INFO ][o.e.c.m.MetadataMappingService] [kibana] [.kibana/_NH4oEceQlWNy3hPVv3SDg] create_mapping [_doc]
[2020-08-24T02:53:35,550][INFO ][o.e.c.r.a.AllocationService] [kibana] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana][0]]]).
[2020-08-24T02:53:47,101][INFO ][o.e.c.m.MetadataMappingService] [kibana] [.kibana/_NH4oEceQlWNy3hPVv3SDg] update_mapping [_doc]
[2020-08-24T02:53:47,160][INFO ][o.e.c.m.MetadataMappingService] [kibana] [.kibana/_NH4oEceQlWNy3hPVv3SDg] update_mapping [_doc]
[2020-08-24T02:54:45,929][INFO ][o.e.c.m.MetadataMappingService] [kibana] [.kibana/_NH4oEceQlWNy3hPVv3SDg] update_mapping [_doc]

Do you have ILM applied to the .kibana index/alias?
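One way to check that (assuming the default .kibana alias and index names) is:

GET .kibana*/_ilm/explain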

Thanks Warkolm for taking a look.
I left ILM at the defaults; here's the output of GET _ilm/policy:

{
  "ilm-history-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2020-08-07T10:51:48.002Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        },
        "delete" : {
          "min_age" : "90d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  },
  "watch-history-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2020-08-07T10:51:47.851Z",
    "policy" : {
      "phases" : {
        "delete" : {
          "min_age" : "7d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  },
  "kibana-event-log-policy" : {
    "version" : 1,
    "modified_date" : "2020-08-07T11:37:16.312Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        },
        "delete" : {
          "min_age" : "90d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  },
  "ml-size-based-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2020-08-07T10:51:47.933Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb"
            }
          }
        }
      }
    }
  },
  "filebeat" : {
    "version" : 1,
    "modified_date" : "2020-08-07T10:56:27.859Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        }
      }
    }
  },
  "slm-history-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2020-08-07T10:51:48.047Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        },
        "delete" : {
          "min_age" : "90d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  }
}

One more thing: besides the Filebeat indices, the index list contains a lot of indices named ***-meow. Are these Kibana defaults, or has someone attacked the cluster?
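For reference, a quick way to list every index by name and spot the unexpected ones is:

GET _cat/indices?v&s=index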

Ahh, that's your issue.

Please see "Some indexes have been deleted, now I see indexes called meow?"
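In short, those "meow" indices are left behind by an automated attack that wipes Elasticsearch clusters reachable from the internet without authentication. A minimal hardening sketch for a single-node 7.x install (the setting names are standard, but adjust paths and values for your environment):

# elasticsearch.yml
network.host: 127.0.0.1        # bind to localhost or a private interface, not a public address
xpack.security.enabled: true   # turn on authentication (included in the free basic license on 7.x)

Then set passwords for the built-in users:

bin/elasticsearch-setup-passwords interactive

and point Kibana at one of those users:

# kibana.yml
elasticsearch.username: "kibana"
elasticsearch.password: "<password set above>"

If the node has to listen on a non-loopback address, transport TLS typically also needs to be configured; the 7.8 security docs cover that part.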

I'm going to rebuild Kibana and will update you with the result, Warkolm. Thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.