Lost index without any warning

Hi there

I just noticed that I lost my index, which is a bit strange. Could you please help me with the log file below?

The command

curl -X GET "localhost:9200/_cat/indices?v"

only gives me this:

health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   read_me nI0gVeKyT0ONgatgc2mt8Q   1   1          1            0      5.2kb          5.2kb

Here are the logs:

[2022-10-28T01:30:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [host6-vm103] triggering scheduled [ML] maintenance tasks
[2022-10-28T01:30:00,018][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [host6-vm103] Deleting expired data
[2022-10-28T01:30:00,021][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [host6-vm103] Successfully deleted [0] unused stats documents
[2022-10-28T01:30:00,021][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [host6-vm103] Completed deletion of expired ML data
[2022-10-28T01:30:00,021][INFO ][o.e.x.m.MlDailyMaintenanceService] [host6-vm103] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2022-10-28T03:30:00,001][INFO ][o.e.x.s.SnapshotRetentionTask] [host6-vm103] starting SLM retention snapshot cleanup task
[2022-10-28T03:30:00,007][INFO ][o.e.x.s.SnapshotRetentionTask] [host6-vm103] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2022-10-28T04:19:15,796][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [5.7s/5709ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:19:15,793][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor@7a06491, interval=5s}] took [21820ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:19:15,807][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [5.7s/5709063825ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:19:16,011][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [55.6s/55624ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:19:16,011][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [55.6s/55624057798ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:20:25,847][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [10.1s/10136ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:20:26,301][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [10.1s/10136054782ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:20:26,304][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@60ae2c2e, interval=1s}] took [24296ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:20:26,509][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [11s/11087ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:20:26,510][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [11s/11087611740ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:21:14,023][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [10.7s/10744ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:21:14,035][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [10.7s/10744622301ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:21:14,037][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@60ae2c2e, interval=1s}] took [18605ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:22:09,894][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [12.3s/12380ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:22:09,918][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [12.3s/12379344699ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:22:09,965][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@60ae2c2e, interval=1s}] took [16388ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:22:10,140][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [11.9s/11913ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:22:10,140][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [11.9s/11913883764ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:23:13,806][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [11.2s/11229ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:24:07,418][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor@7a06491, interval=5s}] took [30932ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:24:06,637][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [11.2s/11229466016ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:26:14,855][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [3m/183903ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:26:14,915][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [3m/183902280355ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:27:05,306][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [7.3s/7331ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:28:00,601][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [7.3s/7330668524ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:28:00,602][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor@7a06491, interval=5s}] took [10411ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:28:00,928][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [55.7s/55777ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:28:00,928][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [55.7s/55777520388ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:28:52,257][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [7.4s/7421ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:28:52,309][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@60ae2c2e, interval=1s}] took [16234ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:28:52,338][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [7.4s/7420589988ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:29:56,209][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor@7a06491, interval=5s}] took [30050ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:31:13,830][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor@7a06491, interval=5s}] took [23304ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:30:46,662][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [9.7s/9768ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:31:13,832][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [9.7s/9767650295ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:31:14,041][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [34.9s/34943ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:31:14,042][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [34.9s/34943640361ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:31:14,097][WARN ][o.e.m.f.FsHealthService  ] [host6-vm103] health check of [/var/lib/elasticsearch] took [34943ms] which is above the warn threshold of [5s]
[2022-10-28T04:32:12,736][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [11.5s/11562ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:32:12,780][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [11.5s/11562453421ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:32:12,822][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@60ae2c2e, interval=1s}] took [13558ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:33:15,125][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@60ae2c2e, interval=1s}] took [12520ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:33:15,125][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [21.8s/21838ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:33:15,139][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [21.8s/21838245188ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:34:13,874][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [28.6s/28650ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:34:13,886][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [28.6s/28650113922ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:35:30,630][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [7.4s/7487ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:35:30,640][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [7.4s/7486827662ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:35:30,706][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@60ae2c2e, interval=1s}] took [16398ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:35:30,841][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [31.8s/31887ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:35:30,901][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [31.8s/31886839469ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T04:36:17,263][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [16.9s/16968ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-10-28T04:36:17,355][WARN ][o.e.t.ThreadPool         ] [host6-vm103] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@60ae2c2e, interval=1s}] took [17168ms] which is above the warn threshold of [5000ms]
[2022-10-28T04:36:17,407][WARN ][o.e.t.ThreadPool         ] [host6-vm103] timer thread slept for [16.9s/16968401936ns] on relative clock which is above the warn threshold of [5000ms]
[2022-10-28T10:38:37,027][INFO ][o.e.n.Node               ] [host6-vm103] stopping ...
[2022-10-28T10:38:37,033][INFO ][o.e.r.s.FileSettingsService] [host6-vm103] shutting down watcher thread
[2022-10-28T10:38:37,033][INFO ][o.e.r.s.FileSettingsService] [host6-vm103] watcher service stopped
[2022-10-28T10:38:37,036][INFO ][o.e.x.w.WatcherService   ] [host6-vm103] stopping watch service, reason [shutdown initiated]
[2022-10-28T10:38:37,037][INFO ][o.e.x.w.WatcherLifeCycleService] [host6-vm103] watcher has stopped and shutdown
[2022-10-28T10:38:37,505][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [host6-vm103] [controller/1124] [Main.cc@176] ML controller exiting
[2022-10-28T10:38:37,509][INFO ][o.e.x.m.p.NativeController] [host6-vm103] Native controller process has stopped - no new native processes can be started
[2022-10-28T10:38:37,540][INFO ][o.e.n.Node               ] [host6-vm103] stopped
[2022-10-28T10:38:37,540][INFO ][o.e.n.Node               ] [host6-vm103] closing ...
[2022-10-28T10:38:37,549][INFO ][o.e.i.g.DatabaseReaderLazyLoader] [host6-vm103] evicted [0] entries from cache after reloading database [/tmp/elasticsearch-1615719037859139686/geoip-databases/UI0R0pBoS9uKzIFv06GM7w/GeoLite2-Country.mmdb]
[2022-10-28T10:38:37,549][INFO ][o.e.i.g.DatabaseReaderLazyLoader] [host6-vm103] evicted [0] entries from cache after reloading database [/tmp/elasticsearch-1615719037859139686/geoip-databases/UI0R0pBoS9uKzIFv06GM7w/GeoLite2-ASN.mmdb]
[2022-10-28T10:38:37,549][INFO ][o.e.i.g.DatabaseReaderLazyLoader] [host6-vm103] evicted [0] entries from cache after reloading database [/tmp/elasticsearch-1615719037859139686/geoip-databases/UI0R0pBoS9uKzIFv06GM7w/GeoLite2-City.mmdb]
[2022-10-28T10:38:37,555][INFO ][o.e.n.Node               ] [host6-vm103] closed
[2022-10-28T10:38:43,522][INFO ][o.e.n.Node               ] [host6-vm103] version[8.4.1], pid[5220], build[deb/2bd229c8e56650b42e40992322a76e7914258f0c/2022-08-26T12:11:43.232597118Z], OS[Linux/4.15.18-28-pve/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/18.0.2/18.0.2+9-61]
[2022-10-28T10:38:43,528][INFO ][o.e.n.Node               ] [host6-vm103] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-10-28T10:38:43,528][INFO ][o.e.n.Node               ] [host6-vm103] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-10058833919887562642, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms2g, -Xmx2g, -XX:MaxDirectMemorySize=1073741824, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=deb, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-10-28T10:38:45,241][INFO ][c.a.c.i.j.JacksonVersion ] [host6-vm103] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-10-28T10:38:46,285][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [aggs-matrix-stats]
[2022-10-28T10:38:46,286][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [analysis-common]
[2022-10-28T10:38:46,286][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [constant-keyword]
[2022-10-28T10:38:46,286][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [data-streams]
[2022-10-28T10:38:46,286][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [frozen-indices]
[2022-10-28T10:38:46,287][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [ingest-attachment]
[2022-10-28T10:38:46,287][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [ingest-common]
[2022-10-28T10:38:46,287][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [ingest-geoip]
[2022-10-28T10:38:46,287][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [ingest-user-agent]
[2022-10-28T10:38:46,287][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [kibana]
[2022-10-28T10:38:46,288][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [lang-expression]
[2022-10-28T10:38:46,288][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [lang-mustache]
[2022-10-28T10:38:46,288][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [lang-painless]
[2022-10-28T10:38:46,288][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [legacy-geo]
[2022-10-28T10:38:46,288][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [mapper-extras]
[2022-10-28T10:38:46,288][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [mapper-version]
[2022-10-28T10:38:46,289][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [old-lucene-versions]
[2022-10-28T10:38:46,289][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [parent-join]
[2022-10-28T10:38:46,289][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [percolator]
[2022-10-28T10:38:46,289][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [rank-eval]
[2022-10-28T10:38:46,289][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [reindex]
[2022-10-28T10:38:46,290][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [repositories-metering-api]
[2022-10-28T10:38:46,290][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [repository-azure]
[2022-10-28T10:38:46,290][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [repository-encrypted]
[2022-10-28T10:38:46,290][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [repository-gcs]
[2022-10-28T10:38:46,290][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [repository-s3]
[2022-10-28T10:38:46,290][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [repository-url]
[2022-10-28T10:38:46,291][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [runtime-fields-common]
[2022-10-28T10:38:46,291][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [search-business-rules]
[2022-10-28T10:38:46,291][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [searchable-snapshots]
[2022-10-28T10:38:46,291][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [snapshot-based-recoveries]
[2022-10-28T10:38:46,291][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [snapshot-repo-test-kit]
[2022-10-28T10:38:46,291][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [spatial]
[2022-10-28T10:38:46,292][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [systemd]
[2022-10-28T10:38:46,292][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [transform]
[2022-10-28T10:38:46,292][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [transport-netty4]
[2022-10-28T10:38:46,292][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [unsigned-long]
[2022-10-28T10:38:46,292][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [vector-tile]
[2022-10-28T10:38:46,292][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [wildcard]
[2022-10-28T10:38:46,293][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-aggregate-metric]
[2022-10-28T10:38:46,293][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-analytics]
[2022-10-28T10:38:46,293][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-async]
[2022-10-28T10:38:46,293][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-async-search]
[2022-10-28T10:38:46,293][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-autoscaling]
[2022-10-28T10:38:46,294][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-ccr]
[2022-10-28T10:38:46,294][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-core]
[2022-10-28T10:38:46,294][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-deprecation]
[2022-10-28T10:38:46,294][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-enrich]
[2022-10-28T10:38:46,294][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-eql]
[2022-10-28T10:38:46,294][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-fleet]
[2022-10-28T10:38:46,295][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-graph]
[2022-10-28T10:38:46,295][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-identity-provider]
[2022-10-28T10:38:46,295][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-ilm]
[2022-10-28T10:38:46,295][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-logstash]
[2022-10-28T10:38:46,295][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-ml]
[2022-10-28T10:38:46,295][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-monitoring]
[2022-10-28T10:38:46,295][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-ql]
[2022-10-28T10:38:46,296][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-rollup]
[2022-10-28T10:38:46,296][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-security]
[2022-10-28T10:38:46,296][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-shutdown]
[2022-10-28T10:38:46,296][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-sql]
[2022-10-28T10:38:46,296][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-stack]
[2022-10-28T10:38:46,296][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-text-structure]
[2022-10-28T10:38:46,296][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-voting-only-node]
[2022-10-28T10:38:46,297][INFO ][o.e.p.PluginsService     ] [host6-vm103] loaded module [x-pack-watcher]
[2022-10-28T10:38:46,297][INFO ][o.e.p.PluginsService     ] [host6-vm103] no plugins loaded
[2022-10-28T10:38:48,723][INFO ][o.e.e.NodeEnvironment    ] [host6-vm103] using [1] data paths, mounts [[/ (rpool/subvol-103-disk-0)]], net usable_space [76gb], net total_space [100gb], types [zfs]
[2022-10-28T10:38:48,724][INFO ][o.e.e.NodeEnvironment    ] [host6-vm103] heap size [2gb], compressed ordinary object pointers [true]
[2022-10-28T10:38:48,779][INFO ][o.e.n.Node               ] [host6-vm103] node name [host6-vm103], node ID [UI0R0pBoS9uKzIFv06GM7w], cluster name [elasticsearch], roles [data_cold, data, remote_cluster_client, master, data_warm, data_content, transform, data_hot, ml, data_frozen, ingest]
[2022-10-28T10:38:51,708][INFO ][o.e.x.s.Security         ] [host6-vm103] Security is disabled
[2022-10-28T10:38:51,757][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [host6-vm103] [controller/5251] [Main.cc@123] controller (64 bit): Version 8.4.1 (Build c0373714f3bc4b) Copyright (c) 2022 Elasticsearch BV
[2022-10-28T10:38:52,154][INFO ][o.e.t.n.NettyAllocator   ] [host6-vm103] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-10-28T10:38:52,172][INFO ][o.e.i.r.RecoverySettings ] [host6-vm103] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2022-10-28T10:38:52,203][INFO ][o.e.d.DiscoveryModule    ] [host6-vm103] using discovery type [multi-node] and seed hosts providers [settings]
[2022-10-28T10:38:53,082][INFO ][o.e.n.Node               ] [host6-vm103] initialized
[2022-10-28T10:38:53,083][INFO ][o.e.n.Node               ] [host6-vm103] starting ...
[2022-10-28T10:38:53,120][INFO ][o.e.x.s.c.f.PersistentCache] [host6-vm103] persistent cache index loaded
[2022-10-28T10:38:53,121][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [host6-vm103] deprecation component started
[2022-10-28T10:38:53,227][INFO ][o.e.t.TransportService   ] [host6-vm103] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2022-10-28T10:38:53,574][WARN ][o.e.b.BootstrapChecks    ] [host6-vm103] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2022-10-28T10:38:53,575][WARN ][o.e.c.c.ClusterBootstrapService] [host6-vm103] this node is locked into cluster UUID [4XdG_9tsR0i8VDWjZlvZDQ] but [cluster.initial_master_nodes] is set to [host6-vm103]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts
[2022-10-28T10:38:53,713][INFO ][o.e.c.s.MasterService    ] [host6-vm103] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {host6-vm103}{UI0R0pBoS9uKzIFv06GM7w}{1oYn-5bkRjSvD2urDK19pw}{host6-vm103}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw} completing election], term: 10, version: 298, delta: master node changed {previous [], current [{host6-vm103}{UI0R0pBoS9uKzIFv06GM7w}{1oYn-5bkRjSvD2urDK19pw}{host6-vm103}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}
[2022-10-28T10:38:53,768][INFO ][o.e.c.s.ClusterApplierService] [host6-vm103] master node changed {previous [], current [{host6-vm103}{UI0R0pBoS9uKzIFv06GM7w}{1oYn-5bkRjSvD2urDK19pw}{host6-vm103}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}, term: 10, version: 298, reason: Publication{term=10, version=298}
[2022-10-28T10:38:53,790][INFO ][o.e.r.s.FileSettingsService] [host6-vm103] starting file settings watcher ...
[2022-10-28T10:38:53,794][INFO ][o.e.r.s.FileSettingsService] [host6-vm103] file settings service up and running [tid=72]
[2022-10-28T10:38:53,797][INFO ][o.e.h.AbstractHttpServerTransport] [host6-vm103] publish_address {91.121.60.116:9200}, bound_addresses {[::]:9200}
[2022-10-28T10:38:53,797][INFO ][o.e.n.Node               ] [host6-vm103] started {host6-vm103}{UI0R0pBoS9uKzIFv06GM7w}{1oYn-5bkRjSvD2urDK19pw}{host6-vm103}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{ml.allocated_processors=12, ml.max_jvm_size=2147483648, ml.machine_memory=8589934592, xpack.installed=true}
[2022-10-28T10:38:53,916][INFO ][o.e.l.LicenseService     ] [host6-vm103] license [f51dd226-a502-49e2-be57-cacc457c0457] mode [basic] - valid
[2022-10-28T10:38:53,920][INFO ][o.e.g.GatewayService     ] [host6-vm103] recovered [6] indices into cluster_state
[2022-10-28T10:38:54,278][ERROR][o.e.i.g.GeoIpDownloader  ] [host6-vm103] exception during geoip databases update
org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active
	at org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:134) ~[?:?]
	at org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:274) ~[?:?]
	at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:102) ~[?:?]
	at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:48) ~[?:?]
	at org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:42) ~[elasticsearch-8.4.1.jar:?]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769) ~[elasticsearch-8.4.1.jar:?]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.4.1.jar:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[?:?]
	at java.lang.Thread.run(Thread.java:833) ~[?:?]
[2022-10-28T10:38:54,409][INFO ][o.e.c.r.a.AllocationService] [host6-vm103] current.health="YELLOW" message="Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.ds-.logs-deprecation.elasticsearch-default-2022.09.20-000001][0]]])." previous.health="RED" reason="shards started [[.ds-.logs-deprecation.elasticsearch-default-2022.09.20-000001][0]]"
[2022-10-28T10:38:54,703][INFO ][o.e.i.g.DatabaseNodeService] [host6-vm103] successfully loaded geoip database file [GeoLite2-Country.mmdb]
[2022-10-28T10:38:54,781][INFO ][o.e.i.g.DatabaseNodeService] [host6-vm103] successfully loaded geoip database file [GeoLite2-ASN.mmdb]
[2022-10-28T10:38:55,582][INFO ][o.e.i.g.DatabaseNodeService] [host6-vm103] successfully loaded geoip database file [GeoLite2-City.mmdb]

Thanks

Have a look at the contents of the index that was created. I believe this indicates that your cluster is not secured, is accessible from the internet, and has been accessed and wiped by an external party.

How is this possible? And how can I check which command deleted the index, if that was the case?

Thanks

The index that was created is a common sign that someone has accessed your cluster from the internet. Is your cluster secured? (Given that you are getting the index list without authentication or HTTPS, I would assume it is not.) Which version of Elasticsearch are you using?
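If security has never been configured, a minimal hardening sketch for /etc/elasticsearch/elasticsearch.yml looks like the fragment below. These are the standard setting names; enabling security additionally requires setting user passwords and TLS certificates, so treat this as a starting point, not a complete recipe:

```yaml
# Minimal hardening sketch for elasticsearch.yml -- adjust before use
xpack.security.enabled: true           # require authentication on the REST API
xpack.security.http.ssl.enabled: true  # serve the REST API over HTTPS
network.host: 127.0.0.1                # bind to localhost only, unless remote access is needed
```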

I use the latest version:

{
  "name" : "host6-vm103",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "4XdG_9tsR0i8VDWjZlvZDQ",
  "version" : {
    "number" : "8.4.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "2bd229c8e56650b42e40992322a76e7914258f0c",
    "build_date" : "2022-08-26T12:11:43.232597118Z",
    "build_snapshot" : false,
    "lucene_version" : "9.3.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Have you configured security?

Is the node bound to a public network interface?
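You can check the bound addresses yourself. A sketch using the standard `_nodes/http` API, run on the host itself (note that the startup log above already shows `publish_address {91.121.60.116:9200}`, which suggests a public binding):

```shell
# Sketch: list the HTTP publish/bound addresses of each node
curl -s "localhost:9200/_nodes/http?pretty" | grep -E 'publish_address|bound_address'
```

If anything other than a loopback or private address appears here, the REST API is reachable from outside the host.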

Your instance is indeed reachable over the internet and not secured. Based on the address in the logs, I was able to access it and got this:

{
  "name" : "host6-vm103",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "4XdG_9tsR0i8VDWjZlvZDQ",
  "version" : {
    "number" : "8.4.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "2bd229c8e56650b42e40992322a76e7914258f0c",
    "build_date" : "2022-08-26T12:11:43.232597118Z",
    "build_snapshot" : false,
    "lucene_version" : "9.3.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

I recommend you shut this node down immediately and secure it.
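Immediate containment could look like the following sketch. It assumes a systemd-managed service and ufw as the firewall; substitute your distro's own tooling as needed:

```shell
# Sketch: cut off external access first, then stop the node
sudo ufw deny 9200/tcp             # block the REST port from outside (assumes ufw)
sudo ufw deny 9300/tcp             # block the transport port as well
sudo systemctl stop elasticsearch  # stop the node until it is secured
```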

Here is the content of the document in the index that was created:

{
  "took" : 19,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 1, "relation" : "eq" },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "read_me",
        "_id" : "1",
        "_score" : 1.0,
        "_ignored" : [ "message.keyword" ],
        "_source" : {
          "message" : "All your data is a backed up. You must pay 0.05 BTC to 12KDdVSHvaB46gGTS7pDiBACyWtx5pv5Hs 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: rambler+3q969@onionmail.org and/or eladb@mailnesia.com and you will receive a link to download your database dump."
        }
      }
    ]
  }
}
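For reference, a response like the one above comes from a plain search on the attacker-created index, e.g.:

```shell
# Sketch: dump the single document from the read_me index
curl -s "localhost:9200/read_me/_search?pretty"
```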

As you can see, it is clear that your cluster has been accessed by a third party.


Yes, thanks for pointing this out. I will secure it ASAP.

This might be better right now :wink:

Thanks again for your help

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.