My Elasticsearch indexes have been mysteriously deleted; how do I debug the cause?

I just checked my Elasticsearch server and the indexes are gone; not a single index remains. I don't see anything useful in /var/logs/elasticsearch and am not sure what to look for.

  1. How do I debug what went wrong and how it got deleted?

  2. My Elasticsearch instance is accessible via a public IP on port 9200, and I haven't done anything to secure it; could that be the cause? Also, how do I secure it when I still need to call it myself from my own set of servers?

Mostly trying to figure out what went wrong and how to prevent it from happening again.
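One common way to keep the cluster reachable from your own servers while shutting out the internet is a firewall allowlist on the HTTP port. A minimal sketch, assuming iptables and placeholder addresses (203.0.113.x is documentation address space, not your real servers):

```shell
# Hypothetical sketch: restrict port 9200 to trusted application servers.
# Replace the placeholder source IPs with your own servers' addresses.
iptables -A INPUT -p tcp --dport 9200 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -s 203.0.113.11 -j ACCEPT
# Drop everyone else on the HTTP port:
iptables -A INPUT -p tcp --dport 9200 -j DROP
# The transport port (9300) should likewise be unreachable from the internet
# unless your nodes talk over a private network:
iptables -A INPUT -p tcp --dport 9300 -j DROP
```

Binding Elasticsearch to a private interface (the `network.host` setting in elasticsearch.yml) instead of a public one is a complementary step, so that the node never listens on the public IP in the first place.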

Exposing things to the internet without any sort of security is a Bad Thing.

See the thread "Ransom attack on Elasticsearch cluster?" and then

Does Elasticsearch log delete-index requests anywhere, i.e. when it happened and, if so, from which IP? I know I need to secure things and am working on that, but it would still be useful to know when it happened and which IP triggered it. Is there any log for that?

Not by default; you will need to install an audit-logging plugin to get that.
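For the 5.x series this means X-Pack, whose audit logging records events such as index deletions along with the origin address. A minimal sketch, assuming X-Pack is already installed and a default package-install config path (yours may differ):

```shell
# Hypothetical sketch: enable X-Pack audit logging on a 5.x node.
# /etc/elasticsearch/elasticsearch.yml is the default path for package
# installs; adjust for your layout.
cat >> /etc/elasticsearch/elasticsearch.yml <<'EOF'
xpack.security.audit.enabled: true
# Optionally write audit events to a dedicated index as well as the logfile:
xpack.security.audit.outputs: [ index, logfile ]
EOF
# Restart the node so the setting takes effect:
systemctl restart elasticsearch
```

Note that audit logging only helps for future incidents; it cannot tell you anything about deletions that happened before it was enabled.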

Also see for instructions on how to protect your cluster going forward.

I had the same.

But no index with a ransom note :-(

I have had the same thing happen two or three times over the past few days, twice today alone.
Using version 5.1.1
Additional ES plugins used: discovery-ec2, ingest-attachment, repository-s3

The first time was preceded by a ResourceLeakDetector error:

elasticsearch_1  | [2017-01-15T15:36:12,423][ERROR][i.n.u.ResourceLeakDetector] LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetection.level=advanced' or call ResourceLeakDetector.setLevel() See for more information.
elasticsearch_1  | [2017-01-16T04:16:00,784][WARN ][o.e.c.s.ClusterService   ] [fFMWjXz] cluster state update task [delete-index [[workflow_private0/3t6AaGLsRXi5-jcEYH48Tg], [infrastructure0/oPjqaDXnRN27Lx-e8yEzMw], [content_private0/vk-NPOsTRdSJZjNbyQn6BA], [management/T2OllFSgRv6HPD6LaDfBCQ], [cluster/85CYxbVDQYm6fuYkO2PoGA], [content_public0/PV3oTLY4Q36FMjPKuOhSsw], [workflow_public0/nTIzxcGMQ3a0ifWOe4ta0w], [cpriv_htb/RESMZPU2T9ahDB0bbAE4Sg], [.kibana/3Kqif065T0ii9EpRpYSXjw]]] took [1.4m] above the warn threshold of 30s

The second time:

elasticsearch_1  | [2017-01-16T14:27:53,432][WARN ][o.e.c.s.ClusterService   ] [fFMWjXz] cluster state update task [delete-index [[content_public0/D1dBk1ZbToifU2XHCK9eeQ], [workflow_public0/NW_1-4qiSFi_8UW3UhNdhQ], [infrastructure0/WX1RIWKDS3uKlvgSvEDp8w], [workflow_private0/HM0Ks0xKRdWLhnmPrlCF3g], [management/vp0BgpKGRFKmgzoH38f3Fw], [content_private0/A2JshrHYR1e2Vc8Mw0W_vg], [.kibana/Kuesne1vR0mLb-NMwSDHgg], [cluster/88Iwr1-bTPSr-efMK2g7Ig]]] took [1.4m] above the warn threshold of 30s

Previously this test-environment (admittedly exposed to the internet) had been running fine for months.

The first message has, AFAIK, been fixed in 5.1.2.
The second problem, I think, is related to this:

No, this appears to be a different and far more serious problem.

Can you provide any details about what was happening on the cluster before you saw this message?

So I had restored indexes from a 5.0.0-alpha5 S3 repository, using the 'restore' function, into an ES instance running 5.1.1. I can send you logs if you give me an email address.

Unsecured Elasticsearch servers on public IPs are never safe: anyone who knows the host address can simply delete the indexes, for any reason. So I suggest securing your cluster of nodes rather than worrying about it after something has gone wrong.

Use X-Pack, introduced for the latest Elastic versions; it includes Shield for securing the cluster and Marvel for monitoring it.

I also suggest keeping backups of your indexes; you can use a script to take them regularly.
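Such a backup script would normally use the snapshot API. A minimal sketch with a filesystem repository (the repository name `my_backup`, the location, and the localhost address are placeholders; `path.repo` must also be set in elasticsearch.yml, and since the thread mentions the repository-s3 plugin, an `"type": "s3"` repository would work the same way):

```shell
# Hypothetical sketch: register a snapshot repository, then snapshot all
# indexes. Run once to register:
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mount/backups/my_backup" }
}'
# Take a date-stamped snapshot of the whole cluster (could be run from cron):
curl -XPUT "http://localhost:9200/_snapshot/my_backup/snapshot_$(date +%Y%m%d)?wait_for_completion=true"
```

A snapshot stored off the node (or in S3) is the one thing that reliably survives an attacker deleting the indexes themselves.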

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.