Elasticsearch Index Completely Missing

Hey there,

I've had an Elasticsearch cluster running since late June. My search wasn't working today, and when I looked at the app I was getting this error:
elasticsearch.exceptions.NotFoundError: TransportError(404, 'index_not_found_exception', 'no such index')
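
For reference, this is roughly how I confirmed what was left, using the same Python client the app uses (the host, credentials, and "my-index" below are placeholders, not our real ones):

# Sketch only: list whatever indices the cluster still has.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"], http_auth=("elastic", "changeme"))

# cat.indices returns a plain-text table of every index with doc count and size
print(es.cat.indices(v=True))

# indices.exists checks for one specific index ("my-index" is a placeholder)
print(es.indices.exists(index="my-index"))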

When I went to Kibana, other than saying my license had expired (for X-Pack, monitoring, etc.) ... there were no messages. It was asking me to create a new index because it couldn't find any ... the only one it found was an index called README.

When I ran lsblk, I could see the drives were still mounted but only had 194 MB of data (it used to be close to 300-500 GB). None of the 3 nodes has its data anymore. It's just gone.

When I looked at the commands run on the server there was nothing unusual, and the login activity shows it was only me. Does anyone have any idea how I can find out where the data went, or how I can recover it?

I looked at the cluster health and it shows that there are 3 unassigned shards ... but with all 3 nodes online and the hard drives empty, where could they be? I have always had automatic shard allocation enabled.
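
For completeness, this is the kind of check I ran (again with the Python client; the host and credentials are placeholders):

# Sketch: look at cluster health and at where the shards are (or aren't).
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"], http_auth=("elastic", "changeme"))

health = es.cluster.health()
print(health["status"], health["number_of_nodes"], health["unassigned_shards"])

# cat.shards lists every shard with its state and the node it is assigned to
print(es.cat.shards(v=True))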

... There has been NO data indexed recently, and I'm sure no one on our end deleted the index or altered it in any way ... we definitely didn't accidentally delete the index from the command line. There have only been queries to the index using the Python Elasticsearch client.

I don't suppose this cluster is exposed, unprotected, to the internet, is it?

We have been running X-Pack from the beginning and have never shared our endpoints or our ES logins publicly.

What license level though?

We're running ES 6.3.0.

Our trial just expired 2-3 days ago, I believe ... so we do not have a license(?). Not sure what you mean by license level.

There are a few levels - https://www.elastic.co/subscriptions
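
If you're not sure what you're on, you can ask the cluster itself. A rough sketch with the Python client against the 6.x licence endpoint (host and credentials are placeholders):

# Sketch: read the current licence from a 6.x cluster.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"], http_auth=("elastic", "changeme"))

licence = es.transport.perform_request("GET", "/_xpack/license")
print(licence["license"]["type"], licence["license"]["status"])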

If you were running a basic license, then that doesn't include Security. And based on that README index you mentioned, someone has probably found your cluster while browsing on the internet, deleted all your indices, and then tried to ransom you for the backups.

So the ES, Kibana, and other passwords initially generated when we were creating the index were not real? The ES server was exposed the whole time?

I can't comment on that as I don't know what you did.

But, again, based on that index that does exist, it seems my explanation is a possible outcome, as we've seen similar things in the past. I would suggest you look in that index; there may be a document in there with more info.
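
Something along these lines, with the Python client you're already using, should pull it back out (host and credentials are placeholders):

# Sketch: dump whatever documents the "readme" index contains.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"], http_auth=("elastic", "changeme"))

result = es.search(index="readme", body={"query": {"match_all": {}}})
for hit in result["hits"]["hits"]:
    print(hit["_id"], hit["_type"], hit["_source"])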

So if you are using X-Pack with a basic username and password setup, and your licence expires, what happens?

You're right, inside the index is this message:

_id: 1
_index: readme
_score: 1
_type: howtogetmydataback
btc: ALL YOUR INDEX AND ELASTICSEARCH DATA HAVE BEEN BACKED UP AT OUR SERVERS, TO RESTORE SEND 0.1 BTC TO THIS BITCOIN ADDRESS ******** THEN SEND AN EMAIL WITH YOUR SERVER IP, DO NOT WORRY, WE CAN NEGOCIATE IF CAN NOT PAY
mail: ********
note: ******

But what I don't understand is: how come we had to connect to ES with a password? How come we had Elasticsearch, Kibana, and Logstash accounts created? How come we had to log in to the Kibana panel too, if we didn't have X-Pack security the whole time?

Here's how we originally generated the logins:
sudo /usr/share/elasticsearch/bin/x-pack/setup-passwords auto

I believe our version of ES was updated a week after we created the index ... the new version automatically included X-Pack, but we were still using the generated logins to access Elasticsearch data/Kibana.
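
For anyone else wondering the same thing, this is roughly how we later checked whether Security was actually on (host and credentials are placeholders; /_xpack is the 6.x endpoint, and as far as I understand it "available" reflects the licence while "enabled" reflects xpack.security.enabled in elasticsearch.yml):

# Sketch: check whether the Security feature is available (licence) and enabled (config).
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"], http_auth=("elastic", "changeme"))

info = es.transport.perform_request("GET", "/_xpack")
security = info["features"]["security"]
print("available:", security["available"])
print("enabled:", security["enabled"])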

Do you mind if we reach out to you directly to discuss this further?


Yes please.

Can you please tell us how it ends? How did they do it? It's interesting to know the techniques they used and to try to secure against/avoid them.

I know this won't help protect the privacy of your data when ES is open to the internet, since intruders can still read it, but it is highly advisable to set the following setting to true so that the entire cluster at least can't be deleted with a single command:

To disable deleting indices via wildcards or _all, set the action.destructive_requires_name setting in the config to true. This setting can also be changed via the cluster update settings API.
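
For example, via the cluster update settings API with the Python client (host and credentials are placeholders):

# Sketch: require explicit index names for destructive operations,
# so DELETE /_all or DELETE /* is rejected.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"], http_auth=("elastic", "changeme"))

es.cluster.put_settings(body={
    "persistent": {
        "action.destructive_requires_name": True
    }
})

The same key can also go straight into elasticsearch.yml.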

You can (and should) restrict the DELETE method at your reverse proxy (Nginx or Apache) as well.

Also, I think trial products are best tested in a development setup, where rolling back when the trial expires is more manageable than in production.

I hope you have made some snapshots.
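
If you haven't, a minimal setup looks something like this (the repository name and path are placeholders, and the path has to be whitelisted under path.repo in elasticsearch.yml on every node):

# Sketch: register a shared-filesystem snapshot repository and take a snapshot.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"], http_auth=("elastic", "changeme"))

es.snapshot.create_repository(repository="my_backup", body={
    "type": "fs",
    "settings": {"location": "/mnt/es_backups"}
})

# Snapshot everything and wait for it to finish.
es.snapshot.create(repository="my_backup", snapshot="snapshot_1",
                   wait_for_completion=True)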


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.