Frustrating error: Could not locate that index-pattern (id: false), click here to re-create it

There is definitely a bug in version 6.3.x.

While creating an index pattern (after it successfully matches an index), regardless of which timestamp field I choose (or none at all), I get the following:

Could not locate that index-pattern (id: false), click here to re-create it

Clicking does nothing, of course.

I have lost two days of extremely valuable time to this error while creating a new index pattern. It is unbelievable, as I did nothing differently from the previous one, and I currently have only three patterns.

Let me add that the error message is unhelpful, and that nothing is logged anywhere.
I have tried virtually everything to resolve it, and it seems simply unfixable: I deleted the indices, I reset Filebeat, I changed loggers, nothing. I can't create an index pattern.

Can anyone from the team let us know WHAT IS HAPPENING when this error pops up? The logging is too poor to debug with.

I know this topic has been posted before (twice), but look for yourselves: there is no solution, or official acknowledgment that this is a bug!



Are there any errors in your Kibana or Elasticsearch logs? Or anything in the browser console?

Also, can you give us more details? Are you using cross-cluster search? Are you using wildcards when creating the index pattern?


@Marius_Dragomir Verbose logging showed nothing!
I fixed it, possibly by running the command to make sure all indices are read/write.

At an earlier point, my hard drive was filling up and the indices went read-only. I extended the partition and set the current indices (not the closed indices of rotated days) back to read/write. Trying to create an index pattern for a newer read/write index would still pop the id: false error and log nothing in the verbose log, so I insist that this is a bug: the read-only indices had nothing to do with my index pattern, and the logging is tragic. What is "id: false" even supposed to mean??
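For anyone debugging this later, a quick way to see which indices currently carry a block is to query the index settings. A sketch, assuming the Kibana Dev Tools console; the filter_path parameter just narrows the response to the blocks section:

GET /_all/_settings?filter_path=*.settings.index.blocks

Indices without any block set will simply be absent from the filtered response.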

@Team

I would suggest you make a note of this to improve in a future release: either kill the bug or improve the logging!

Any more info on which indices you had closed? I'm testing now with 1 of the 20 indices that match my pattern closed, and it does create the index pattern, simply skipping the closed one.

Did you have one of the system indices as closed?

Are any of your indices read-only in Elasticsearch?
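For reference, one way to check the open/closed status and health of every index at once in the Kibana Dev Tools console (a sketch; the h parameter selects the output columns):

GET /_cat/indices?v&h=index,status,health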

I'm not sure they were "closed", or whether we mean the same thing. By closed I mean indices from previous days, like wazuh-2018-30-07, with today's running index being wazuh-2018-31-07.

If you are trying to reproduce the error, make sure that at least one index has gone into read-only mode.
This is the behaviour when the HDD fills up.
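The read-only block is applied automatically when a node crosses the flood-stage disk watermark. The effective thresholds can be inspected with something like the following (a sketch; include_defaults exposes the built-in values and filter_path narrows the response):

GET /_cluster/settings?include_defaults=true&filter_path=defaults.cluster.routing.allocation.disk.watermark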

OK, I'll keep looking into this and run it in a VM to reproduce the full-HDD behaviour.
Steps are something like this, I assume:

  1. Get a bunch of indices that fit in an index pattern.
  2. Fill the HDD.
  3. Clear space or extend the partition.
  4. Try and create a new index pattern?

Let's say I have the following indices:


a) Space fills up, indices turn read-only.
b) I extend the LVM partition.
c) I make each index read/write by running the following:

PUT /index1*/_settings
{
  "index.blocks.read_only_allow_delete": null
}

PUT /index2*/_settings
{
  "index.blocks.read_only_allow_delete": null
}

I might have used auto-complete instead of "*" for the index names here.

Logging continues beautifully.

d) After that, I create a new index3.

e) Then I go to create the index pattern and get the id: false error.

f) After two days I try this:

PUT /*/_settings
{
  "index.blocks.read_only_allow_delete": null
}

Note the * as the index name.

This fixed it.

Managed to reproduce the issue locally, thanks for following up. I'll investigate to find the root cause and create an issue for it.

Found the root cause as well: the .kibana index was also marked read-only, which is why creating an index pattern failed (index patterns are saved in the .kibana index). We'll see what can be done to improve this; at the very least we will add a message and check that the index is not read-only.
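For anyone who lands here with the same symptom: since index patterns live in .kibana, clearing the block on that index specifically should be enough. A minimal sketch of the call, using the same setting as above:

PUT /.kibana/_settings
{
  "index.blocks.read_only_allow_delete": null
}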


There was an earlier issue about something like this that we didn't get to the bottom of, but now we have the root cause. You can follow it here if you're interested.

I'm glad I could help! I hope to see the change integrated in a future release.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.