While creating an index pattern (after it successfully matches an index), regardless of which timestamp field I choose (or none at all), I get the following error:
Could not locate that index-pattern (id: false), click here to re-create it
Clicking does nothing, of course
I have lost two days of extremely valuable time on this error while creating a new index pattern. It is baffling, since I did nothing differently from the previous one, and I only have three patterns currently.
Let me add that the error message is rather unhelpful, and that nothing is logged anywhere.
I have tried virtually everything to resolve it, with no luck: I deleted the indices, reset Filebeat, changed loggers, nothing... I simply can't create an index pattern.
Can anyone from the team tell us WHAT IS HAPPENING when this error pops up? The logging is too poor to debug it.
I know this topic has been posted before (twice), but look for yourselves: there is no solution, nor any official acknowledgment that this is a bug!
@Marius_Dragomir The verbose log showed nothing!
I fixed it, possibly by running the command that makes sure all indices are read/write.
At an earlier point, my hard drive was filling up and the indices went read-only. I extended the disk and set the current indices (not the closed indices of rotated days) back to read/write. Trying to create an index pattern for a newer, read/write index would still pop the id: false error and log nothing in the verbose log, so I insist this is a bug: the read-only indices had nothing to do with my index pattern, and the logging is tragic. What is "id: false" even supposed to mean??
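For anyone hitting the same wall: when the disk flood-stage watermark is reached, Elasticsearch sets `index.blocks.read_only_allow_delete` on the affected indices, and "making the indices read/write" amounts to clearing that setting. A minimal sketch using Python's `requests` against an assumed local, unauthenticated cluster (the host, port, and the assumption that this is the block involved are mine, not confirmed in this thread):

```python
import requests

ES = "http://localhost:9200"  # assumed local, unauthenticated cluster

# Remove the read-only block that Elasticsearch applies to indices when
# the disk flood-stage watermark is hit; setting it to null clears it.
resp = requests.put(
    f"{ES}/_all/_settings",
    json={"index.blocks.read_only_allow_delete": None},
)
resp.raise_for_status()
print(resp.json())  # expect {"acknowledged": true}
```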
@Team
I would suggest you make a note of this to improve in a future release: either kill the bug or improve the logging!
Any more info on which indices you had closed? I'm trying it now with 1 of the 20 indices that match my pattern closed, and it does create an index pattern, just skipping the closed one.
Are any of your indices read-only in Elasticsearch?
I'm not sure they were "closed", or whether we mean the same thing. When I say closed, I mean indices from previous days, such as wazuh-2018-30-07, with today's running index being wazuh-2018-31-07.
If you are trying to reproduce the error, maybe make sure that at least one index has gone into read-only mode.
This is the behaviour when the HDD fills up.
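If it helps with reproducing, here is a sketch of forcing one index into that state without actually filling a disk, by applying the same block the flood-stage watermark would set (the index name is a placeholder and the cluster details are assumed, as above):

```python
import requests

ES = "http://localhost:9200"   # assumed local cluster
INDEX = "wazuh-2018-30-07"     # placeholder: any index matching the pattern

# Simulate the full-disk behaviour by applying the same block that the
# flood-stage watermark sets automatically.
resp = requests.put(
    f"{ES}/{INDEX}/_settings",
    json={"index.blocks.read_only_allow_delete": True},
)
resp.raise_for_status()
print(resp.json())
```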
Found the root cause as well: the .kibana index was also marked as read-only, which is why creating an index pattern failed (index patterns are saved in the .kibana index). We will see what can be done to improve this; at the very least we will add a message and a check that the index is not read-only.
There was an issue open about something like this that we never got to the bottom of, but now we have the root cause. You can follow it here if you're interested.
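For anyone who wants to confirm this is what is biting them, a quick way to list any read-only blocks still applied per index, including on .kibana (same assumed local cluster as in the earlier sketches):

```python
import requests

ES = "http://localhost:9200"  # assumed local cluster

# List every index.blocks.* setting currently applied, per index.
resp = requests.get(f"{ES}/_all/_settings/index.blocks.*")
resp.raise_for_status()
for index, settings in resp.json().items():
    blocks = settings.get("settings", {}).get("index", {}).get("blocks", {})
    if blocks:
        print(index, blocks)  # e.g. .kibana {'read_only_allow_delete': 'true'}
```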