I made an "Index _settings update" by doing the following (roughly the curl sequence sketched after the list):
- POST to /index/_close
- PUT to /index/_settings
- POST to /index/_open
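Something like this (a sketch only - 'my_index', the host, and the 'settings.json' file name are placeholders, not the real names):

# settings.json holds the analysis block shown below under "Settings update"
curl -XPOST 'http://localhost:9200/my_index/_close'
curl -XPUT 'http://localhost:9200/my_index/_settings' -d @settings.json
curl -XPOST 'http://localhost:9200/my_index/_open'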
The server logged an "error" since the format was wrong - but it also saved the update, thereby corrupting the index settings in an "irreversible" way.
The only way out at this point seems to be: patch the existing index settings just enough that the index can be "opened" again, create a new index with the correct settings, and then copy the data over with the bulk API.
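That fallback would look roughly like this (again a sketch with made-up names; 'my_index_v2' and 'corrected_settings.json' are placeholders):

# corrected_settings.json would wrap the intended analysis block
# (shown after the error message below) under a top-level "settings" key
curl -XPUT 'http://localhost:9200/my_index_v2' -d @corrected_settings.json
# then copy documents from the old index with scan/scroll reads and /_bulk writes
# (or a client library helper that does both)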
Are there ways of recovering from this situation other than recreating the index? (Especially if the analyzers in question are not yet in use?)
P.S. The purpose here is to learn - in this case I mostly have test data, where losing it is a pain but not the end of the world. Doing this in production would be bad!
Details follow:
Settings update:
{
  "analysis": {
    "filter": {
      "synonym": {
        "type": "synonym",
        "format": "wordnet",
        "synonyms_path": "analysis/wn_s.pl"
      },
      "ascii_folding": {
        "type": "asciifolding"
      },
      "analyzer": {
        "fulltext_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "word_delimiter",
            "synonym"
          ]
        }
      }
    }
  }
}
So far so good. But on closer look, I messed up - I added "analyzer" under "filter".
And the error was:
org.elasticsearch.ElasticsearchIllegalArgumentException: token filter [analyzer] must have a type associated with it
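If I understand my own mistake correctly, what I meant to send would look like this, with "analyzer" as a sibling of "filter" under "analysis" rather than nested inside it:

{
  "analysis": {
    "filter": {
      "synonym": {
        "type": "synonym",
        "format": "wordnet",
        "synonyms_path": "analysis/wn_s.pl"
      },
      "ascii_folding": {
        "type": "asciifolding"
      }
    },
    "analyzer": {
      "fulltext_analyzer": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [
          "lowercase",
          "word_delimiter",
          "synonym"
        ]
      }
    }
  }
}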
But the problem is that now I can't 'remove' the 'analyzer' entry. My only option seems to be to add 'type: standard' to it and live with the corrupted setting in the index.
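That workaround would be something like the following (same close / update / open cycle as above, placeholder index name; 'standard' is an existing token filter type, so the stray entry at least becomes valid and the index should open again):

curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "analysis": {
    "filter": {
      "analyzer": { "type": "standard" }
    }
  }
}'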
The following would have helped avoid this situation:
- The docs could warn that most settings API updates are irreversible (you can only "add" a new dictionary, never remove one - even new fields only get "merged" into the existing settings)
- Server-side validation of the JSON - or simply not saving the update when the settings API returns an 'error'