I'm writing a script that uses the Elasticsearch Node.js client to bulk-create indices. Some of these indices may already exist, so I first check whether each index exists and delete it before recreating it. When that happens, the delete succeeds, but the subsequent index creation often fails with a 400 Bad Request error.
Code:
async function createIndex() {
  // Check if the index already exists
  const exists = await client.indices.exists({
    index: 'foo-index',
  });
  if (exists.body === true) {
    // Delete the index if it exists
    const deleteResult = await client.indices.delete({
      index: 'foo-index',
    });
  }
  // Re-create the index
  // This often throws if the delete has just occurred
  const createResult = await client.indices.create({
    index: 'foo-index',
    body: { ... },
  });
}
Error message:
{
  "name": "ResponseError",
  "meta": {
    "body": "400 Bad Request",
    "statusCode": 400,
    "headers": {
      "content-type": "text/plain; charset=utf-8",
      "connection": "close"
    },
    "warnings": null,
    "meta": {
      "context": null,
      "request": {
        "params": {
          "method": "PUT",
          "path": "/foo-index",
          "body": {...},
          "querystring": "",
          "headers": {
            "User-Agent": "elasticsearch-js/7.4.0 (linux 4.4.0-18362-Microsoft-x64; Node.js v10.14.2)",
            "Content-Type": "application/json",
            "Content-Encoding": "gzip",
            "Accept-Encoding": "gzip,deflate"
          },
          "timeout": 30000
        },
        "options": {
          "warnings": null
        },
        "id": 21
      },
      "name": "elasticsearch-js",
      "connection": {...},
      "attempts": 0,
      "aborted": false
    }
  }
}
I thought this might be a timing issue, so I added a 1500 ms delay between the delete and the index creation, but that didn't make a difference. I also tried running a separate script that did the bulk delete first, so the delete in this script was always skipped, and then everything ran fine. That isn't a good long-term solution for us, though, so I'd still like to figure out why this isn't working. Any help would be greatly appreciated!
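For context, the delayed version I tried looked roughly like this. The `sleep` helper and `recreateIndex` wrapper are paraphrased names, not the exact production code, and `delayMs` is a parameter I've added here so the pause is configurable:

```javascript
// Paraphrased sketch of the delete -> wait -> create flow I tried.
// `client` is an elasticsearch-js client instance; `sleep` and
// `recreateIndex` are my own helper names, not part of the library.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function recreateIndex(client, name, body, delayMs = 1500) {
  // Check whether the index already exists
  const exists = await client.indices.exists({ index: name });
  if (exists.body === true) {
    await client.indices.delete({ index: name });
    // The 1500 ms pause that did not fix the 400 on create
    await sleep(delayMs);
  }
  // Re-create the index with the given mappings/settings
  return client.indices.create({ index: name, body });
}
```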