Deleting and recreating index sometimes gives 400 Bad Request error

I'm writing a script using the Elasticsearch Node.js client to bulk-create indices. Some of these indices may already exist, so I first check whether each index exists and delete it before recreating it. When that path is taken, the delete succeeds, but the subsequent index creation often fails with a 400 Bad Request error.


async function createIndex() {
  // Check whether the index exists
  const exists = await client.indices.exists({
    index: 'foo-index',
  });

  if (exists.body === true) {
    // Delete the index if it exists
    const deleteResult = await client.indices.delete({
      index: 'foo-index',
    });
  }

  // Re-create the index.
  // This often throws if the delete had just occurred.
  const createResult = await client.indices.create({
    index: 'foo-index',
    body: { ... },
  });
}
Error message:

  "name": "ResponseError",
  "meta": {
    "body": "400 Bad Request",
    "statusCode": 400,
    "headers": {
      "content-type": "text/plain; charset=utf-8",
      "connection": "close"
    "warnings": null,
    "meta": {
      "context": null,
      "request": {
        "params": {
          "method": "PUT",
          "path": "/foo-index",
          "body": {...},
          "querystring": "",
          "headers": {
            "User-Agent": "elasticsearch-js/7.4.0 (linux 4.4.0-18362-Microsoft-x64; Node.js v10.14.2)",
            "Content-Type": "application/json",
            "Content-Encoding": "gzip",
            "Accept-Encoding": "gzip,deflate"
          "timeout": 30000
        "options": {
          "warnings": null
        "id": 21
      "name": "elasticsearch-js",
      "connection": {...},
      "attempts": 0,
      "aborted": false

I thought this might be a timing issue, so I added a 1500ms delay between the delete and the index creation. That didn't seem to make a difference. I ran a separate script that did the bulk delete prior to running this script, meaning the delete would always get skipped in this script, and everything ran fine. That's not a great long term solution for us so I'd still like to figure out why this isn't working. Any help would be greatly appreciated!
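For reference, the delay attempt looked roughly like this (a sketch only; the real script processes indices in bulk, and the function and variable names here are placeholders):

```javascript
// Rough sketch of the delete -> wait -> create sequence I tried.
// `client` is an instantiated @elastic/elasticsearch Client.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function recreateIndex(client, index, body) {
  await client.indices.delete({ index });
  await sleep(1500); // pause between delete and create; made no difference
  return client.indices.create({ index, body });
}
```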

As you may have already surmised, attempting to create an index that already exists returns an HTTP 400. So it seems that either some deletes are failing, or you aren't actually attempting to delete all the indices you think you are. (In the provided code, the second scenario clearly can't happen, since a single index name is hard-coded; by the same token, this clearly isn't the actual code, so in the real code it's still possible.)

I would start by adding more checks and visibility to your code. Check the response from each attempted deletion. Add a data structure storing the names of indices that have been successfully deleted, then check whether the index names you try to delete are contained in that data structure.
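A rough sketch of the bookkeeping I mean (the function and variable names here are mine, not from your script; `client` is assumed to be an instantiated client):

```javascript
// Delete a batch of indices, recording only those whose deletion was
// actually acknowledged by the cluster.
async function deleteIndices(client, names) {
  const deleted = new Set(); // indices whose deletion was acknowledged
  for (const index of names) {
    const { body: exists } = await client.indices.exists({ index });
    if (!exists) continue; // nothing to delete
    const { body } = await client.indices.delete({ index });
    if (body.acknowledged === true) {
      deleted.add(index);
    } else {
      console.warn(`delete of ${index} was not acknowledged`);
    }
  }
  return deleted;
}
```

Before each create, you can then verify the index name is in the returned set; anything missing points at a delete that never happened or was never acknowledged.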

Thanks Glen. That's a good thought on the added checks and I'll get going with those.

The 400 error kind of threw me, because I've run into an index already existing before, and that returns index_already_exists_exception in the error response body. That isn't the case with these errors, which made me think it was something else.

I added a check to ensure each deletion was acknowledged before continuing. All of my delete operations are running successfully, but the index recreation is still failing inconsistently with a 400 Bad Request. The error body contains no information about the index already existing; it's just a plain 400 with no additional detail.
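The acknowledgement check I added looks roughly like this (simplified; the real script differs, and the names here are placeholders):

```javascript
// Delete an index and fail loudly if the cluster did not acknowledge it.
async function deleteAndVerify(client, index) {
  const { body } = await client.indices.delete({ index });
  if (body.acknowledged !== true) {
    throw new Error(`delete of ${index} was not acknowledged`);
  }
  return body;
}
```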

Can you provide the actual code used to repro and the application logs from a run where this happens?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.