HTTP DELETE request takes forever

I am currently using the Insomnia REST client.

When I make GET requests to my Elasticsearch server, I receive HTTP responses very promptly.

However, DELETE requests take forever and never actually finish.

The index I wish to delete currently contains 153 documents. I have only one Elasticsearch node.
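
For reference, counts like these can be checked with requests along the following lines (the index name drill and the host localhost:9200 are assumed from the rest of this thread):

curl -XGET 'http://localhost:9200/drill/_count?pretty'
curl -XGET 'http://localhost:9200/_cat/nodes?v'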

Has anyone encountered this before? What can I do to fix the issue?

Can you share the Elasticsearch logs from around the time you're making this request? If you have multiple nodes, I think the ones from the elected master will be the most useful to start with.
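
If it helps, the cat master API shows which node is currently the elected master (this assumes a node reachable on localhost:9200, as in the commands below):

curl -XGET 'http://localhost:9200/_cat/master?v'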

It'd be good to rule out a problem in the client you're using. Can you reproduce this using a different client, e.g. running curl -XDELETE http://localhost:9200/drill from the command line?

Thank you for your reply. I just tried submitting an HTTP DELETE request via cURL, and I am still awaiting a response.

Submitting an HTTP GET request posed no issues; I received a response very quickly.

I just looked in my {extract.path}/logs folder and could not find any logs from Elasticsearch around the time I was making the DELETE requests (from both Insomnia and cURL). Please let me know whether I am looking in the wrong place.
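
In case it is useful, one way to confirm where a node writes its logs is to ask the nodes info API for its path settings (this assumes the node is reachable on localhost:9200 and a reasonably default configuration):

curl -XGET 'http://localhost:9200/_nodes/settings?filter_path=**.path.logs&pretty'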

Hi Miao,

Can you please tell us which Elasticsearch version you are using?

Also, can you try with:

curl -v -XDELETE http://localhost:9200/drill

and

curl -v -XDELETE http://localhost:9200/does_not_exist

please?

@tanguy, thank you for your reply.

This was the output after running curl -v -XDELETE http://localhost:9200/drill:

PS C:\Users\Me\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe -v -XDELETE http://localhost:9200/drill
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9200 (#0)
> DELETE /drill HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.64.1
> Accept: */*

This was the output after running curl -v -XDELETE http://localhost:9200/does_not_exist:

PS C:\Users\Me\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe -v -XDELETE http://localhost:9200/does_not_exist
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9200 (#0)
> DELETE /does_not_exist HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< content-type: application/json; charset=UTF-8
< content-length: 413
<
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [does_not_exist]","resource.type":"index_or_alias","resource.id":"does_not_exist","index_uuid":"_na_","index":"does_not_exist"}],"type":"index_not_found_exception","reason":"no such index [does_not_
exist]","resource.type":"index_or_alias","resource.id":"does_not_exist","index_uuid":"_na_","index":"does_not_exist"},"status":404}* Connection #0 to host localhost left intact
* Closing connection 0

I am using Elasticsearch version 7.0.1.

Thanks!

Sadly, I can't reproduce the issue locally using the following versions:

└─ $ ▶ curl --version
curl 7.52.1 (x86_64-pc-linux-gnu) 

└─ $ ▶ curl -XGET 'http://localhost:9200?filter_path=version.number'
{
  "version" : {
    "number" : "7.0.1"
  }
}

A first deletion returns a 404 as expected:

└─ $ ▶ curl -v -XDELETE 'http://localhost:9200/drill'
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9200 (#0)
> DELETE /drill HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< content-type: application/json; charset=UTF-8
< content-length: 359
< 
* Curl_http_done: called premature == 0
* Connection #0 to host localhost left intact
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [drill]","resource.type":"index_or_alias","resource.id":"drill","index_uuid":"_na_","index":"drill"}],"type":"index_not_found_exception","reason":"no such index [drill]","resource.type":"index_or_alias","resource.id":"drill","index_uuid":"_na_","index":"drill"},"status":404}

And if I create the drill index:

└─ $ ▶ curl -XPUT 'http://localhost:9200/drill'
{"acknowledged":true,"shards_acknowledged":true,"index":"drill"}

Then the index deletion works correctly:

└─ $ ▶ curl -v -XDELETE 'http://localhost:9200/drill'
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9200 (#0)
> DELETE /drill HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 21
< 
* Curl_http_done: called premature == 0
* Connection #0 to host localhost left intact
{"acknowledged":true}

I saw that you are using PowerShell; I suspect that your issue is related to curl being an alias for the Invoke-WebRequest cmdlet. You should try calling the cmdlet directly or using a different shell on Windows.
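
As a rough sketch, this is how the alias can be checked and bypassed in Windows PowerShell (the curl alias exists in Windows PowerShell 5.x but not in PowerShell Core, and Remove-Item only affects the current session):

Get-Alias curl
# typically shows curl -> Invoke-WebRequest in Windows PowerShell 5.x

Remove-Item Alias:curl
curl.exe -v -XDELETE http://localhost:9200/drill

# or call the cmdlet directly
Invoke-WebRequest -Method Delete -Uri http://localhost:9200/drill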

One difference between the two requests is that the GET reads from an existing index, while the DELETE requires a cluster state update. If you cannot find any issue with the client (e.g. by running the commands from Kibana), it may be worthwhile to provide some information about the size of your cluster and the number of indices and shards it holds. I would also check the Elasticsearch logs for signs of long or frequent GC.
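
For reference, a few requests that surface that kind of information (assuming a node reachable on localhost:9200; long or frequent GC also shows up in the main Elasticsearch log and the gc.log files in the log directory):

curl -XGET 'http://localhost:9200/_cluster/health?pretty'
curl -XGET 'http://localhost:9200/_cat/indices?v'
curl -XGET 'http://localhost:9200/_cat/shards?v'
curl -XGET 'http://localhost:9200/_nodes/stats/jvm?filter_path=**.gc&pretty'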

Thank you very much to everyone for trying to help me.

I restarted the server hosting my Elasticsearch node, and now all HTTP requests are processed promptly and correctly.
