502 Timeout with Delete by query

Hello there!

I have a problem when I run this query:

```
POST /logs-nginx.cpcu.prod/_delete_by_query
{
  "query": {
    "range": {
      "@timestamp": {
        "lt": "now-30d"
      }
    }
  }
}
```

Basically, I would like to delete documents older than 30 days from one of my indices without deleting the index itself, but I get a 502 timeout error.
Do you have any ideas?

Elasticsearch never returns 502, so this response must be coming from something else. Send the request directly to Elasticsearch instead.
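For example, assuming the cluster is reachable on `localhost:9200` (adjust the host, port, and any authentication to your setup), the same request could be sent with curl, bypassing Kibana:

```
curl -X POST "localhost:9200/logs-nginx.cpcu.prod/_delete_by_query" \
  -H 'Content-Type: application/json' \
  -d '{
    "query": {
      "range": {
        "@timestamp": {
          "lt": "now-30d"
        }
      }
    }
  }'
```

If this succeeds while the Dev Tools request fails, the timeout is being imposed by something between Kibana and Elasticsearch rather than by Elasticsearch itself.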


```
{
  "statusCode": 502,
  "error": "Bad Gateway",
  "message": "Client request timeout"
}
```
I entered this request in Dev Tools, so maybe it's Kibana that returns the error. From what I can see, it's a timeout problem. Do you know if I can configure the timeout duration in the request?

```
{
  "statusCode": 502,
  "error": "Bad Gateway",
  "message": "Client request timeout"
}
```

It's not Elasticsearch that is timing out, so there's no Elasticsearch configuration change that will help here. If it's Kibana, then I expect it is indeed configurable; it'd be best to ask in the Kibana forum for help with that.
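If Kibana is what's timing out, the setting to look at is `elasticsearch.requestTimeout` in `kibana.yml`. A minimal sketch (the 300000 ms value is an arbitrary example, not a recommendation):

```yaml
# kibana.yml — timeout Kibana applies to requests it proxies to
# Elasticsearch, in milliseconds (the default is 30000)
elasticsearch.requestTimeout: 300000
```

Note that any proxy or load balancer in front of Elasticsearch may impose its own, separate timeout that would also need raising.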

Where is your Elasticsearch cluster deployed? Do you have any load balancer or proxy between Kibana and Elasticsearch?

My cluster is deployed on AWS EC2 and containerized with Docker. I have a proxy that is set up on Rancher. When I try delete_by_query on another index, it works. Maybe the problem is that the indices are too large.

A smaller index will mean the delete-by-query runs faster for sure, but IMO the fundamental problem is that there's something in your system which imposes an unnecessarily restrictive timeout. Elasticsearch can run jobs like a delete-by-query over enormous indices, with the process taking hours or days.
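For long-running jobs like this, one way to sidestep any intermediary's timeout entirely is to run the delete-by-query asynchronously with `wait_for_completion=false`, which returns a task id immediately instead of holding the connection open. A sketch (the task id shown in the follow-up request is a placeholder for whatever the first call returns):

```
POST /logs-nginx.cpcu.prod/_delete_by_query?wait_for_completion=false
{
  "query": {
    "range": {
      "@timestamp": {
        "lt": "now-30d"
      }
    }
  }
}

GET _tasks/<task_id_from_response>
```

The deletion then proceeds in the background, and you can poll the tasks API for progress or cancel it without any client-side timeout coming into play.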
