Failed to complete action: delete_indices

Hello All,

I received this error when executing a delete_indices action:

Failed to complete action: delete_indices. <class 'curator.exceptions.FailedExecution'>: Exception encountered. Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: TransportError(400, u'illegal_argument_exception', u'[-U9WmRx][x.x.x.x:9300][indices:admin/delete]')

Can anyone help me find out why this is happening? Thanks in advance.

A 400 error indicates that the client made an illegal request (hence the illegal_argument_exception).

The bigger concern to me is that it is a TransportError and the port listed after the IP is 9300, which is the Transport Protocol, and not the normal HTTP/REST protocol which Curator uses to interact with Elasticsearch. This suggests to me that Curator successfully made a request to a node (using HTTP), but the node-to-node traffic replied with that error. indices:admin/delete suggests that the user Curator is connecting as does not have the proper credentials to perform the delete.

These are off-the-top-of-my-head speculations. Without more debug logging, I can't do more. You may even need to set blacklist: [] (in addition to loglevel: DEBUG) so the elasticsearch request log lines are not hidden.
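For reference, both settings live in the logging section of curator.yml. A minimal sketch (the hosts and port values are placeholders for your own cluster):

```yaml
# curator.yml -- sketch; client values are placeholders
client:
  hosts:
    - 127.0.0.1
  port: 9200
  timeout: 30
logging:
  loglevel: DEBUG   # verbose logging for troubleshooting
  blacklist: []     # empty list: do not suppress any loggers
```

By default Curator blacklists the elasticsearch and urllib3 loggers; emptying the list lets the raw request/response log lines through.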

Hello Aaron,

We encountered another error which is almost the same, but with a different exception:

Failed to complete action: delete_indices. <class 'curator.exceptions.FailedExecution'>: Exception encountered. Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: ConnectionTimeout caused by - ReadTimeoutError(HTTPSConnectionPool(I_REMOVED_THIS_PART', port=443): Read timed out. (read timeout=30))

By the way, we are using the AWS managed service, so we cannot create a user in AWS Elasticsearch.

(not directly related):

Did you look at cloud.elastic.co and https://aws.amazon.com/marketplace/pp/B01N6YCISK ?

Hi David,

Our instances are purely in AWS now, so I'm afraid this will not be approved.

Long story short: the HTTP connection pool you are connecting through has a timeout value. If you set the timeout higher than 30 in your client definition (I understand you're using a Lambda) and the error still happens, then there's nothing I can do for you.

The timeout value goes where you set up the client, e.g. client = elasticsearch.Elasticsearch(host=xxx, port=xxx, timeout=30).
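In a Lambda handler that might look like the sketch below. The endpoint, port, and the value 60 are all assumptions for illustration, not your real settings; the point is only that timeout is raised above the 30-second default shown in the ReadTimeoutError:

```python
# Sketch only: the endpoint is a hypothetical AWS Elasticsearch domain,
# and 60 is an arbitrary value higher than the failing 30-second default.
from elasticsearch import Elasticsearch

client = Elasticsearch(
    host="your-domain.us-east-1.es.amazonaws.com",  # placeholder endpoint
    port=443,        # the HTTPSConnectionPool in your error used 443
    use_ssl=True,
    timeout=60,      # raise from the default that produced the read timeout
)
```

If the reads still time out with a larger value, the problem is more likely on the cluster side (e.g. an overloaded domain) than in the client.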

Cloud by elastic is running on AWS as well. What do you mean?