Elasticsearch doc update fails with illegal_argument_exception

Hi,
We are updating Elasticsearch documents using the ES python client, and we are facing the issue below during the update.

TransportError(400, u'illegal_argument_exception', u'[93f58351-0db1-423e-ae5c-ebb1c3dea103][172.17.0.2:9300][indices:data/write/update[s]]')

The update is done using a script (Painless). We have tried this update many times on the same kind of documents, and it fails for random data. But when we run the script that failed manually (using a REST client), it works fine.
Below is a sample document that we are trying to update.

{
	"took": 0,
	"timed_out": false,
	"_shards": {
		"total": 5,
		"successful": 5,
		"skipped": 0,
		"failed": 0
	},
	"hits": {
		"total": 1,
		"max_score": 2.4849067,
		"hits": [
			{
				"_index": "nexus",
				"_type": "nexus",
				"_id": "QQXOqGIBFnY0FvUDvj8x",
				"_score": 2.4849067,
				"_source": {
					"timestamp": "2018-04-09 08:09:03",
					"artifacts": [
						{
							"artifact_type": "pom",
							"group_id": "com.core",
							"artifact_id": "product_bom_reactor",
							"artifact_version": "test_225"
						},
						{
							"artifact_type": "pom",
							"group_id": "com.core",
							"artifact_id": "product_sdk",
							"artifact_version": "test_225"
						},
						{
							"artifact_type": "jar",
							"group_id": "com.core",
							"artifact_classifier": "jacoco-full-report",
							"artifact_id": "product_ut",
							"artifact_version": "test_225"
						},
						{
							"artifact_type": "ear",
							"group_id": "com.core",
							"artifact_classifier": "rm_sec",
							"artifact_id": "product_ear",
							"artifact_version": "test_225"
						},
						{
							"artifact_type": "ear",
							"group_id": "com.core",
							"artifact_classifier": "rm_nosec",
							"artifact_id": "product_ear",
							"artifact_version": "test_225"
						},
						{
							"artifact_type": "pom",
							"group_id": "com.core",
							"artifact_id": "product_bom",
							"artifact_version": "test_225"
						},
						{
							"artifact_type": "pom",
							"group_id": "com.core",
							"artifact_id": "product_reactor",
							"artifact_version": "test_225"
						}
					]
				}
			}
		]
	}
}

For each element in _source.artifacts, the script below runs and updates the matching element with new fields:

if artifact_classifier is None:
    condition = (
        "ctx._source.artifacts[i].group_id == '" + group_id + "' && "
        "ctx._source.artifacts[i].artifact_id == '" + artifact_id + "' && "
        "ctx._source.artifacts[i].artifact_version == '" + artifact_version + "' && "
        "ctx._source.artifacts[i].artifact_type == '" + artifact_type + "'"
    )
else:
    condition = (
        "ctx._source.artifacts[i].group_id == '" + group_id + "' && "
        "ctx._source.artifacts[i].artifact_id == '" + artifact_id + "' && "
        "ctx._source.artifacts[i].artifact_version == '" + artifact_version + "' && "
        "ctx._source.artifacts[i].artifact_type == '" + artifact_type + "' && "
        "ctx._source.artifacts[i].artifact_classifier == '" + artifact_classifier + "'"
    )

body = {
    "script": {
        "source": (
            "for(int i=0; i<ctx._source.artifacts.length; i++) { "
            "if(" + condition + "){"
            "ctx._source.artifacts[i].is_deleted = '" + str(is_deleted) + "'; "
            "ctx._source.artifacts[i].deletion_timestamp = '" + str(now) + "'}}"
        ),
        "lang": "painless"
    }
}

Environment:
Elasticsearch, Python client 6.1.1

Why do we get this error randomly?

Thanks in Advance,

Hey,

this is super hard to tell without the full error message. Do you have access to the full error message when the error response returns?

A wild guess could be that you are running at capacity in your thread pools, so that some operations get rejected and thus return an error - but this error would be independent of the operation that was supposed to be executed, and thus seemingly random.

An error message would be worth more than a thousand words here :slight_smile: - I am not a python client user, so I cannot tell you how to access it.

--Alex

Hey Alex,
Yes, I agree the error message is important, but the above-mentioned error is all the python client returns. Is there any other way to get the full error?

Thanks.

Also, regarding "you are running at capacity in your thread pools" - what does capacity mean here?
Is it something configurable?
The operation fails in both multi-threaded and single-threaded runs.

OK, after enabling logs and analyzing them, we found that the following is the actual cause of the issue.

org.elasticsearch.transport.RemoteTransportException: [p6UQ1br][10.32.0.1:9300][indices:data/write/update[s]]

Caused by: java.lang.IllegalArgumentException: failed to execute script
Caused by: org.elasticsearch.script.GeneralScriptException: Failed to compile inline script [for(int i=0; i<ctx._source.artifacts.length; i++) { if(ctx._source.artifacts[i].group_id == 'com.core' && ctx._source.artifacts[i].artifact_id == 're_ut' && ctx._source.artifacts[i].artifact_version == 'test_233' && ctx._source.artifacts[i].artifact_type == 'jar' && ctx._source.artifacts[i].artifact_classifier == 'ut-report'){ctx._source.artifacts[i].is_deleted = 'True'; ctx._source.artifacts[i].deletion_timestamp = '2018-04-09 16:38:00'}}] using lang [painless]

Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_rate] setting
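
For reference, the limit named in that message is a dynamic cluster setting. Raising it only buys headroom rather than fixing the root cause, but a sketch of the settings body looks like this (the "150/5m" value is just an example, not something we actually applied):

```python
# Dynamic cluster setting behind the compilation-rate circuit breaker
# (default in 6.x: 75 compilations per 5 minutes). Raising it only
# postpones the problem if scripts are still compiled per request.
settings_body = {
    "transient": {
        "script.max_compilations_rate": "150/5m"
    }
}

# With the python client this would be applied as (not run here):
#   es.cluster.put_settings(body=settings_body)
```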

Fixed it by using params in the script instead of hardcoding values, as suggested here: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/modules-scripting-using.html#prefer-params
It is working perfectly now.
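
For anyone hitting the same issue, the parameterized version looks roughly like this (a sketch with hypothetical values standing in for the real ones; the classifier check from the else branch can be added the same way via params). Because the script source is now a constant string, Elasticsearch compiles it once and serves it from the compilation cache afterwards:

```python
# Hypothetical values standing in for the real ones from our run.
group_id = "com.core"
artifact_id = "re_ut"
artifact_version = "test_233"
artifact_type = "jar"
is_deleted = True
now = "2018-04-09 16:38:00"

# The source string is identical on every call, so only one compilation
# happens instead of one per distinct value combination.
body = {
    "script": {
        "lang": "painless",
        "source": (
            "for (int i = 0; i < ctx._source.artifacts.length; i++) {"
            " if (ctx._source.artifacts[i].group_id == params.group_id"
            " && ctx._source.artifacts[i].artifact_id == params.artifact_id"
            " && ctx._source.artifacts[i].artifact_version == params.artifact_version"
            " && ctx._source.artifacts[i].artifact_type == params.artifact_type) {"
            " ctx._source.artifacts[i].is_deleted = params.is_deleted;"
            " ctx._source.artifacts[i].deletion_timestamp = params.deletion_timestamp;"
            " }"
            "}"
        ),
        "params": {
            "group_id": group_id,
            "artifact_id": artifact_id,
            "artifact_version": artifact_version,
            "artifact_type": artifact_type,
            "is_deleted": str(is_deleted),
            "deletion_timestamp": str(now),
        },
    }
}

# The update call itself is unchanged, e.g.:
#   es.update(index="nexus", doc_type="nexus", id=doc_id, body=body)
```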

Thanks.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.