SQL index names with dashes fail with the high-level Java client (even when escaped with quotes)

I am using the high-level Java client to submit SQL requests to our Elasticsearch cluster.
I'm seeing an issue where, if the index name contains a dash (and is therefore enclosed in quotes), the server cannot parse the request. The same query does work from Kibana when the index name is escaped with quotes.
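For reference, this is roughly the request that works from the Kibana Dev Tools console (an approximation of what I ran, with the index name escaped by backslashed double quotes inside the JSON string):

    POST /_sql
    {
      "query": "SELECT * FROM \"test-99\" limit 10"
    }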

This is easy to reproduce.

Here is my code:

    import org.apache.http.HttpHost;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    RestClient restClient = RestClient.builder(
            new HttpHost("elastic-vm", 9200, "http")).build();

    Request request = new Request("POST", "/_sql");

    // *** this fails with double quotes ***
    request.setJsonEntity("{\"query\":\"SELECT * FROM \"test-99\" limit 10\"}");

    Response response = restClient.performRequest(request);
    String responseBody = EntityUtils.toString(response.getEntity());

    System.out.println(responseBody);
    restClient.close();

EXCEPTION...

org.elasticsearch.client.ResponseException: method [POST], host [http://jag1-vm:9200], URI [/_sql], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"x_content_parse_exception","reason":"[1:10] [sql/query] failed to parse object"}],"type":"x_content_parse_exception","reason":"[1:10] [sql/query] failed to parse object","caused_by":{"type":"json_parse_exception","reason":"Unexpected character ('b' (code 98)): was expecting comma to separate Object entries\n at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 1, column: 27]"}},"status":400}
	at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:283)
	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:261)
	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)

Build info:

  "version" : {
    "number" : "7.9.1",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91",
    "build_date" : "2020-09-01T21:22:21.964974Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },

Not sure if the client can do any further processing, but the JSON entity you are sending looks like {"query":"SELECT * FROM "test-99" limit 10"}, which is not valid JSON: the unescaped inner quotes terminate the string early, which is why the parser complains partway through the value. This might work:

    request.setJsonEntity("{\"query\":\"SELECT * FROM \\\"test-99\\\" limit 10\"}");
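In other words, the inner quotes need to reach the server as literal \" characters inside the JSON string. A fuller (untested) version of the corrected snippet, reusing the restClient from above, would be:

    Request request = new Request("POST", "/_sql");
    // \\\" in the Java source becomes \" in the JSON body, i.e. an escaped
    // double quote inside the "query" string value.
    request.setJsonEntity("{\"query\":\"SELECT * FROM \\\"test-99\\\" limit 10\"}");
    Response response = restClient.performRequest(request);
    System.out.println(EntityUtils.toString(response.getEntity()));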

Thank you for catching my bug! The request now works with triple backslashes.

FYI: I did have a hack/workaround: for each request I created a temporary alias pointing to the index, queried the alias, and then deleted it. Performance was not bad, but it probably would not hold up under a lot of load.
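For anyone curious, a rough (untested) sketch of that workaround with the same low-level client looks like this; the alias name tmp_sql_alias is just an example:

    // Point a temporary, dash-free alias at the index.
    restClient.performRequest(new Request("PUT", "/test-99/_alias/tmp_sql_alias"));

    // Query through the alias; no quoting needed since the name has no dash.
    Request sqlRequest = new Request("POST", "/_sql");
    sqlRequest.setJsonEntity("{\"query\":\"SELECT * FROM tmp_sql_alias limit 10\"}");
    Response sqlResponse = restClient.performRequest(sqlRequest);
    System.out.println(EntityUtils.toString(sqlResponse.getEntity()));

    // Remove the alias again.
    restClient.performRequest(new Request("DELETE", "/test-99/_alias/tmp_sql_alias"));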
