Increasing Field Capacity Effectively

Hello,

I am running into an issue with the maximum size for a single field/term.

I set my fields to "not_analyzed".
As you know, that means: "Index this field so it is searchable, but index the value exactly as specified. Do not analyze it." In addition, the maximum size for a single term in the underlying Lucene index is 32766 bytes.

When I try to index a value larger than 32766 bytes, that limit kicks in and Elasticsearch does not accept (for example) a 40000-character string.
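
For reference, this is roughly the kind of request that gets rejected (the index, type and field names are only placeholders, matching the mappings below):

PUT /my_index/my_type/1
{
  "status_code": "<a string of around 40000 characters>"
}

Elasticsearch refuses it with an error from Lucene complaining about an immense term in that field.
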
After doing some research, I think I have two options:

1-) Keep the field "not_analyzed", but add the ignore_above setting as below. With this, documents with values over 32766 are accepted, but those long values are not indexed, so they are not searchable (which is fine if the field does not have to be searchable for them). There is a quick sanity check of this behaviour after option 2 below.

PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "status_code": {
          "type": "string",
          "index": "not_analyzed",
          "ignore_above": 32766
        }
      }
    }
  }
}

2-) Set the field's index option to "no", as below. It will accept values over 32766 bytes, but the field will not be searchable or queryable at all.

PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "status_code": {
          "type": "string",
          "index": "no"
        }
      }
    }
  }
}
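
For what it is worth, this is how I would sanity-check option 1 (I have not tried it on 1.5.2, so treat it as a sketch with the same placeholder names as above). With the ignore_above mapping in place, the same kind of request that was rejected before should now be accepted; the oversized status_code value is simply skipped at index time, so an exact term search on that value should find nothing:

PUT /my_index/my_type/1
{
  "status_code": "<a string of around 40000 characters>"
}

The full value should still be kept in _source, so it comes back on a plain GET:

GET /my_index/my_type/1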

My questions are:
- Do these options have any CPU or memory impact on my cluster?
- Do you have any other suggestions for handling this field capacity?

Thank you for your answers.

UPDATE: I am on 1.5.2, so that means I can't use ignore_above.
