With the default dynamic mapping, the generated `keyword` sub-field uses `ignore_above: 256`, so strings longer than 256 characters are silently dropped from the list of indexed terms (the value stays in `_source`, but it cannot be matched as an exact term).
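A minimal sketch of this behaviour, assuming the official Python client (8.x), a local cluster at `http://localhost:9200`, and a hypothetical `demo` index left to dynamic mapping:

```python
# Demonstrates that a dynamically mapped keyword sub-field silently skips
# values longer than 256 characters (ignore_above: 256 by default).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

long_value = "x" * 300  # longer than the default ignore_above of 256

# Dynamic mapping creates "message" as text with a "message.keyword"
# sub-field that carries ignore_above: 256.
es.index(index="demo", id="1", document={"message": long_value})
es.indices.refresh(index="demo")

# An exact term query on the keyword sub-field finds nothing, because the
# 300-character value was never added to the indexed terms.
resp = es.search(index="demo", query={"term": {"message.keyword": long_value}})
print(resp["hits"]["total"]["value"])  # prints 0
```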
Note that even if you raise the Elasticsearch limit, you cannot exceed Lucene's hard limit of 32766 bytes (roughly 32 KB) for a single token.
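Raising the limit means mapping the field explicitly and choosing your own `ignore_above`. A sketch, again assuming the 8.x Python client and hypothetical index and field names:

```python
# Explicitly raise ignore_above on a keyword field instead of relying on the
# dynamic-mapping default of 256.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="big_terms",
    mappings={
        "properties": {
            # Index exact values up to 8000 characters. Whatever value you
            # pick, a single indexed term can never exceed Lucene's cap of
            # 32766 bytes (bytes, not characters, so multi-byte UTF-8 text
            # reaches the cap sooner).
            "url": {"type": "keyword", "ignore_above": 8000}
        }
    },
)
```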
If you’re searching for exact large values (as opposed to parts of large values), then it may be more efficient to index hashes of the content and deal with the rare false positives (hash collisions) in your client.
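One way that could look in practice, as a sketch only: index a SHA-256 digest of each large value as a `keyword`, query by digest, and let the client discard any collisions by comparing the stored original. Index and field names here are hypothetical, and the 8.x Python client is assumed:

```python
# Hash-based exact matching for large values, with client-side verification.
import hashlib
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def content_hash(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# One-time setup: the fixed-length hash is the only indexed term; the body is
# kept in _source (unindexed) so the client can verify candidate matches.
es.indices.create(
    index="blobs",
    mappings={"properties": {
        "body_hash": {"type": "keyword"},
        "body": {"type": "text", "index": False},
    }},
)

def index_doc(doc_id: str, body: str) -> None:
    es.index(index="blobs", id=doc_id,
             document={"body": body, "body_hash": content_hash(body)})

def find_exact(body: str) -> list[str]:
    resp = es.search(index="blobs",
                     query={"term": {"body_hash": content_hash(body)}})
    # Client-side check: drop false positives (hash collisions) by comparing
    # each candidate's stored body against the query value.
    return [hit["_id"] for hit in resp["hits"]["hits"]
            if hit["_source"]["body"] == body]
```

The trade-off is that only whole-value equality is supported, but the indexed term is always a short fixed-length digest, well inside both the `ignore_above` and Lucene limits.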