Hi,
I came across a scenario where one of the account teams I am working with needs the indexed tokens to behave in two different ways. The field receives data in the form "D:546789" and is mapped as a "text" datatype. It currently uses a path_hierarchy tokenizer with ":" as the delimiter. The new requirement is to split the string at ":" and index the tokens "D" and "546789", but the numeric part must exhibit all the numeric properties: the token "546789" must behave as a "long" or "integer" type and be queryable in a range query.
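For reference, this is roughly how the field is set up today. It is only a sketch; the index, type, field, and analyzer names (my_index, my_type, code, colon_analyzer, colon_tokenizer) are placeholders, not the real ones:

PUT my_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "colon_tokenizer": {
          "type": "path_hierarchy",
          "delimiter": ":"
        }
      },
      "analyzer": {
        "colon_analyzer": {
          "type": "custom",
          "tokenizer": "colon_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "my_type": {
      "properties": {
        "code": {
          "type": "text",
          "analyzer": "colon_analyzer"
        }
      }
    }
  }
}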
To summarise: for a text field that contains a number and is analysed into tokens, is it possible to make the numeric tokens behave as long/integer values, through a token filter or any other means? The Elasticsearch version is 5.4.2.
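To illustrate, this is the kind of query I would like to be able to run against the numeric part of the value (the field name and bounds here are just examples):

GET my_index/_search
{
  "query": {
    "range": {
      "code": {
        "gte": 500000,
        "lte": 600000
      }
    }
  }
}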
Thanks
Aparna