Hi @nutmix
What version?
There are 2 fundamental items we need in order to understand your issue:
- What is the mapping for the field you are searching? Please share the mapping.
- EXACTLY how are you searching? Are you searching in Discover? If so, exactly how? Are you searching via DSL (the query language)? If so, exactly how?
Share these items. Perhaps we can help.
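For the mapping, you can pull just the field in question, for example (my-logs and message are placeholders for your actual index and field names):
GET my-logs/_mapping/field/message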
It is probably the standard tokenizer. The standard tokenizer provides grammar-based tokenization (based on the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29) and works well for most languages.
You can check the tokenization with:
POST /_analyze
{
"analyzer": "standard",
"text": "2025-07-22T13:38:33,460Z INFO http-nio-8080-exec-1 c.x.x.filter.AccessLogFilter [correlationToken:KCC-205ZHEUAGM-659243788] => WebApi AccessLogger: path:/x/getM"
}
You will get back the tokens...
{
"tokens": [
{
"token": "2025",
"start_offset": 0,
"end_offset": 4,
"type": "<NUM>",
"position": 0
},
...
Here is the relevant token. The standard analyzer / tokenizer does not break on :, so correlationtoken and kcc come through together as a single term:
{
"token": "correlationtoken:kcc",
"start_offset": 81,
"end_offset": 101,
"type": "<ALPHANUM>",
"position": 12
},
BUT it DOES tokenize on -:
{
"token": "205zheuagm",
"start_offset": 102,
"end_offset": 112,
"type": "<ALPHANUM>",
"position": 13
},
{
"token": "659243788",
"start_offset": 113,
"end_offset": 122,
"type": "<NUM>",
"position": 14
},
This is why your searches do not work.
Your options are:
- Change the tokenizer or build a custom analyzer (see the analyzer sketch after this list).
- Change the way you search, e.g.
message : "correlationtoken:KCC"
(a DSL equivalent is sketched after this list).
- On ingest, replace : with a - or something else that will separate the tokens (see the pipeline sketch after this list).
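For the first option, here is a minimal sketch of a custom analyzer that uses a mapping char filter to turn : into - before the standard tokenizer runs, so the pieces get split apart at index time. The index name my-logs-custom, the char_filter / analyzer names, and the message field are placeholders for illustration:
PUT my-logs-custom
{
  "settings": {
    "analysis": {
      "char_filter": {
        "colon_to_dash": {
          "type": "mapping",
          "mappings": [": => -"]
        }
      },
      "analyzer": {
        "split_on_colon": {
          "type": "custom",
          "char_filter": ["colon_to_dash"],
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "analyzer": "split_on_colon"
      }
    }
  }
}
You can verify it with the same _analyze call as above, pointed at the new index and analyzer.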
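For the second option, the DSL equivalent of that Discover / KQL search is a match_phrase. A sketch, assuming the field is an analyzed text field named message in an index called my-logs; the query text is run through the same analyzer, so it matches the indexed term correlationtoken:kcc:
GET my-logs/_search
{
  "query": {
    "match_phrase": {
      "message": "correlationtoken:KCC"
    }
  }
}
Searching the full value as a phrase, e.g. "correlationToken:KCC-205ZHEUAGM-659243788", should also work, since it analyzes into the same consecutive tokens shown above.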
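For the third option, a minimal sketch of an ingest pipeline using the gsub processor (the pipeline name and field name are placeholders). Note that this rewrites the stored _source, and replacing every : will also touch the timestamp inside the message, so you may want a more targeted pattern:
PUT _ingest/pipeline/replace-colons
{
  "description": "Replace : with - so the standard tokenizer splits the pieces apart",
  "processors": [
    {
      "gsub": {
        "field": "message",
        "pattern": ":",
        "replacement": "-"
      }
    }
  ]
}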
Or, as Mark offers below, use a wildcard query.
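A sketch of that wildcard approach, assuming your mapping has a keyword sub-field (message.keyword) and the lines fit under its ignore_above limit; wildcards with a leading * can be slow on large indices:
GET my-logs/_search
{
  "query": {
    "wildcard": {
      "message.keyword": {
        "value": "*KCC-205ZHEUAGM-659243788*"
      }
    }
  }
}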