Special character is tokenized but not searchable

I have defined my analyzer as below:
"analysis": {
"analyzer": {
"my_ngram_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "my_ngram_tokenizer"
}
},
"tokenizer": {
"my_ngram_tokenizer": {
"token_chars": [
"letter",
"digit",
"punctuation",
"symbol",
"custom"
],
"custom_token_chars": [
"#"
],
"min_gram": "1",
"type": "ngram",
"max_gram": "2"
}
}
},
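
For the analyzer to matter at query time, the field mapping has to reference it. A minimal mapping sketch (assuming the field is wired to this analyzer at index time; I have not pasted my actual index template here) would look like:

```json
"mappings": {
  "properties": {
    "os": {
      "properties": {
        "log": {
          "properties": {
            "message": {
              "type": "text",
              "analyzer": "my_ngram_analyzer"
            }
          }
        }
      }
    }
  }
}
```

Note that with no explicit "search_analyzer", the same analyzer is applied to the query text as well.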

I ran a test to see how the special character in my data is analyzed:
POST .ds-os-logs-test-2024.03.19-000027/_analyze
{
  "analyzer": "my_ngram_analyzer",
  "text": "## For ObjectID:332270810058 getting trace BV ID:212318 BV Value:337040313158 Found in Cache?:T ----- PL/SQL Call Stack -----"
}

{
  "tokens": [
    {
      "token": "#",
      "start_offset": 0,
      "end_offset": 1,
      "type": "word",
      "position": 0
    },
    {
      "token": "##",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 1
    },
    {
      "token": "#",
      "start_offset": 1,
      "end_offset": 2,
      "type": "word",
      "position": 2
    },
    {
      "token": "f",
      "start_offset": 3,
      "end_offset": 4,
      "type": "word",
      "position": 3
    },
    {
      "token": "fo",
      "start_offset": 3,
      "end_offset": 5,
      "type": "word",
      "position": 4
    },
    {
      "token": "o",
      "start_offset": 4,
      "end_offset": 5,
      "type": "word",
      "position": 5
    },
    {
      "token": "or",
      "start_offset": 4,
      "end_offset": 6,
      "type": "word",
      "position": 6
    },
    ...
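
That token stream is what I expect from a 1-2 character ngram tokenizer. A quick Python sketch (a rough approximation of the tokenizer plus the lowercase filter, not Elasticsearch itself) reproduces the same tokens in the same order:

```python
def ngram_tokens(text, min_gram=1, max_gram=2):
    # Approximation of the custom analyzer above: every non-whitespace
    # character (letter, digit, punctuation, symbol, plus the custom '#')
    # is a token character, so runs are split on whitespace; each run is
    # lowercased (the analyzer's lowercase filter) and emitted as 1- and
    # 2-grams in the order _analyze reports them.
    tokens = []
    for run in text.split():
        run = run.lower()
        for i in range(len(run)):
            for n in range(min_gram, max_gram + 1):
                if i + n <= len(run):
                    tokens.append(run[i:i + n])
    return tokens

print(ngram_tokens("##"))  # ['#', '##', '#'] - matches the _analyze output
```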

But when I search in the Kibana Discover search bar, whether as os.log.message : "##", a bare ##, /#/, or "#", I get no records.

My os.log.message field is of type 'text'. I cannot change it to 'keyword' because wildcard searches, and even searches for numbers, become difficult; I need it to stay 'text'.

Now that I know my analyzer accepts #, why does searching from the Kibana Discover panel return nothing?
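
For anyone reproducing this, a direct Query DSL equivalent of that search (against the same backing index as the _analyze call above) would be something like:

```json
GET .ds-os-logs-test-2024.03.19-000027/_search
{
  "query": {
    "match": {
      "os.log.message": "##"
    }
  }
}
```

A match query analyzes the query text with the field's search analyzer, so if this returns hits while the Discover search bar does not, the difference would be in how KQL builds its query rather than in the index itself.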

@elastic_team kindly advise.