Fuzziness not giving results in match query

Hi All,

I am using the following query to get results based on the 'id' field:

GET ast/ast_type/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "id": {
              "query": "1",
              "type": "phrase_prefix"
            }
          }
        },
        {
          "match": {
            "id": {
              "query": "m",
              "type": "phrase_prefix"
            }
          }
        }
      ]
    }
  },
  "sort": [
    {
      "priority": {
        "order": "desc"
      }
    },
    {
      "name": {
        "order": "asc"
      }
    }
  ]
}

This query gives the expected result:
"hits": [
{
"_index": "ast",
"_type": "ast_type",
"_id": "AU6GL-sL3NytkeZBXYKh",
"_score": null,
"_source": {
"id": "mobil 1",
"priority": 1,
"name": "mobil 1"
},
"sort": [
1,
"mobil 1"
]
}
]

Since I have only one document whose 'id' contains words starting with '1' and 'm', this is the result I expected.
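
For reference, that single document was indexed roughly like this (with an auto-generated _id):

POST ast/ast_type
{
  "id": "mobil 1",
  "priority": 1,
  "name": "mobil 1"
}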

But when I add fuzziness, I no longer get that document ("mobil 1") back. The must clause then looks like this:

"must": [
{
"match": {
"id":
{
"query": "1",
"type": "phrase_prefix", "fuzziness":1,"prefix_length":1
}

             }
           },
           {
              "match": {
                  "id":
                {
                    "query": "m",
                    "type": "phrase_prefix","fuzziness":1,"prefix_length":1
                }
              }
           }
        ]

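In case it helps with reproducing, the complete request after adding fuzziness looks like this (the sort part is unchanged from the first query):

GET ast/ast_type/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "id": { "query": "1", "type": "phrase_prefix", "fuzziness": 1, "prefix_length": 1 } } },
        { "match": { "id": { "query": "m", "type": "phrase_prefix", "fuzziness": 1, "prefix_length": 1 } } }
      ]
    }
  },
  "sort": [
    { "priority": { "order": "desc" } },
    { "name": { "order": "asc" } }
  ]
}
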
Can anyone please guide me on how to resolve this issue?

If needed, these are my mappings:

"mappings": {
"ast_type": {
"properties": {
"id": {
"type": "string",
"analyzer": "analyzer_startswith"
},
"name": {
"type": "string",
"analyzer": "keyword_analyzer"
},
"priority": {
"type": "long"
}
}
}

The name and priority fields are used for sorting.
And these are the corresponding settings:

"analysis": {
  "analyzer": {
    "keyword_analyzer": {
      "filter": "lowercase",
      "tokenizer": "keyword"
    },
    "analyzer_startswith": {
      "filter": [
        "lowercase"
      ],
      "tokenizer": "whitespace"
    },
    "wordAnalyzer": {
      "type": "custom",
      "filter": [
        "lowercase",
        "asciifolding",
        "nGram_filter"
      ],
      "tokenizer": "whitespace"
    },
    "whitespace_analyzer": {
      "type": "custom",
      "filter": [
        "lowercase",
        "asciifolding"
      ],
      "tokenizer": "whitespace"
    }
  },
  "filter": {
    "nGram_filter": {
      "max_gram": "20",
      "min_gram": "1",
      "type": "nGram",
      "token_chars": [
        "letter",
        "digit",
        "punctuation",
        "symbol"
      ]
    }
  }
}

I am only using two of these analyzers, analyzer_startswith and keyword_analyzer, in the mapping.
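
In case it is useful for debugging, the tokens that analyzer_startswith produces for the stored value can be checked with the _analyze API; with the whitespace tokenizer and lowercase filter I would expect it to return the two tokens "mobil" and "1":

GET ast/_analyze?analyzer=analyzer_startswith&text=mobil%201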