Highlighting Offset Issue with Vietnamese

We are using Elasticsearch 2.2.0 with the compatible Vietnamese analysis plugin.
When we run the following query:

{
  "query": {
    "bool": {
      "should": {
        "multi_match": {
          "query": "chữ",
          "fields": [
            "message",
            "user"
          ],
          "analyzer": "default_search"
        }
      }
    }
  },
  "highlight": {
    "pre_tags": [
      "<b>"
    ],
    "post_tags": [
      "</b>"
    ],
    "fragment_size": 0,
    "number_of_fragments": 0,
    "require_field_match": false,
    "fields": {
      "message": {},
      "user": {}
    }
  }
}

We get the following result. The search itself works and retrieves the right document, but the highlighting offsets are wrong, so the <b> tags wrap the wrong words:

{
    "took" : 14,
    "timed_out" : false,
    "_shards" : {
        "total" : 200,
        "successful" : 200,
        "failed" : 0
    },
    "hits" : {
        "total" : 1,
        "max_score" : 0.030578919,
        "hits" : [{
                "_index" : "fts-vietnamese",
                "_type" : "Document",
                "_id" : "AVKcb6Xy0-uCokJzleqC",
                "_score" : 0.030578919,
                "_source" : {
                    "streamId" : 1,
                    "language" : "vietnamese",
                    "message" : "Có một vấn đề là khi sent text messages dùng tiếng Việt hoặc email qua người khác, chữ tiếng Việt bị mất dấu hoặc mất chữ. Chẳng hạn như chữ “ôm” thì thành",
                    "doc_id" : "VietnameseWords"
                },
                "highlight" : {
                    "message" : [
                        "Có một vấn đề là khi sent text messages dùng tiếng Việt hoặc email <b>qua</b> người khác, chữ tiếng V<b>iệt</b> bị mất dấu h<b>oặc</b> mất chữ. Chẳng hạn như chữ “ôm” thì thành"
                    ]
                }
            }
        ]
    }
}
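
For anyone who wants to reproduce this end to end, here is a minimal sketch in Python (using the third-party requests library). It assumes Elasticsearch is reachable on localhost:9200 and reuses the index name fts-vietnamese from the response above:

# Minimal reproduction sketch. Assumes ES on localhost:9200 and the index
# name "fts-vietnamese" (taken from the search response above).
import json
import requests  # third-party HTTP client: pip install requests

query = {
    "query": {"bool": {"should": {"multi_match": {
        "query": "chữ",
        "fields": ["message", "user"],
        "analyzer": "default_search",
    }}}},
    "highlight": {
        "pre_tags": ["<b>"],
        "post_tags": ["</b>"],
        "fragment_size": 0,
        "number_of_fragments": 0,
        "require_field_match": False,
        "fields": {"message": {}, "user": {}},
    },
}

resp = requests.post("http://localhost:9200/fts-vietnamese/_search", json=query)
for hit in resp.json()["hits"]["hits"]:
    # Print the highlighted fragments so the misplaced <b> tags are visible.
    print(json.dumps(hit["highlight"], ensure_ascii=False, indent=2))

Running this should print the same mis-tagged fragment shown in the response above.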

@Duy_Do, who created the Vietnamese plugin, thinks the problem is in the ES highlight parser, which does not handle Vietnamese characters properly.
Does anyone have a solution to this?
Thanks!

The analyzer is responsible for calculating the correct offsets for each term.
In this particular case, inspecting the offsets with GET /index/type/doc_id/_termvectors?pretty=true&fields=message shows that they are incorrect. For the term "chữ" the term vectors report:

            "chữ": {
               "term_freq": 3,
               "tokens": [
                  {
                     "position": 17,
                     "start_offset": 67,
                     "end_offset": 70
                  },
                  {
                     "position": 33,
                     "start_offset": 94,
                     "end_offset": 97
                  },
                  {
                     "position": 42,
                     "start_offset": 110,
                     "end_offset": 113
                  }
               ]
            }
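
To double-check this, here is a small Python sketch that compares the offsets reported by _termvectors with the actual code-point offsets of "chữ" in the _source string. The message and the reported offsets are copied verbatim from the responses above; this assumes the stored text is NFC-normalized, as shown:

message = ("Có một vấn đề là khi sent text messages dùng tiếng Việt hoặc email "
           "qua người khác, chữ tiếng Việt bị mất dấu hoặc mất chữ. "
           "Chẳng hạn như chữ “ôm” thì thành")

# start/end offsets reported by _termvectors for the term "chữ"
reported = [(67, 70), (94, 97), (110, 113)]

# Collect the real offsets of every occurrence of "chữ" in the message.
actual, pos = [], message.find("chữ")
while pos != -1:
    actual.append((pos, pos + len("chữ")))
    pos = message.find("chữ", pos + 1)

for (r_start, r_end), (a_start, a_end) in zip(reported, actual):
    print("reported %d-%d -> %r, actual %d-%d -> %r"
          % (r_start, r_end, message[r_start:r_end],
             a_start, a_end, message[a_start:a_end]))

The reported spans select "qua", "iệt" and "oặc" instead of "chữ", which is exactly where the <b> tags end up in the highlight above.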

This is why the highlighting fails: the highlighter places the tags at the offsets produced by analysis. So the analyzer/tokenizer needs to take proper care of the offsets.

Thank you!