How to define a custom comma analyzer for a field that will store Persian comma-separated words?

I have defined one language per field inside a type. One of the fields may contain comma-separated words in different languages; for example, I have a field named field1, with field1.en for English and a sub-field for Persian. So how do I define a custom comma analyzer for Persian? We can define a custom comma analyzer as below in the normal case:

"analysis" => [
   "tokenizer" => [
      "comma" => [
         "type" => "pattern",
         "pattern" => ","
   "analyzer" => [
      "comma" => [
         "type" => "custom",
         "tokenizer" => "comma"

Suppose in the mapping I have a field that should store Persian comma-separated words. How can I define such an analyzer? Or will this comma analyzer support Persian comma-separated words?
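For reference, applying the analyzer is just a matter of pointing the Persian sub-field at it in the mapping. A sketch assuming a hypothetical sub-field name field1.fa:

```php
"mappings" => [
   "properties" => [
      "field1" => [
         "properties" => [
            "en" => ["type" => "text", "analyzer" => "english"],
            // hypothetical Persian sub-field using the custom comma analyzer
            "fa" => ["type" => "text", "analyzer" => "comma"]
         ]
      ]
   ]
]
```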

And one more question about Elasticsearch: how can the problem with the Pashto language be solved? I have not found any way in Elasticsearch to support analysis of Pashto text.
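For context, Elasticsearch ships no dedicated Pashto language analyzer. One workaround I have seen suggested is the official analysis-icu plugin, whose icu_tokenizer and icu_folding handle Arabic-script text in a script-aware way. A hedged sketch (the analyzer name pashto_icu is my assumption):

```php
// assumes the plugin is installed: bin/elasticsearch-plugin install analysis-icu
"analysis" => [
   "analyzer" => [
      "pashto_icu" => [   // hypothetical analyzer name
         "type" => "custom",
         "tokenizer" => "icu_tokenizer",
         "filter" => ["icu_folding"]   // Unicode-aware folding/normalization
      ]
   ]
]
```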