Token Chars Mapping to nGram Filter in Elasticsearch NEST

I'm trying to replicate the mappings below using NEST, and I'm facing an issue mapping the token_chars to the tokenizer.

{
   "settings": {
      "analysis": {
         "filter": {
            "nGram_filter": {
               "type": "nGram",
               "min_gram": 2,
               "max_gram": 20,
               "token_chars": [
                  "letter",
                  "digit",
                  "punctuation",
                  "symbol"
               ]
            }
         },
         "analyzer": {
            "nGram_analyzer": {
               "type": "custom",
               "tokenizer": "whitespace",
               "filter": [
                  "lowercase",
                  "asciifolding",
                  "nGram_filter"
               ]
            }
         }
      }
   }
}
I was able to replicate everything except the token_chars part. Can someone help me with that? Below is my code replicating the above mappings (everything except token_chars).

var nGramFilters1 = new List<string> { "lowercase", "asciifolding", "nGram_filter" };
// Declared to hold the token_chars values, but I can't find where to pass it:
var tChars = new List<string> { "letter", "digit", "punctuation", "symbol" };

var createIndexResponse = client.CreateIndex(defaultIndex, c => c
    .Settings(st => st
        .Analysis(an => an
            .Analyzers(anz => anz
                .Custom("nGram_analyzer", cc => cc
                    .Tokenizer("whitespace")
                    .Filters(nGramFilters1)))
            .TokenFilters(tf => tf
                .NGram("nGram_filter", ng => ng
                    .MinGram(2)
                    .MaxGram(20))))));
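For what it's worth, the closest I've gotten is moving the token_chars onto a custom nGram tokenizer instead of the filter, since the NEST nGram token filter descriptor doesn't seem to expose them. Below is a sketch of that workaround; it assumes NEST exposes a TokenChars method on the nGram tokenizer descriptor, and note that it swaps the whitespace tokenizer for the custom one, so it is not an exact replica of the mapping above:

var createIndexResponse = client.CreateIndex(defaultIndex, c => c
    .Settings(st => st
        .Analysis(an => an
            .Tokenizers(tz => tz
                // token_chars appears to be a tokenizer-level setting in NEST
                .NGram("nGram_tokenizer", td => td
                    .MinGram(2)
                    .MaxGram(20)
                    .TokenChars(
                        TokenChar.Letter,
                        TokenChar.Digit,
                        TokenChar.Punctuation,
                        TokenChar.Symbol)))
            .Analyzers(anz => anz
                .Custom("nGram_analyzer", cc => cc
                    .Tokenizer("nGram_tokenizer")
                    .Filters("lowercase", "asciifolding"))))));

Tokenizing with nGram directly produces different tokens than whitespace followed by an nGram filter, so I'd still like to know whether token_chars can be set on the filter itself through NEST.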
