I'm upgrading from ES 1.7.1 to ES 2.0.0. Before the upgrade I defined my analyzers in the elasticsearch.yml file of ES. The definitions are shown below (I also tried the flattened index.analysis.analyzer.adefault: dot notation instead of indentation; a short sample of that variant follows the block):
index: 
  analysis: 
    analyzer: 
      adefault: 
        char_filter: html_strip
        filter: "standard, lowercase"
        tokenizer: standard
        type: custom
      alang1: 
        char_filter: html_strip
        filter: "standard, lowercase, fstopde, fstemde"
        tokenizer: standard
        type: custom
      alang2: 
        char_filter: html_strip
        filter: "standard, lowercase, fstopar"
        tokenizer: standard
        type: custom
      alangen: 
        char_filter: html_strip
        filter: "standard, lowercase, fstopen, fstemen"
        tokenizer: standard
        type: custom
      alangt1: 
        type: german
      alangt2: 
        type: arabic
      alangten: 
        type: english
      angram: 
        filter: "standard, lowercase, fngram"
        tokenizer: standard
        type: custom
      aporterstem: 
        filter: "standard, lowercase, porterStem"
        tokenizer: standard
        type: custom
      asnowball1: 
        language: German2
        type: snowball
      asnowball2: 
        language: German2
        type: snowball
      asnowballen: 
        language: English
        type: snowball
      aurlemail: 
        filter: "lowercase, fngramurl"
        tokenizer: standard
        type: custom
    filter: 
      fngram: 
        max_gram: 10
        min_gram: 2
        type: nGram
      fngramurl: 
        max_gram: 20
        min_gram: 3
        type: nGram
      fstemde: 
        name: minimal_german
        type: stemmer
      fstemen: 
        name: minimal_english
        type: stemmer
      fstopar: 
        stopwords: _arabic_
        type: stop
      fstopde: 
        stopwords: _german_
        type: stop
      fstopen: 
        stopwords: _english_
        type: stop
    tokenizer: 
      tngram: 
        max_gram: 10
        min_gram: 2
        type: nGram
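The above is the indented form. The flattened variant I also tried looked roughly like this (only the adefault analyzer shown here; the other analyzers and filters followed the same pattern):

index.analysis.analyzer.adefault.type: custom
index.analysis.analyzer.adefault.tokenizer: standard
index.analysis.analyzer.adefault.char_filter: html_strip
index.analysis.analyzer.adefault.filter: standard, lowercase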
I read my mappings from a file in order to create my index. The file is defined as follows:
{
    "files": {
        "properties": {
            "startDate": {
                "type": "date", 
                "format": "yyyy-MM-dd HH:mm:ss:SSS||yyyy-MM-dd HH:mm:ss", 
                "index": "not_analyzed",
                "store": "yes"
            },
            "fileLang1": { 
                "type": "attachment", 
                "index": "yes", 
                "analyzer": "alang1"
            }
        }
    }
}
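For reference, I create the index roughly like this with the Java client (simplified sketch; the index name "myindex" and the file name "files-mapping.json" are just placeholders, and the JSON above is the contents of the mapping file):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.elasticsearch.client.Client;

// "client" is an already connected Client instance
String mappingJson = new String(
        Files.readAllBytes(Paths.get("files-mapping.json")), StandardCharsets.UTF_8);

client.admin().indices()
        .prepareCreate("myindex")           // placeholder index name
        .addMapping("files", mappingJson)   // the mapping JSON shown above
        .get();                             // this call fails with the exception below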
When I create my index I'm getting the following exception:
MapperParsingException[mapping [files]]; nested: MapperParsingException[Mapping definition for [fileLang1] has unsupported parameters:  [analyzer : alang1] [index : yes]];
Caused by: MapperParsingException[Mapping definition for [fileLang1] has unsupported parameters:  [analyzer : alang1] [index : yes]]
    at org.elasticsearch.index.mapper.DocumentMapperParser.checkNoRemainingFields(DocumentMapperParser.java:267)
    at org.elasticsearch.index.mapper.DocumentMapperParser.checkNoRemainingFields(DocumentMapperParser.java:261)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:317)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:228)
    at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:137)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:211)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:192)
    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:368)
    at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:242)
    at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:368)
Any ideas what I'm doing wrong? Thanks in advance.