Setting a date field as doc_values and not fielddata is overridden

I asked this question on SO as well, as I didn't know about this forum.

I'm new to Elasticsearch; I started working with Elasticsearch 1.7.3 as part of a Logstash-Elasticsearch-Kibana deployment.

I've defined a mapping template for my log messages; this is the interesting part:

{   
  "template" : "logstash-*",
  "settings" : { "index.refresh_interval" : "5s" },
  "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true, "omit_norms" : true},
      "dynamic_templates" : [ {
        "date_fields" : {
          "match" : "*",
          "match_mapping_type" : "date",
          "mapping" : { "type" : "date", "doc_values" : true }
        }
      }],
      "properties" : {
        "@version" : { "type" : "string", "index" : "not_analyzed" },
        "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
        "message" : { "type" : "string" }
      }
    },
    "my_log" : {
      "_all" : { "enabled" : true, "omit_norms" : true },
      "dynamic_templates" : [ {
        "date_fields" : {
          "match" : "*",
          "match_mapping_type" : "date",
          "mapping" : { "type" : "date", "doc_values" : true }
        }
      }],
      "properties" : {
        "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
        "file" : { "type" : "string" },
        "message" : { "type" : "string" }
        "geolocation" : { "type" : "string" },
      }
    }
  }
}
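
For completeness, this is roughly how the template gets registered (logstash_template is just a placeholder name; in my setup Logstash pushes it, but the effect should be the same). As far as I understand, a template only applies to indices created after it is registered, so older logstash-* indices keep whatever mapping they already had:

# register the template (hypothetical name, assumes ES on localhost:9200)
curl -XPUT 'localhost:9200/_template/logstash_template' -d @template.json
# verify it was stored
curl -XGET 'localhost:9200/_template/logstash_template?pretty'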

Although the @timestamp field is defined with "doc_values" : true, I still get a memory error showing that the field is being loaded as fielddata:
[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [633785548/604.4 mb]
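
To see how much fielddata @timestamp is actually holding per node, the node-stats API can break the cache down by field; a diagnostic sketch, assuming the default localhost:9200 endpoint:

# per-node fielddata usage, broken down for the @timestamp field
curl -XGET 'localhost:9200/_nodes/stats/indices/fielddata?fields=@timestamp&pretty'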

I don't understand why @timestamp is treated as fielddata when I defined all date fields to use doc_values.
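
One thing I can check is whether the live mapping of an existing index actually carries doc_values for @timestamp, since the template would only have affected indices created after it was registered; a sketch, where logstash-2015.11.05 stands in for a real daily index:

# logstash-2015.11.05 is a hypothetical index name; use any existing daily index
curl -XGET 'localhost:9200/logstash-2015.11.05/_mapping/field/@timestamp?pretty'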

I know I can increase the memory or add more nodes to the cluster, but from my point of view this is a design problem: this field should not be loaded into heap memory at all.
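
For reference, the 604.4 MB limit in the error looks consistent with the default fielddata circuit breaker of 60% on a ~1 GB heap. As a stopgap only, the breaker can be raised dynamically; a sketch, assuming the ES 1.x setting name indices.breaker.fielddata.limit:

# temporary workaround, not a fix: raise the fielddata breaker to 75% of heap
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : { "indices.breaker.fielddata.limit" : "75%" }
}'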