Long and float fields showing up as text fields in Kibana

Running Kibana version 5.5.2.
My current setup: Logstash takes the logs from Docker containers and runs grok filters on them before sending the logs to Elasticsearch. The specific fields that I need to show up as long and float are two timings from AWS calls to ECS and EC2, which a grok filter currently pulls out. Here is the custom pattern that extracts the ECS timings:

    ECS_DESCRIBE_CONTAINER_INSTANCES (AWS)(%{SPACE})(ecs)(%{SPACE})(%{POSINT})(%{SPACE})(?<ECS_DURATION>(%{NUMBER}))(s)(%{SPACE})(?<ECS_RETRIES>(%{NONNEGINT}))(%{SPACE})(retries)

I need ECS_DURATION to be a float and ECS_RETRIES to be a long, so in the Docker log handler I have the following:

    if [ECS_DURATION] {
      mutate {
        convert => ["ECS_DURATION", "float"]
      }
    }

    if [ECS_RETRIES] {
      mutate {
        convert => ["ECS_RETRIES", "integer"]
      }
    }
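To sanity-check the grok pattern outside Logstash, it can be approximated as a plain regex (grok patterns compile down to regexes). A minimal Python sketch against a hypothetical log line, showing the captures that the mutate/convert filters then turn into numbers:

```python
import re

# Rough regex equivalent of the grok pattern above:
# %{SPACE} -> \s*, %{POSINT} -> [1-9][0-9]*,
# %{NUMBER} -> optionally signed decimal, %{NONNEGINT} -> [0-9]+
PATTERN = re.compile(
    r"AWS\s*ecs\s*[1-9][0-9]*\s*"
    r"(?P<ECS_DURATION>[+-]?[0-9]+(?:\.[0-9]+)?)s\s*"
    r"(?P<ECS_RETRIES>[0-9]+)\s*retries"
)

# Hypothetical log line, for illustration only.
line = "AWS ecs 42 1.234s 3 retries"
m = PATTERN.search(line)
if m:
    duration = float(m.group("ECS_DURATION"))  # what convert => "float" yields
    retries = int(m.group("ECS_RETRIES"))      # what convert => "integer" yields
    print(duration, retries)  # → 1.234 3
```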

When I look at the fields in Kibana, they still show as text fields, but when I make the following request to Elasticsearch for the mappings, it shows those fields as long and float.

    GET /logstash-2020.12.18/_mapping
    {
      "logstash-2020.12.18": {
        "mappings": {
          "log": {
            "_all": {
              "enabled": true,
              "norms": false
            },
            "dynamic_templates": [
              {
                "message_field": {
                  "path_match": "message",
                  "match_mapping_type": "string",
                  "mapping": {
                    "norms": false,
                    "type": "text"
                  }
                }
              },
              {
                "string_fields": {
                  "match": "*",
                  "match_mapping_type": "string",
                  "mapping": {
                    "fields": {
                      "keyword": {
                        "ignore_above": 256,
                        "type": "keyword"
                      }
                    },
                    "norms": false,
                    "type": "text"
                  }
                }
              }
            ],
            "properties": {
              "@timestamp": {
                "type": "date",
                "include_in_all": false
              },
              "@version": {
                "type": "keyword",
                "include_in_all": false
              },
              "EC2_DURATION": {
                "type": "float"
              },
              "EC2_RETRIES": {
                "type": "long"
              },
              "ECS_DURATION": {
                "type": "float"
              },
              "ECS_RETRIES": {
                "type": "long"
              },

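As a programmatic double-check that the index itself (as opposed to Kibana) has the numeric types, the `_mapping` response can be walked with a few lines of Python. This sketch uses a trimmed, hard-coded copy of the response above rather than a live cluster:

```python
import json

# Trimmed version of the _mapping response shown above; the structure is
# index -> "mappings" -> doc type -> "properties" -> field -> "type".
mapping_response = json.loads("""
{
  "logstash-2020.12.18": {
    "mappings": {
      "log": {
        "properties": {
          "ECS_DURATION": {"type": "float"},
          "ECS_RETRIES": {"type": "long"}
        }
      }
    }
  }
}
""")

def field_types(response, index, doc_type):
    """Return a dict of field name -> mapped type for one doc type."""
    props = response[index]["mappings"][doc_type]["properties"]
    return {name: spec.get("type") for name, spec in props.items()}

types = field_types(mapping_response, "logstash-2020.12.18", "log")
print(types)  # → {'ECS_DURATION': 'float', 'ECS_RETRIES': 'long'}
```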
I even created a custom mapping template in Elasticsearch with the following call:

    PUT /_template/aws_durations?pretty
    {
      "template": "logstash*",
      "mappings": {
        "type1": {
          "_source": {
            "enabled": true
          },
          "properties": {
            "ECS_DURATION": {
              "type": "half_float"
            },
            "ECS_RETRIES": {
              "type": "byte"
            },
            "EC2_DURATION": {
              "type": "half_float"
            },
            "EC2_RETRIES": {
              "type": "byte"
            }
          }
        }
      }
    }
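A side note on the template: `byte` is a signed 8-bit integer (-128..127) and `half_float` is IEEE 754 half precision (roughly 3 significant decimal digits, max ~65504), so these narrow types only work if the values stay small. A quick Python sketch of those limits, using `struct`'s half-precision format:

```python
import struct

def fits_byte(n: int) -> bool:
    # Elasticsearch "byte" is a signed 8-bit integer.
    return -128 <= n <= 127

def half_float_roundtrip(x: float) -> float:
    # Pack as IEEE half precision and unpack again to see
    # how much precision survives storage as "half_float".
    return struct.unpack("<e", struct.pack("<e", x))[0]

print(fits_byte(3))      # retry counts are small → True
print(fits_byte(300))    # → False: would not fit a "byte" field
print(half_float_roundtrip(1.234))  # close to 1.234, but not exact
```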