Logstash mutate filter, rename - "Exception in filterworker", "exception"=>#<IndexError: string not matched>

OK, that makes sense. I have posted details about what I'm trying to accomplish here. A simplified version of that would be:

Take CSV files with this kind of structure:

file1:

id,age,gender,wave
1,49,M,1
2,72,F,0

file2:

id,time,event1
1,4/20/2095,V39
1,4/21/2095,T21
2,5/17/2094,V39

file3:

id,time,event2
1,4/22/2095,P90
2,5/18/2094,E2

and create an Elasticsearch index where "id" is the root/parent, with each "file#" nested under the given "id". There should be only one "id" object in the output JSON even though there are multiple files per "id" and often multiple rows per "id" within a single file. When I build this manually I use a mapping that declares the "file#" fields as nested and sets their properties to not_analyzed; a sketch of that mapping is included after the search output below. For completeness, this is the resulting index structure I want:

GET /forumlogst/subject/_search
{
   "took": 1,
   "timed_out": false,
   "_shards": {
      "total": 1,
      "successful": 1,
      "failed": 0
   },
   "hits": {
      "total": 2,
      "max_score": 1,
      "hits": [
         {
            "_index": "forumlogst",
            "_type": "subject",
            "_id": "1",
            "_score": 1,
            "_source": {
               "id": "1",
               "file1": [
                  {
                     "age": "49",
                     "gender": "M",
                     "wave": "1"
                  }
               ],
               "file2": [
                  {
                     "time": "04/20/2095",
                     "event1": "V39"
                  },
                  {
                     "time": "04/21/2095",
                     "event1": "T21"
                  }
               ],
               "file3": [
                  {
                     "time": "04/22/2095",
                     "event2": "P90"
                  }
               ]
            }
         },
         {
            "_index": "forumlogst",
            "_type": "subject",
            "_id": "2",
            "_score": 1,
            "_source": {
               "id": "2",
               "file1": [
                  {
                     "age": "72",
                     "gender": "F",
                     "wave": "0"
                  }
               ],
               "file2": [
                  {
                     "time": "05/17/2094",
                     "event1": "V39"
                  }
               ],
               "file3": [
                  {
                     "time": "04/22/2095",
                     "event2": "E2"
                  }
               ]
            }
         }
      ]
   }
}
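
For reference, the mapping I apply by hand is roughly the following. This is a sketch rather than my exact mapping; the index and type names match the example above, and I've assumed plain not_analyzed string fields throughout:

PUT /forumlogst
{
   "mappings": {
      "subject": {
         "properties": {
            "id": { "type": "string", "index": "not_analyzed" },
            "file1": {
               "type": "nested",
               "properties": {
                  "age":    { "type": "string", "index": "not_analyzed" },
                  "gender": { "type": "string", "index": "not_analyzed" },
                  "wave":   { "type": "string", "index": "not_analyzed" }
               }
            },
            "file2": {
               "type": "nested",
               "properties": {
                  "time":   { "type": "string", "index": "not_analyzed" },
                  "event1": { "type": "string", "index": "not_analyzed" }
               }
            },
            "file3": {
               "type": "nested",
               "properties": {
                  "time":   { "type": "string", "index": "not_analyzed" },
                  "event2": { "type": "string", "index": "not_analyzed" }
               }
            }
         }
      }
   }
}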

I'm hoping Logstash can help automate the creation of this kind of index, as I will have thousands of "id" values and hundreds of event types. Each "id" may also have a couple hundred events of a particular type (e.g., "id" 3 may have 250 event2 rows in its CSV). I realize this will create a very large JSON file/index. I'm choosing the nested approach over a parent/child structure primarily because we want Kibana support for aggregating across nested fields, which is planned for v4.4. I welcome any advice, though. A sketch of what I've been trying in Logstash, and where the exception from the title shows up, is below.
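
For what it's worth, this is roughly the kind of config I've been experimenting with for one of the event files. It's a sketch rather than my exact pipeline; the file path, host, and upsert settings here are assumptions. The mutate rename into the nested "[file2]" field is where I hit the "IndexError: string not matched" exception mentioned in the title:

# Sketch only: paths, hosts, and upsert settings are placeholders
input {
  file {
    path => "/data/file2.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    columns => ["id", "time", "event1"]
    separator => ","
  }
  mutate {
    # Renaming flat CSV columns into nested "file2" fields;
    # this is the rename that raises the exception for me
    rename => {
      "time"   => "[file2][time]"
      "event1" => "[file2][event1]"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "forumlogst"
    document_type => "subject"
    document_id => "%{id}"
    # Update the existing "id" document if present, create it otherwise
    action => "update"
    doc_as_upsert => true
  }
}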