Cloudtrail codec - failed to parse [requestParameters.buckeyPolicy.Statement.Principal]

Hello,
I have a problem with parsing in some situations, where I get this error:
"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [requestParameters.buckeyPolicy.Statement.Principal]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"unknown property [AWS]"

The example error above is from the Logstash log in debug mode.

Regards,
Kris

What version are you on? What does your config look like?

Oops, sorry, I forgot about that.
logstash 2.4.0
logstash-codec-cloudtrail 2.0.4 (latest version)

Config:

input {
  s3 {
    bucket => "XXX"
    access_key_id => "XXX"
    secret_access_key => "XXX"
    prefix => "AWSLogs/"
    codec => cloudtrail
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "cloudtrail-%{+YYYY.MM.dd}"
    ssl => 'false'
    template => '/etc/logstash/elasticsearch-template.json'
    template_name => 'custom_not_analyzed_cloudtrail'
    template_overwrite => true
  }
}

And elasticsearch-template.json contains:
{
  "template" : "*",
  "settings" : {
    "index.refresh_interval" : "5s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true, "omit_norms" : true},
      "dynamic_templates" : [ {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "not_analyzed", "omit_norms" : true
          }
        }
      }, {
        "float_fields" : {
          "match" : "*",
          "match_mapping_type" : "float",
          "mapping" : { "type" : "float", "doc_values" : true }
        }
      }, {
        "double_fields" : {
          "match" : "*",
          "match_mapping_type" : "double",
          "mapping" : { "type" : "double", "doc_values" : true }
        }
      }, {
        "byte_fields" : {
          "match" : "*",
          "match_mapping_type" : "byte",
          "mapping" : { "type" : "byte", "doc_values" : true }
        }
      }, {
        "short_fields" : {
          "match" : "*",
          "match_mapping_type" : "short",
          "mapping" : { "type" : "short", "doc_values" : true }
        }
      }, {
        "integer_fields" : {
          "match" : "*",
          "match_mapping_type" : "integer",
          "mapping" : { "type" : "integer", "doc_values" : true }
        }
      }, {
        "long_fields" : {
          "match" : "*",
          "match_mapping_type" : "long",
          "mapping" : { "type" : "long", "doc_values" : true }
        }
      }, {
        "date_fields" : {
          "match" : "*",
          "match_mapping_type" : "date",
          "mapping" : { "type" : "date", "doc_values" : true }
        }
      }, {
        "geo_point_fields" : {
          "match" : "*",
          "match_mapping_type" : "geo_point",
          "mapping" : { "type" : "geo_point", "doc_values" : true }
        }
      } ],
      "properties" : {
        "@timestamp": { "type": "date", "doc_values" : true },
        "@version": { "type": "string", "index": "not_analyzed", "doc_values" : true },
        "apiVersion": { "type": "string", "index": "not_analyzed", "doc_values" : true },
        "geoip" : {
          "type" : "object",
          "dynamic": true,
          "properties" : {
            "ip": { "type": "ip", "doc_values" : true },
            "location" : { "type" : "geo_point", "doc_values" : true },
            "latitude" : { "type" : "float", "doc_values" : true },
            "longitude" : { "type" : "float", "doc_values" : true }
          }
        }
      }
    }
  }
}

The problem is that sometimes requestParameters.buckeyPolicy.Statement.Principal is a string (e.g. "*") and sometimes it is an object (as in the error above).
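For illustration (the account ID below is made up by me), the two shapes look roughly like this:

"Principal": "*"

"Principal": { "AWS": "arn:aws:iam::123456789012:root" }

I think that once Elasticsearch has mapped the field from one form, documents with the other form get rejected, which is where the "unknown property [AWS]" message comes from.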
I've been trying to do something about this, even just removing the Principal field, but without effect. My filter config (the rest is the same as above):

if [requestParameters] and [requestParameters][buckeyPolicy] and [requestParameters][buckeyPolicy][Statement] and [requestParameters][buckeyPolicy][Statement][Principal] {
mutate { remove_field => [ "[requestParameters][buckeyPolicy][Statement][Principal]" ] }
}
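One thing I have in mind is a ruby filter along these lines to force Principal to always be a plain string (just a rough, untested sketch, assuming Statement comes through as an array of hashes; the field names are copied from my events):

filter {
  ruby {
    init => "require 'json'"
    code => "
      policy = event['requestParameters']['buckeyPolicy'] rescue nil
      if policy && policy['Statement']
        # Statement is normally an array of statement hashes; wrap a single hash too
        statements = policy['Statement'].is_a?(Array) ? policy['Statement'] : [policy['Statement']]
        statements.each do |s|
          # serialize the object form so the field always arrives as a string
          s['Principal'] = s['Principal'].to_json if s['Principal'].is_a?(Hash)
        end
      end
    "
  }
}

(Logstash 2.4 still uses the old event['...'] API in the ruby filter, so this should at least be the right style for that version.)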

I also wanted to check for the case where Principal has the value "*", but that doesn't work either:

if [requestParameters] and [requestParameters][buckeyPolicy] and [requestParameters][buckeyPolicy][Statement] and [requestParameters][buckeyPolicy][Statement][Principal] and [requestParameters][buckeyPolicy][Statement][Principal] != "*"
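In context it sits inside the filter block like this (same mutate as above):

filter {
  if [requestParameters] and [requestParameters][buckeyPolicy] and [requestParameters][buckeyPolicy][Statement] and [requestParameters][buckeyPolicy][Statement][Principal] and [requestParameters][buckeyPolicy][Statement][Principal] != "*" {
    mutate { remove_field => [ "[requestParameters][buckeyPolicy][Statement][Principal]" ] }
  }
}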

Is this the proper way to do it?