How to parse sub-records of CloudTrail logs

Hello there

I'm new to the ELK stack and need help with my configuration. The Logstash config below parses CloudTrail logs and splits the 'Records' field, but some sub-records, such as Records.responseElements.instancesSet.items and a few others, still appear as raw JSON instead of being broken into individual records. Could you please help me configure a filter that splits the sub-records too?

Below is my current config.

Appreciate any help!

===========================
input {
  s3 {
    access_key_id     => "AKID"
    secret_access_key => "SAK"
    bucket            => "xxxxxxxxxxxxxxx"
    type              => "cloudtrail"
    delete            => false
    tags              => ["ctl"]
    prefix            => "AWSLogs/xxxxxxxxxxxxx/CloudTrail/"
    codec             => "json"
  }
}

filter {
  if [type] == "cloudtrail" {
    # The s3 input's json codec has usually parsed the file already,
    # so this filter is likely a no-op; kept in case raw messages arrive.
    json {
      source => "message"
    }

    # CloudTrail files hold an array of events under "Records";
    # split emits one Logstash event per array element.
    split {
      field => "Records"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}
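For the nested array itself, a minimal sketch of what a second split might look like, assuming the field path from the question (Records.responseElements.instancesSet.items). Since only some events, such as RunInstances, carry that array, the extra split is guarded by a conditional:

filter {
  if [type] == "cloudtrail" {
    split {
      field => "Records"
    }
    # Assumption: only some events (e.g. RunInstances) carry this nested
    # array, so check that it exists before trying to split it.
    if [Records][responseElements][instancesSet][items] {
      split {
        field => "[Records][responseElements][instancesSet][items]"
      }
    }
  }
}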

Without seeing an example document it's hard to know, but I suspect that everything is fine from a Logstash point of view. Kibana does a poor job of showing arrays of objects.
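To illustrate that point, after the split on Records the stdout rubydebug output for a RunInstances event would look roughly like this (field values here are made up), with items still an array of objects, which Kibana renders as raw JSON rather than as separate columns:

{
    "type" => "cloudtrail",
    "Records" => {
        "eventName" => "RunInstances",
        "responseElements" => {
            "instancesSet" => {
                "items" => [
                    { "instanceId" => "i-11111111" },   # hypothetical id
                    { "instanceId" => "i-22222222" }    # hypothetical id
                ]
            }
        }
    }
}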

Hi Magnus, thank you for the response.

The current config does split the 'Records' field, but when an action produces more than one record, all of the data is dumped into a single Records event, some records are overwritten, and data is lost.

You can verify this by launching an instance in AWS and watching the Logstash logs: every so often they report something along the lines of there being too much data in a single event.

How can we overcome this?

Is there a specific filter configuration that can resolve this?
Appreciate any help!
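If the loss turns out to be duplicate- or overwrite-related, one hedged option is to derive the Elasticsearch document id from a field that is unique per record. This sketch assumes each CloudTrail record carries a unique eventID after the split:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Assumption: [Records][eventID] is unique per record, so re-processing
    # the same S3 file updates existing documents instead of creating
    # duplicates or overwriting unrelated ones.
    document_id => "%{[Records][eventID]}"
  }
}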

Hi Magnus and team,

Would you please be able to comment on a solution for the above-mentioned issue?

Thanks!

Any help on this, guys?

Thanks
