I am quite new to Elastic. I have log data from AWS CloudWatch that I have shipped into my Elastic cluster using Filebeat, but I can't figure out how to map my logs onto ECS fields. I have read a lot of documentation, but none of it shows how to actually do this. I am using an ECS logger, yet all of my log data ends up in the message field and I can't aggregate on it. My main question is: how can I log my messages in a format that lets me aggregate and visualize them in Kibana?
I watched the webinar above, but I didn't understand how the presenter decides to map his data to particular ECS fields, and, beyond that, how we can make sure that our logs actually get mapped to those fields.
For example, I am trying to create a mapping like:
event.name = postprocessor
event.source = cirus
event.type = success
and so on.
I couldn't figure out how to work with the ECS logger so that I can visualize this data in Kibana. I would really appreciate the help.
I read all of those documents, but I couldn't find the part where they explain how to adhere to those formats.
Here is a log line from the ECS logger; in Kibana it shows up only under the message field:
2021-06-07T13:18:47.110Z 2021-06-07 13:18.47 [info ] {'labels': {'scan.category': 'APP_SECURTY_CATEGORY', 'scan.type': 'VULNCHECK', 'scan.outcome': 'SUCCESSFULL', 'scan.reason': 'NO_ERROR'}, 'message': 'Mertay Message', 'event': {}}
Could you help me identify what I am doing wrong? I am sure the problem I am having is very fundamental; I just don't have enough experience and couldn't find any guidance on how to create custom fields and log them properly. I have read the article about how to create custom fields and am using its schema, but that doesn't help either. I really appreciate the help! Thank you!
From your screenshot it looks like your logged data are serialized into aws.cloudwatch.message as JSON on their way through CloudWatch. To deserialize the data, you can use the decode_json_fields processor in your aws module configuration so that it contains something like the following:
- module: aws
  # ... other aws filesets ...
  cloudwatch:
    enabled: true
    # ... other cloudwatch config ...
    input:
      processors:
        - decode_json_fields:
            fields: ["aws.cloudwatch.message"] # the field to parse
            target: ""                         # write to the root of the document
            overwrite_keys: true               # might be needed to override @timestamp and similar fields
I couldn't test this exact setup, but maybe it helps you figure it out.
And another thought: since you're adding custom fields (such as scan), I'd recommend enhancing the index template to include mappings for them. Otherwise, dynamic mapping might cause the scan.* fields to be interpreted in a way that hinders querying. See the sketch below.
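A minimal sketch of what that could look like in filebeat.yml, assuming you let Filebeat manage the index template and that the decoded fields end up under labels.scan.* as in your log sample (adjust the field names and types to whatever your documents actually contain):

# filebeat.yml -- append explicit keyword mappings for the custom
# fields to the index template that Filebeat loads into Elasticsearch
setup.template.append_fields:
  - name: labels.scan.category
    type: keyword
  - name: labels.scan.type
    type: keyword
  - name: labels.scan.outcome
    type: keyword
  - name: labels.scan.reason
    type: keyword

Note that template changes only take effect when the template is (re)loaded, so you may need to re-run the Filebeat setup and write to a fresh index. With keyword mappings in place you can run terms aggregations on fields like labels.scan.outcome in Kibana instead of searching the unparsed message text.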