AWS CloudWatch metricset is sampling data -- can it import data exactly?

Hello,

I would like to ship all my AWS CloudWatch metrics from one of my AWS accounts to Elasticsearch. I have installed Metricbeat 7.4.2 and configured the aws module’s “cloudwatch” metricset like so:

- module: aws
  period: 60s
  metricsets:
    - cloudwatch
  metrics:
    - namespace: AWS/DynamoDB
    - namespace: AWS/Events
    - namespace: AWS/Lambda
    - namespace: AWS/Usage
  regions:
    - eu-west-2

I can see documents arriving in Elasticsearch, but it looks as if Metricbeat is sampling the metrics rather than importing the values verbatim.

For example, one CloudWatch time series contains a datapoint every 120 seconds. Here are some of its datapoints via a CloudWatch GetMetricData call:

"MetricDataResults": [
    {
        "Id": "id_1",
        "Label": "Duration",
        "Timestamps": [
            "2019-11-28T13:18:00Z",
            "2019-11-28T13:16:00Z",
            "2019-11-28T13:14:00Z"
        ],
        "Values": [
            18282.56,
            16626.54,
            14270.99
        ],
        "StatusCode": "Complete"
    }
]
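For reference, this is roughly how I made that call with boto3 (a sketch; the AWS/Lambda namespace, the "Duration" metric, and the Maximum statistic are just my assumptions for this example):

```python
# Rough reconstruction of the GetMetricData request that produced the
# output above. The AWS/Lambda namespace, the "Duration" metric, and the
# Maximum statistic are assumptions for this example.
query = {
    "Id": "id_1",
    "MetricStat": {
        "Metric": {
            "Namespace": "AWS/Lambda",
            "MetricName": "Duration",
        },
        "Period": 120,      # the series has a datapoint every 120s
        "Stat": "Maximum",  # surfaces in Elasticsearch as aws.metrics.Duration.max
    },
}

def fetch(cloudwatch, start, end):
    """Run the query; `cloudwatch` is a boto3 client, e.g.
    boto3.client("cloudwatch", region_name="eu-west-2")."""
    response = cloudwatch.get_metric_data(
        MetricDataQueries=[query],
        StartTime=start,
        EndTime=end,
    )
    return response["MetricDataResults"]
```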

But I see the following in Elasticsearch:

Time (i.e. @timestamp)         aws.metrics.Duration.max                
Nov 28, 2019 @ 13:20:21.748    18,282.56
Nov 28, 2019 @ 13:19:21.748    18,282.56
Nov 28, 2019 @ 13:18:21.748    16,626.54
Nov 28, 2019 @ 13:17:21.748    16,626.54
Nov 28, 2019 @ 13:16:21.748    14,270.99
Nov 28, 2019 @ 13:15:21.748    14,270.99

Notice the times do not match up exactly, and there are duplicate events (one duplicate of each datapoint, because the metricset period of 60s is half the datapoint period of 120s).

Is there any way to configure Metricbeat (or another Elastic product, like another Beat or Logstash) to import the CloudWatch metrics exactly?

Thanks for reading,
Paul

Hello @cmr.paul.wilkinson, thanks for posting your question here! I think the main issue is the period. Based on the MetricDataResults you showed, these metrics are reported to CloudWatch every 2 minutes, but the cloudwatch metricset's period is set to 1 minute, which causes the duplicate events.
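To illustrate what happens: each 60-second collection picks up the most recent 120-second datapoint, so every datapoint is collected twice. A toy Python sketch of the effect (using the values from your GetMetricData output):

```python
# Toy model of the mismatch: CloudWatch has a datapoint every 120s, but
# Metricbeat polls every 60s, and each poll picks up the latest datapoint.
datapoints = {0: 14270.99, 120: 16626.54, 240: 18282.56}  # offset (s) -> value

collected = []
for poll_time in range(0, 301, 60):  # polls at t = 0, 60, ..., 300
    # each poll fetches the latest datapoint at or before the poll time
    latest = max(ts for ts in datapoints if ts <= poll_time)
    collected.append((poll_time, datapoints[latest]))

# Every value is collected twice: once per 60s poll in its 120s window.
```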

I see that you are trying to pull data from several different namespaces. Do you want to collect them all with the same period? If not, you can set different periods for different namespaces. For example:

- module: aws
  period: 60s
  metricsets:
    - cloudwatch
  metrics:
    - namespace: AWS/DynamoDB
  regions:
    - eu-west-2
- module: aws
  period: 120s
  metricsets:
    - cloudwatch
  metrics:
    - namespace: AWS/Events
    - namespace: AWS/Lambda
  regions:
    - eu-west-2
- module: aws
  period: 5m
  metricsets:
    - cloudwatch
  metrics:
    - namespace: AWS/Usage
  regions:
    - eu-west-2

You can check the CloudWatch metrics in the AWS console (and in the documentation) to see what each namespace's collection period should be.

For the timestamp, unfortunately it will not be an exact match right now. We use the time we collect the event as @timestamp instead of the CloudWatch timestamp. That would be a great enhancement, though. I will create a ticket for it! Thanks!

Thank you for the quick response, @Kaiyan_Sheng !

I see that you are trying to pull data from several different namespaces. Do you want to collect them all with the same period? If not, you can set different periods for different namespaces. For example:

Yes, you are right; my example was quite contrived, and in reality it would be better to adjust each period to match the corresponding service's reporting period, as you said.

The problem I have in reality is that the AWS/Lambda datapoints are aperiodic: sometimes there is a datapoint every 60 seconds, but sometimes there are gaps when no Lambda functions have been invoked in the last minute. It would be nice to see the data in Elasticsearch exactly as it appears in CloudWatch.
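In the meantime I am considering a small script to copy the datapoints verbatim, original timestamps included, from GetMetricData into Elasticsearch. A rough sketch (the index name and document layout here are made up for illustration):

```python
# Sketch of a workaround: turn GetMetricData results into Elasticsearch
# bulk actions that keep CloudWatch's own timestamps. The index name
# "cloudwatch-exact" and the document layout are invented for this example.
def to_bulk_actions(metric_data_results, index="cloudwatch-exact"):
    for result in metric_data_results:
        for ts, value in zip(result["Timestamps"], result["Values"]):
            yield {
                "_index": index,
                "_source": {
                    "@timestamp": ts,         # CloudWatch's timestamp, verbatim
                    result["Label"]: value,   # e.g. "Duration": 18282.56
                },
            }

# Usage (assumes boto3 and elasticsearch-py are available):
#   from elasticsearch import Elasticsearch, helpers
#   helpers.bulk(Elasticsearch(), to_bulk_actions(results))
```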

For the timestamp, unfortunately it will not be an exact match right now. We use the time we collect the event as @timestamp instead of the CloudWatch timestamp. That would be a great enhancement, though. I will create a ticket for it! Thanks!

If you are able to, please could you link the ticket when you create it? Thank you!