Unexpectedly High Hourly Usage of CW:GMD-Metrics with AWS Module

Hello,

I'm trying to make sense of the CW:GMD-Metrics usage in my usage report. It currently indicates around 10k metrics every hour, but I have no idea where that number is coming from. Looking farther back in the report, there are points where it was using 21k metrics.

These numbers look like what I would expect if it were pulling every metric in the namespaces I'm working with.

Service          | Operation     | UsageType       | Resource | StartTime      | EndTime        | UsageValue
AmazonCloudWatch | GetMetricData | CW:GMD-Metrics  |          | 3/1/2020 10:00 | 3/1/2020 11:00 | 10560
AmazonCloudWatch | GetMetricData | CW:GMD-Requests |          | 3/1/2020 10:00 | 3/1/2020 11:00 | 240

For context, data is being pulled from 13 APIs and 6 DynamoDB tables. If I understand how GMD is charged correctly, this should only come out to about 2280 metrics/hr: (13 APIs + 6 tables) × 2 metric names × 60 collections per hour. I'm using at least 5x that.
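Here is the rough math behind that 2280 figure as a quick sanity-check script. These are my own assumptions about how each metric name and statistic gets counted, not anything confirmed from the module's internals:

```python
# Back-of-envelope estimate of hourly CW:GMD-Metrics usage for the config below.
# Assumption: each requested metric name counts once per 60s collection; the
# second print shows the figure if each statistic were billed as its own
# metric-data query instead.
apis = 13
tables = 6
metric_names_per_resource = 2      # Latency/IntegrationLatency, ConsumedRead/WriteCapacityUnits
statistics = 2                     # Average, Sum
collections_per_hour = 3600 // 60  # period: 60s

per_collection = (apis + tables) * metric_names_per_resource
print(per_collection * collections_per_hour)               # 2280
print(per_collection * statistics * collections_per_hour)  # 4560 if each statistic counts separately
```

Even if each statistic counts as its own metric-data query, that only gets me to 4560, still well short of the 10560 in the report.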

Config:

metricbeat.modules:
  - module: aws
    period: 60s
    access_key_id: '${AWS_ACCESS_KEY_ID_DATAPRE}'
    secret_access_key: '${AWS_SECRET_ACCESS_KEY_DATAPRE}'
    tags: ["pre"]
    metricsets:
      - cloudwatch
    metrics:
      - namespace: AWS/ApiGateway
        name: ["Latency", "IntegrationLatency"]
        statistic: ["Average", "Sum"]
    region:
      - us-east-1
  - module: aws
    period: 60s
    access_key_id: '${AWS_ACCESS_KEY_ID_DATAPRE}'
    secret_access_key: '${AWS_SECRET_ACCESS_KEY_DATAPRE}'
    metricsets:
      - cloudwatch
    metrics:
      - namespace: AWS/DynamoDB
        name: ["ConsumedReadCapacityUnits", "ConsumedWriteCapacityUnits"]
        tags.resource_type_filter: dynamodb:table
        statistic: ["Average", "Sum"]
        tags:
          - key: "Tenant"
            value: "tenant1"

Update:

So I modified my config to bring in more data, and my usage actually seems to have gone down?

- module: aws
  period: 60s
  access_key_id: '${AWS_ACCESS_KEY_ID_DATAPRE}'
  secret_access_key: '${AWS_SECRET_ACCESS_KEY_DATAPRE}'
  tags: ["pre"]
  metricsets:
    - cloudwatch
  metrics:
    - namespace: AWS/ApiGateway
    - namespace: AWS/DynamoDB
  region:
    - us-east-1
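One way to check whether this simplified config is now requesting every metric in those namespaces is to count what CloudWatch actually exposes there. A minimal boto3 sketch (assumes credentials come from the environment; multiply the counts by 60 collections per hour for a rough hourly GMD-Metrics ceiling):

```python
import boto3

# Count how many metrics CloudWatch exposes in each namespace in us-east-1.
# With no name/statistic filters, the cloudwatch metricset would be requesting
# all of them every period.
cw = boto3.client("cloudwatch", region_name="us-east-1")
paginator = cw.get_paginator("list_metrics")

for namespace in ("AWS/ApiGateway", "AWS/DynamoDB"):
    count = sum(len(page["Metrics"]) for page in paginator.paginate(Namespace=namespace))
    print(f"{namespace}: {count} metrics -> ~{count * 60} GMD-Metrics/hour at a 60s period")
```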

The 10,000 metrics/hr figure is from the config in my original post. The 21k was from before I took SQS out for testing reasons.

Hmm, this doesn't make sense to me either. I will look into the cloudwatch metricset to see why two different configs make such a big difference in GMD-Metrics. Thanks for all the info, very helpful!

I do see you are using region: us-east-1 in the config. Are you planning to collect metrics only from the us-east-1 region? If so, the config parameter is regions.

That would explain why it was still running list-metrics on other regions. Thanks for the tip. Let me know if you turn up anything on the cloudwatch metricset.

I'm going to run a get-metric-data cronjob for comparison against the API Gateway and DynamoDB metrics I was pulling before. I'll let you know how it compares metrics-wise.
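Something along these lines is what I have in mind for the cronjob (a minimal boto3 sketch pulling one statistic for one API; "my-api" is a placeholder ApiName dimension value):

```python
import boto3
from datetime import datetime, timedelta, timezone

# Pull the same Latency statistic Metricbeat was collecting, for one API,
# so the billed GMD-Metrics can be compared against the module's usage.
cw = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)
start = end - timedelta(minutes=5)

resp = cw.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "latency_avg",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/ApiGateway",
                    "MetricName": "Latency",
                    "Dimensions": [{"Name": "ApiName", "Value": "my-api"}],  # placeholder
                },
                "Period": 60,
                "Stat": "Average",
            },
        },
    ],
    StartTime=start,
    EndTime=end,
)

for result in resp["MetricDataResults"]:
    print(result["Id"], result["Values"])
```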
