Metricbeat AWS Module

Hi everybody,

I'm trying to test Metricbeat with the AWS module, but my CloudWatch metrics aren't being sent to the ELK stack. Here is my AWS module config:

metricbeat.modules:
- module: aws
  period: 12h
  #credential_profile_name: default
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - billing
    #- cloudwatch
    #- ebs
    #- ec2
    #- rds
    #- usage

- module: aws
  period: 1m
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - elb
    - usage
- module: aws
  period: 5m
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - cloudwatch
#  metrics:
#    - namespace: AWS/EC2
#      #name: ["CPUUtilization", "DiskWriteOps"]
#      tags.resource_type_filter: ec2:instance
#      #dimensions:
#      #  - name: InstanceId
#      #    value: i-0686946e22cf9494a
#      #statistic: ["Average", "Maximum"]
- module: aws
  period: 5m
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - ebs
    - ec2
    - sns
    - sqs
    - rds
- module: aws
  period: 24h
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - s3_daily_storage
    - s3_request

And my user permission policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor4",
            "Effect": "Allow",
            "Action": [
                "tag:GetResources",
                "ec2:DescribeInstances",
                "cloudwatch:GetMetricData",
                "ec2:DescribeRegions",
                "rds:DescribeDBInstances",
                "iam:ListAccountAliases",
                "sns:ListTopics",
                "sts:GetCallerIdentity",
                "cloudwatch:ListMetrics",
                "s3:GetObject",
                "s3:ListBucket",
                "sqs:*",
                "s3:GetBucketLocation"
            ],
            "Resource": "*"
        }
    ]
}
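
As a sanity check independent of Metricbeat, the same key pair can be exercised with the AWS CLI; a minimal sketch, assuming the CLI is installed and configured with these credentials (the region and namespace are only examples, adjust them to your account):

# Confirm the credentials resolve to the expected IAM user and account
aws sts get-caller-identity

# Confirm CloudWatch read access, which most aws metricsets rely on
aws cloudwatch list-metrics --namespace AWS/EC2 --region sa-east-1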

When I try to access the Kibana Metricbeat dashboard, I get the following error:
Could not locate that index-pattern-field (id: aws.usage.metrics.ResourceCount.sum)

Checking the index (metricbeat-*), that field (aws.usage.metrics.ResourceCount.sum) doesn't exist, and neither does any other field from the AWS CloudWatch metrics.

Checking the logs (/var/log/messages), I only see one error from before I set the user permissions, and nothing else from the AWS module. Can anyone help me?

Thanks

Could you please share the debug logs of Metricbeat (./metricbeat -e -d "*")? Also, have you run ./metricbeat setup?
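
For reference, a minimal sketch of both commands when Metricbeat is installed from a package and available on the PATH (paths and flags are the standard ones; adjust if your layout differs):

# Load the index template, index pattern and dashboards into Elasticsearch/Kibana
metricbeat setup -e

# Run in the foreground with every debug selector enabled
metricbeat -e -d "*"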

2020-04-22T17:55:39.527-0300    INFO    instance/beat.go:622    Home path: [/usr/share/metricbeat] Config path: [/etc/metricbeat] Data path: [/var/lib/metricbeat] Logs path: [/var/log/metricbeat]
2020-04-22T17:55:39.528-0300    DEBUG   [beat]  instance/beat.go:674    Beat metadata path: /var/lib/metricbeat/meta.json
2020-04-22T17:55:39.528-0300    INFO    instance/beat.go:630    Beat ID: 51bb2118-269e-400c-bdf1-46bcd92551dd
2020-04-22T17:55:39.531-0300    DEBUG   [docker]        docker/client.go:48     Docker client will negotiate the API version on the first request.
2020-04-22T17:55:39.531-0300    DEBUG   [filters]       add_cloud_metadata/providers.go:126     add_cloud_metadata: starting to fetch metadata, timeout=3s
2020-04-22T17:55:39.532-0300    DEBUG   [add_docker_metadata]   add_docker_metadata/add_docker_metadata.go:88   add_docker_metadata: docker environment not detected: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
2020-04-22T17:55:39.535-0300    DEBUG   [kubernetes]    add_kubernetes_metadata/kubernetes.go:109       add_kubernetes_metadata: could not create kubernetes client using in_cluster config: unable to build kube config due to error: invalid configuration: no configuration has been provided
2020-04-22T17:55:39.537-0300    DEBUG   [filters]       add_cloud_metadata/providers.go:162     add_cloud_metadata: received disposition for gcp after 4.762217ms. result=[provider:gcp, error=failed with http status code 404, metadata={}]
2020-04-22T17:55:39.538-0300    DEBUG   [filters]       add_cloud_metadata/providers.go:162     add_cloud_metadata: received disposition for aws after 5.910071ms. result=[provider:aws, error=<nil>, metadata={"account":{"id":"957749511156"},"availability_zone":"sa-east-1a","image":{"id":"ami-0b5eaa8cab56179ed"},"instance":{"id":"i-0da8d6baf2a85f582"},"machine":{"type":"t2.medium"},"provider":"aws","region":"sa-east-1"}]
2020-04-22T17:55:39.538-0300    DEBUG   [filters]       add_cloud_metadata/providers.go:129     add_cloud_metadata: fetchMetadata ran for 6.090741ms
2020-04-22T17:55:39.538-0300    INFO    add_cloud_metadata/add_cloud_metadata.go:93     add_cloud_metadata: hosting provider type detected as aws, metadata={"account":{"id":"HIDDEN"},"availability_zone":"sa-east-1a","image":{"id":"HIDDEN"},"instance":{"id":"HIDDEN"},"machine":{"type":"t2.medium"},"provider":"aws","region":"sa-east-1"}
2020-04-22T17:55:39.538-0300    DEBUG   [processors]    processors/processor.go:101     Generated new processors: add_host_metadata=[netinfo.enabled=[false], cache.ttl=[5m0s]], add_cloud_metadata={"account":{"id":"HIDDEN"},"availability_zone":"sa-east-1a","image":{"id":"HIDDEN"},"instance":{"id":"HIDDEN"},"machine":{"type":"t2.medium"},"provider":"aws","region":"sa-east-1"}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]], add_kubernetes_metadata, add_fields={"fields":{"tenant":"it2s"}}

And nothing else.

And yes, I ran metricbeat setup

Looking at the aws config, you are not actually using the cloudwatch metricset (its metrics section is commented out), so you can comment out the whole cloudwatch part there:

- module: aws
  period: 5m
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - cloudwatch
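
If the cloudwatch metricset is kept later on, it generally needs a metrics section telling it which namespaces and metric names to pull; a minimal sketch based on the lines commented out in the original config (namespace, names and statistics are just examples):

- module: aws
  period: 5m
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - cloudwatch
  metrics:
    - namespace: AWS/EC2
      name: ["CPUUtilization", "DiskWriteOps"]
      tags.resource_type_filter: ec2:instance
      statistic: ["Average", "Maximum"]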

It seems to me the metrics are probably being sent to Elasticsearch already, but something is not right with the dashboard and the index. Do you see aws.usage.metrics.ResourceCount.sum in Kibana Discover? Also, what versions of Metricbeat, Elasticsearch and Kibana are you running? I will try to reproduce this problem locally :slightly_smiling_face: Thank you!!

I reduced my aws.yaml to the following:

metricbeat.modules:
- module: aws
  period: 12h
  #credential_profile_name: default
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - billing
    #- cloudwatch
    #- ebs
    #- ec2
    #- rds
    #- usage

- module: aws
  period: 1m
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    #- elb
    - usage

Looking for aws.usage.metrics.ResourceCount.sum in Kibana, I get the following message:

No results match your search criteria

Checking the metricbeat-* index, there is no aws.* field at all. That's why I guess one of the following (a quick check against Elasticsearch is sketched after this list):

  • Metricbeat can't retrieve the AWS info
  • Metricbeat can't send the info to the ELK stack
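
A quick way to tell these two apart is to ask Elasticsearch directly whether any aws documents exist at all; a minimal sketch, assuming Elasticsearch is reachable on localhost:9200 without authentication:

curl -s 'http://localhost:9200/metricbeat-*/_count?q=event.module:aws&pretty'

If the count is 0, nothing from the aws module is reaching Elasticsearch; if it is non-zero, the problem is on the index-pattern/dashboard side.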

I'm using the same IAM user with Filebeat and it works fine, pulling the VPC Flow Logs from S3 with the Filebeat AWS module.

My whole stack is running version 7.6.2.

Thanks!! Could you copy your full Metricbeat log here please? Your config looks great; I'd like to check the full log to see if the aws module is enabled, etc.

This is what /var/log/messages shows when I restarted the service:

Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[service]#011service/service.go:53#011Received sigterm/sigint, stopping
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011INFO#011cfgfile/reload.go:201#011Dynamic config reloader stopped
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011INFO#011[reload]#011cfgfile/list.go:118#011Stopping 3 runners ...
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[reload]#011cfgfile/list.go:129#011Stopping runner: system [metricsets=1]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:162#011client: closing acker
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:173#011client: done closing acker
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:176#011client: cancelled 0 events
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:147#011client: wait for acker to finish
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:149#011client: acker shut down
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=uptime, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:148#011Stopped Wrapper[name=system, len(metricSetWrappers)=1]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[reload]#011cfgfile/list.go:131#011Stopped runner: system [metricsets=1]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[reload]#011cfgfile/list.go:129#011Stopping runner: system [metricsets=7]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:162#011client: closing acker
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:173#011client: done closing acker
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:176#011client: cancelled 0 events
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:147#011client: wait for acker to finish
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:149#011client: acker shut down
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=cpu, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[reload]#011cfgfile/list.go:129#011Stopping runner: system [metricsets=2]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:162#011client: closing acker
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:173#011client: done closing acker
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:176#011client: cancelled 0 events
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:147#011client: wait for acker to finish
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[publisher]#011pipeline/client.go:149#011client: acker shut down
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=filesystem, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=process, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=socket_summary, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=process_summary, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=load, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=memory, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=network, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:148#011Stopped Wrapper[name=system, len(metricSetWrappers)=7]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[reload]#011cfgfile/list.go:131#011Stopped runner: system [metricsets=7]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=fsstat, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:148#011Stopped Wrapper[name=system, len(metricSetWrappers)=2]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[reload]#011cfgfile/list.go:131#011Stopped runner: system [metricsets=2]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.095-0300#011INFO#011[monitoring]#011log/log.go:153#011Total non-zero metrics#011{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":210,"time":{"ms":215}},"total":{"ticks":420,"time":{"ms":434},"value":420},"user":{"ticks":210,"time":{"ms":219}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"abb31563-06d9-46f7-9264-52fefa965640","uptime":{"ms":31101}},"memstats":{"gc_next":15844512,"memory_alloc":8696928,"memory_total":59818824,"rss":50475008},"runtime":{"goroutines":16}},"libbeat":{"config":{"module":{"running":0},"reloads":1,"scans":3},"output":{"events":{"acked":36,"batches":2,"total":36},"read":{"bytes":5899},"type":"elasticsearch","write":{"bytes":55779}},"pipeline":{"clients":0,"events":{"active":17,"published":53,"retry":20,"total":53},"queue":{"acked":36}}},"system":{"cpu":{"cores":2},"load":{"1":0.06,"15":0.05,"5":0.07,"norm":{"1":0.03,"15":0.025,"5":0.035}}}}}}
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.095-0300#011INFO#011[monitoring]#011log/log.go:154#011Uptime: 31.103070247s
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.095-0300#011INFO#011[monitoring]#011log/log.go:131#011Stopping metrics logging.
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.096-0300#011INFO#011instance/beat.go:445#011metricbeat stopped.
Apr 29 15:19:11 ip-10-0-0-116 auditbeat: 2020-04-29T15:19:11.657-0300#011WARN#011[process]#011process/process.go:274#011failed to hash executable /usr/share/metricbeat/bin/metricbeat for PID 17686: failed to hash file /usr/share/metricbeat/bin/metricbeat: hasher: file size 126513568 exceeds max file size
Apr 29 15:19:31 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:31.211-0300#011DEBUG#011[cfgfile]#011cfgfile/reload.go:205#011Scan for new config files

After that, it started collecting events.

One thing that seems strange to me is that /var/log/metricbeat/metricbeat hasn't shown any log entries since Apr 22.
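
If file logging is wanted back under /var/log/metricbeat, the standard Beats logging options can be set explicitly in metricbeat.yml; a minimal sketch (the values shown mirror the usual package defaults, not anything specific to this setup):

logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/metricbeat
  name: metricbeat
  keepfiles: 7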

Hmmm, looking at the log file, only the system module is being stopped:

Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=filesystem, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=process, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.093-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=socket_summary, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=process_summary, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=load, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=memory, host=]
Apr 29 15:19:11 ip-10-0-0-116 metricbeat: 2020-04-29T15:19:11.094-0300#011DEBUG#011[module]#011module/wrapper.go:207#011Stopped metricSetWrapper[module=system, name=network, host=]

This is what I see in the debug log when trying to collect billing and usage:

2020-04-30T14:33:37.982-0600	INFO	instance/beat.go:439	metricbeat start running.
2020-04-30T14:33:37.982-0600	INFO	[monitoring]	log/log.go:118	Starting metrics logging every 30s
2020-04-30T14:33:37.982-0600	DEBUG	[module]	module/wrapper.go:120	Starting Wrapper[name=aws, len(metricSetWrappers)=1]
2020-04-30T14:33:37.982-0600	DEBUG	[module]	module/wrapper.go:120	Starting Wrapper[name=aws, len(metricSetWrappers)=1]
2020-04-30T14:33:37.983-0600	DEBUG	[module]	module/wrapper.go:174	aws/usage will start after 2.252601569s
2020-04-30T14:33:37.983-0600	DEBUG	[module]	module/wrapper.go:174	aws/billing will start after 600.991329ms
2020-04-30T14:33:38.588-0600	DEBUG	[module]	module/wrapper.go:182	Starting metricSetWrapper[module=aws, name=billing, host=]

Here is my metricbeat.yml file:

#==========================  Modules configuration ============================

#metricbeat.config.modules:
  # Glob pattern for configuration loading
  #path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  #reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

logging.level: debug
metricbeat.modules:
- module: aws
  metricsets:
    - usage
  credential_profile_name: elastic-beats
  period: 1m
- module: aws
  metricsets:
    - billing
  credential_profile_name: elastic-beats
  period: 12h

Could you check whether the metricbeat.config.modules part is commented out in your metricbeat.yml file, please?

#metricbeat.config.modules:
  # Glob pattern for configuration loading
  #path: ${path.config}/modules.d/*.yml
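
For context: modules can be loaded either directly in metricbeat.yml under metricbeat.modules (as in the config above) or from modules.d/ when metricbeat.config.modules is enabled. A minimal sketch of the modules.d route, assuming the standard package layout:

metricbeat.config.modules:
  # Load every enabled module file from modules.d/
  path: ${path.config}/modules.d/*.yml
  # Reloading is optional; leave it off unless configs change at runtime
  reload.enabled: false

With that in place, metricbeat modules enable aws turns on modules.d/aws.yml, and metricbeat modules list shows which modules are currently enabled.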

It seems my AWS module isn't loading:

May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.042-0300#011DEBUG#011[module]#011module/wrapper.go:120#011Starting Wrapper[name=system, len(metricSetWrappers)=2]
May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.042-0300#011DEBUG#011[reload]#011cfgfile/list.go:101#011Starting runner: system [metricsets=1]
May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.043-0300#011DEBUG#011[module]#011module/wrapper.go:120#011Starting Wrapper[name=system, len(metricSetWrappers)=1]
May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.043-0300#011DEBUG#011[reload]#011cfgfile/list.go:101#011Starting runner: system [metricsets=7]
May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.043-0300#011DEBUG#011[module]#011module/wrapper.go:120#011Starting Wrapper[name=system, len(metricSetWrappers)=7]
May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.043-0300#011DEBUG#011[module]#011module/wrapper.go:182#011Starting metricSetWrapper[module=system, name=filesystem, host=]
May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.044-0300#011DEBUG#011[module]#011module/wrapper.go:182#011Starting metricSetWrapper[module=system, name=fsstat, host=]
May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.044-0300#011DEBUG#011[module]#011module/wrapper.go:182#011Starting metricSetWrapper[module=system, name=uptime, host=]
May  1 18:58:19 ip-10-0-0-116 metricbeat: 2020-05-01T18:58:19.044-0300#011DEBUG#011[module]#011module/wrapper.go:182#011Starting metricSetWrapper[module=system, name=cpu, host=]

If I comment out metricbeat.config.modules in my metricbeat.yml, it doesn't even load the system module. If I uncomment it, I get the log lines above.

I believe it's some problem with my package (I'm using CentOS with Metricbeat installed via yum). I'll try to reinstall it.
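
For reference, a minimal sketch of that on CentOS, plus two built-in checks that are handy after reinstalling (the package name and subcommands are the standard ones for the Elastic yum repository):

# Reinstall the package
sudo yum reinstall metricbeat

# Verify the configuration parses and that the Elasticsearch output is reachable
sudo metricbeat test config
sudo metricbeat test output

# Restart the service
sudo systemctl restart metricbeat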

Well, I reinstalled Metricbeat and started it with the following configuration:

- module: aws
  period: 12h
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    - billing

- module: aws
  period: 1m
  access_key_id: 'KEY_ID'
  secret_access_key: 'KEY_SECRET'
  metricsets:
    #- elb
    - usage

Now the AWS module is loading:

2020-05-01T19:50:45.454-0300#011DEBUG#011[module]#011module/wrapper.go:182#011Starting metricSetWrapper[module=aws, name=billing, host=]
2020-05-01T19:50:47.225-0300#011DEBUG#011[module]#011module/wrapper.go:182#011Starting metricSetWrapper[module=aws, name=usage, host=]

I'll let it run for a few days with this configuration to test the data collection and, if everything goes fine, I'll enable more metricsets.

Thanks

Well, even after the module loads, the data still isn't being collected.

Could not locate that index-pattern-field (id: aws.billing.metrics.EstimatedCharges.max)

The same error persists.
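
One thing worth checking independently of Metricbeat: as far as I can tell, the 7.x aws billing metricset reads the EstimatedCharges metric from the AWS/Billing CloudWatch namespace, and AWS only publishes that metric in us-east-1, and only when billing alerts are enabled for the account. A quick check with the AWS CLI (assuming it is installed with the same credentials) would be:

aws cloudwatch list-metrics --namespace AWS/Billing --region us-east-1

If this returns an empty metric list, the billing metricset has nothing to collect regardless of the Metricbeat configuration.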
