Metricbeat AWS module not sending data

Good day,

I'm currently trying to collect AWS metrics with the Metricbeat AWS module. Although the status of Metricbeat looks fine, Elasticsearch doesn't seem to receive any data.

my metricbeat.yml:

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~


metricbeat.modules:
- module: aws
  period: 300s
  metricsets:
    - ec2
  access_key_id: '<access-key-id>'
  secret_access_key: '<secret-access-key>'

- module: aws
  period: 300s
  metricsets:
    - cloudwatch
  metrics:
    - namespace: AWS/EC2
      resource_type: ec2:instance
  credential_profile_name: <my-profile-name>
  access_key_id: '<access-key-id>'
  secret_access_key: '<secret-access-key>'

- module: aws
  period: 24h
  metricsets:
    - billing
  access_key_id: '<access-key-id>'
  secret_access_key: '<secret-access-key>'
  cost_explorer_config:
    group_by_dimension_keys:
      - "AZ"
      - "INSTANCE_TYPE"
      - "SERVICE"
    group_by_tag_keys:
      - "aws:createdBy"

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: elastic
  password: <my-elastic-pw>
  ssl.certificate_authorities: ["<path-to-ca.crt>"]

setup.kibana:
    host: "http://localhost:5601"

The output from sudo metricbeat setup -e -d "*":

2021-01-05T12:48:24.063+0100    INFO    instance/beat.go:645    Home path: [/usr/share/metricbeat] Config path: [/etc/metricbeat] Data path: [/var/lib/metricbeat] Logs path: [/var/log/metricbeat]
2021-01-05T12:48:24.063+0100    INFO    instance/beat.go:653    Beat ID: 4d5b765b-26cb-4050-bed8-416aa5f320e2
2021-01-05T12:48:24.064+0100    INFO    [beat]  instance/beat.go:981    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/metricbeat", "data": "/var/lib/metricbeat", "home": "/usr/share/metricbeat", "logs": "/var/log/metricbeat"}, "type": "metricbeat", "uuid": "4d5b765b-26cb-4050-bed8-416aa5f320e2"}}}
2021-01-05T12:48:24.064+0100    INFO    [beat]  instance/beat.go:990    Build info      {"system_info": {"build": {"commit": "1428d58cf2ed945441fb2ed03961cafa9e4ad3eb", "libbeat": "7.10.0", "time": "2020-11-09T20:08:47.000Z", "version": "7.10.0"}}}
2021-01-05T12:48:24.064+0100    INFO    [beat]  instance/beat.go:993    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.14.7"}}}
2021-01-05T12:48:24.065+0100    INFO    [beat]  instance/beat.go:997    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-11-23T13:52:21+01:00","containerized":false,"name":"<host>","ip":["<ip>","::<more ip>","<ip>","<IPv6>"],"kernel_version":"5.4.0-1029-aws","mac":["<mac>"],"os":{"family":"debian","platform":"ubuntu","name":"Ubuntu","version":"18.04.5 LTS (Bionic Beaver)","major":18,"minor":4,"patch":5,"codename":"bionic"},"timezone":"CET","timezone_offset_sec":3600,"id":"ec2cdb77e116a0d46764c463497dcc2f"}}}
2021-01-05T12:48:24.065+0100    INFO    [beat]  instance/beat.go:1026   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/etc/metricbeat", "exe": "/usr/share/metricbeat/bin/metricbeat", "name": "metricbeat", "pid": 12929, "ppid": 12928, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2021-01-05T12:48:23.160+0100"}}}
2021-01-05T12:48:24.066+0100    INFO    instance/beat.go:299    Setup Beat: metricbeat; Version: 7.10.0
2021-01-05T12:48:24.066+0100    INFO    [index-management]      idxmgmt/std.go:184      Set output.elasticsearch.index to 'metricbeat-7.10.0' as ILM is enabled.
2021-01-05T12:48:24.066+0100    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:93     add_cloud_metadata: hosting provider type detected as aws, metadata={"account":{"id":"<account>"},"availability_zone":"eu-central-1a","image":{"id":"ami-<ami>"},"instance":{"id":"i-<instance>"},"machine":{"type":<type>"},"provider":"aws","region":"<region>"}
2021-01-05T12:48:24.066+0100    INFO    eslegclient/connection.go:99    elasticsearch url: https://localhost:9200
2021-01-05T12:48:24.067+0100    INFO    [publisher]     pipeline/module.go:113  Beat name: <....>
2021-01-05T12:48:24.081+0100    INFO    eslegclient/connection.go:99    elasticsearch url: https://localhost:9200
2021-01-05T12:48:24.170+0100    INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.10.0
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.

2021-01-05T12:48:24.301+0100    INFO    [index-management]      idxmgmt/std.go:261      Auto ILM enable success.
2021-01-05T12:48:24.309+0100    INFO    [index-management.ilm]  ilm/std.go:139  do not generate ilm policy: exists=true, overwrite=false
2021-01-05T12:48:24.309+0100    INFO    [index-management]      idxmgmt/std.go:274      ILM policy successfully loaded.
2021-01-05T12:48:24.309+0100    INFO    [index-management]      idxmgmt/std.go:407      Set setup.template.name to '{metricbeat-7.10.0 {now/d}-000001}' as ILM is enabled.
2021-01-05T12:48:24.309+0100    INFO    [index-management]      idxmgmt/std.go:412      Set setup.template.pattern to 'metricbeat-7.10.0-*' as ILM is enabled.
2021-01-05T12:48:24.309+0100    INFO    [index-management]      idxmgmt/std.go:446      Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.10.0 {now/d}-000001} as ILM is enabled.
2021-01-05T12:48:24.309+0100    INFO    [index-management]      idxmgmt/std.go:450      Set settings.index.lifecycle.name in template to {metricbeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2021-01-05T12:48:24.319+0100    INFO    template/load.go:183    Existing template will be overwritten, as overwrite is enabled.
2021-01-05T12:48:24.600+0100    INFO    template/load.go:117    Try loading template metricbeat-7.10.0 to Elasticsearch
2021-01-05T12:48:25.264+0100    INFO    template/load.go:109    template with name 'metricbeat-7.10.0' loaded.
2021-01-05T12:48:25.264+0100    INFO    [index-management]      idxmgmt/std.go:298      Loaded index template.
2021-01-05T12:48:25.272+0100    INFO    [index-management]      idxmgmt/std.go:309      Write alias successfully generated.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2021-01-05T12:48:25.273+0100    INFO    kibana/client.go:119    Kibana url: http://localhost:5601
2021-01-05T12:48:25.708+0100    INFO    kibana/client.go:119    Kibana url: http://localhost:5601
2021-01-05T12:50:06.628+0100    INFO    instance/beat.go:815    Kibana dashboards successfully loaded.
Loaded dashboards

The output from sudo service metricbeat status:

● metricbeat.service - Metricbeat is a lightweight shipper for metrics.
   Loaded: loaded (/lib/systemd/system/metricbeat.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2021-01-05 12:50:37 CET; 3s ago
     Docs: https://www.elastic.co/products/beats/metricbeat
 Main PID: 13015 (metricbeat)
    Tasks: 8 (limit: 4915)
   CGroup: /system.slice/metricbeat.service
           └─13015 /usr/share/metricbeat/bin/metricbeat --environment systemd -c /etc/metricbeat/metricbeat.yml --path.home /usr/share/metricbeat --path.config /etc/metricbeat --path.data /var/lib/metricbeat --path.logs /var/log/metricbeat

Jan 05 12:50:37 <host> metricbeat[13015]: 2021-01-05T12:50:37.177+0100        INFO        [beat]        instance/beat.go:997        Host info        {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-11-23T13:52:21+01:00","containerized":false,"n
Jan 05 12:50:37 <host> metricbeat[13015]: 2021-01-05T12:50:37.178+0100        INFO        [beat]        instance/beat.go:1026        Process info        {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read
Jan 05 12:50:37 <host> metricbeat[13015]: 2021-01-05T12:50:37.178+0100        INFO        instance/beat.go:299        Setup Beat: metricbeat; Version: 7.10.0
Jan 05 12:50:37 <host> metricbeat[13015]: 2021-01-05T12:50:37.178+0100        INFO        [index-management]        idxmgmt/std.go:184        Set output.elasticsearch.index to 'metricbeat-7.10.0' as ILM is enabled.
Jan 05 12:50:37 <host> metricbeat[13015]: 2021-01-05T12:50:37.179+0100        INFO        [add_cloud_metadata]        add_cloud_metadata/add_cloud_metadata.go:93        add_cloud_metadata: hosting provider type detected as aws, metadata={"account":{"id":"<id>
Jan 05 12:50:37 <host> metricbeat[13015]: 2021-01-05T12:50:37.179+0100        INFO        eslegclient/connection.go:99        elasticsearch url: https://localhost:9200
Jan 05 12:50:37 <host> metricbeat[13015]: 2021-01-05T12:50:37.179+0100        INFO        [publisher]        pipeline/module.go:113        Beat name: <....>
Jan 05 12:50:37 <host> metricbeat[13015]: 2021-01-05T12:50:37.198+0100        WARN        [aws.ec2]        aws/aws.go:99        extra charges on AWS API requests will be generated by this metricset
Jan 05 12:50:38 <host> metricbeat[13015]: 2021-01-05T12:50:38.414+0100        WARN        [aws.cloudwatch]        aws/aws.go:99        extra charges on AWS API requests will be generated by this metricset
Jan 05 12:50:39 <host> metricbeat[13015]: 2021-01-05T12:50:39.572+0100        WARN        [aws.billing]        aws/aws.go:99        extra charges on AWS API requests will be generated by this metricset

But Kibana tells me: "No data has been received from this module yet".

And there is no matching index created.

I'm running out of ideas here and would appreciate any help.

Edit:

It seems sudo metricbeat -e -d "*" was the command I was looking for. It tells me:

2021-01-05T13:59:51.818+0100    INFO    instance/beat.go:645    Home path: [/usr/share/metricbeat] Config path: [/etc/metricbeat] Data path: [/var/lib/metricbeat] Logs path: [/var/log/metricbeat]
2021-01-05T13:59:51.818+0100    DEBUG   [beat]  instance/beat.go:697    Beat metadata path: /var/lib/metricbeat/meta.json
2021-01-05T13:59:51.818+0100    INFO    instance/beat.go:653    Beat ID: 4d5b765b-26cb-4050-bed8-416aa5f320e2
2021-01-05T13:59:51.818+0100    DEBUG   [docker]        docker/client.go:48     Docker client will negotiate the API version on the first request.
2021-01-05T13:59:51.819+0100    DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:126     add_cloud_metadata: starting to fetch metadata, timeout=3s
2021-01-05T13:59:51.819+0100    DEBUG   [add_docker_metadata]   add_docker_metadata/add_docker_metadata.go:87   add_docker_metadata: docker environment not detected: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
2021-01-05T13:59:51.821+0100    DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:162     add_cloud_metadata: received disposition for gcp after 1.915709ms. result=[provider:gcp, error=failed with http status code 404, metadata={}]
2021-01-05T13:59:51.821+0100    DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:162     add_cloud_metadata: received disposition for digitalocean after 2.281275ms. result=[provider:digitalocean, error=failed with http status code 404, metadata={}]
2021-01-05T13:59:51.822+0100    DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:162     add_cloud_metadata: received disposition for aws after 2.675994ms. result=[provider:aws, error=<nil>, metadata={"account":{"id":"<id>"},"availability_zone":"eu-central-1a","image":{"id":"ami-<id>"},"instance":{"id":"i-<id>"},"machine":{"type":"<type>"},"provider":"aws","region":"eu-central-1"}]
2021-01-05T13:59:51.822+0100    DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:129     add_cloud_metadata: fetchMetadata ran for 2.812604ms
2021-01-05T13:59:51.822+0100    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:93     add_cloud_metadata: hosting provider type detected as aws, metadata={"account":{"id":"<id>"},"availability_zone":"eu-central-1a","image":{"id":"ami-<id>"},"instance":{"id":"i-<id>"},"machine":{"type":"<type>"},"provider":"aws","region":"eu-central-1"}
2021-01-05T13:59:51.822+0100    DEBUG   [processors]    processors/processor.go:120     Generated new processors: add_cloud_metadata={"account":{"id":"<id>"},"availability_zone":"eu-central-1a","image":{"id":"ami-<id>"},"instance":{"id":"i-<id>"},"machine":{"type":"<type>"},"provider":"aws","region":"eu-central-1"}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]]
2021-01-05T13:59:51.822+0100    INFO    instance/beat.go:392    metricbeat stopped.
2021-01-05T13:59:51.822+0100    ERROR   instance/beat.go:956    Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).

Filebeat is installed but stopped. Just to be sure, I added the following to my metricbeat.yml:

path.home: /usr/share/metricbeat
path.config: /etc/metricbeat
path.data: /var/lib/metricbeat
path.logs: /var/log/

But I still get the same error:

Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).

Hey!

This means that there is already a Metricbeat process running; I guess it is the systemd service one. You will need to stop the Metricbeat service in order to run it manually (sudo service metricbeat stop).
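For example (a sketch, assuming the default systemd service name and that the binary is on the PATH):

```shell
# Stop the systemd-managed instance so it releases the data-path lock,
# then run Metricbeat in the foreground with debug logging
sudo service metricbeat stop
sudo metricbeat -e -d "*"
```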

C.

@ChrsMark thank you for your support.

Although I have stopped and restarted Metricbeat several times, the error remains the same.

The only other *beat on this machine should be Filebeat, which is currently stopped.

So when I stop Metricbeat manually, I would expect that no other process is claiming the data path.

But when I start Metricbeat with sudo service metricbeat start and look at metricbeat -e -d "*", I get:

2021-01-07T14:13:38.072+0100    ERROR   instance/beat.go:956    Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
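If the lock persists even though no Beat should be running, one way to dig further is to look for leftover processes or a stale lock file (a sketch; the lock-file path assumes the default data path and 7.x lock-file naming):

```shell
# List any Beat processes still running
ps aux | grep -E '(metric|file)beat' | grep -v grep

# Beats keep a lock file in path.data; check whether one is left over
sudo ls -l /var/lib/metricbeat/metricbeat.lock

# If no running process owns it, it is stale and can be removed:
# sudo rm /var/lib/metricbeat/metricbeat.lock
```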

Although both beats have their own data path defined:

filebeat.yml:

filebeat.inputs:
- type: log

  paths:
    - "<path-to->catalina.out"

- type: log
  paths:
    - "<path-to->access.log"
    - "<path-to->error.log"

  multiline.type: pattern
  multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
  multiline.negate: false
  multiline.match: after

- type: filestream
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:
  host: "localhost:5601"

output.logstash:
  hosts: ["localhost:5144"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~


path.home: /usr/share/filebeat
path.config: /etc/filebeat
path.data: /var/lib/filebeat
path.logs: /var/log/

metricbeat.yml:

path.home: /usr/share/metricbeat
path.config: /etc/metricbeat
path.data: /var/lib/metricbeat
path.logs: /var/log/

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~


metricbeat.modules:
- module: aws
  period: 300s
  metricsets:
    - ec2
  access_key_id: '<pw>'
  secret_access_key: '<pw>'

- module: aws
  period: 300s
  metricsets:
    - cloudwatch
  metrics:
    - namespace: AWS/EC2
      resource_type: ec2:instance
  credential_profile_name: <profile-name>
  access_key_id: '<pw>'
  secret_access_key: '<pw>'

- module: aws
  period: 24h
  metricsets:
    - billing
  access_key_id: '<pw>'
  secret_access_key: '<pw>'
  cost_explorer_config:
    group_by_dimension_keys:
      - "AZ"
      - "INSTANCE_TYPE"
      - "SERVICE"
    group_by_tag_keys:
      - "aws:createdBy"

output.elasticsearch:
  hosts: ["https://localhost:9200"]

The only solution I can think of at the moment is to uninstall Filebeat, but there must be a better solution to the problem.

When you run sudo service metricbeat start, it starts a Metricbeat process. When you run ./metricbeat -e -d "*", you start another Metricbeat process; you don't just follow the logs of the first one. Is this maybe what confused you?

That makes sense. It eluded me that ./metricbeat -e -d "*" (of course) starts a new Metricbeat process.

So ./metricbeat -e -d "*" now gives the following output:

...
2021-01-07T15:56:22.777+0100    INFO    instance/beat.go:299    Setup Beat: metricbeat; Version: 7.10.0
2021-01-07T15:56:22.777+0100    DEBUG   [beat]  instance/beat.go:325    Initializing output plugins
2021-01-07T15:56:22.777+0100    INFO    [index-management]      idxmgmt/std.go:184      Set output.elasticsearch.index to 'metricbeat-7.10.0' as ILM is enabled.
2021-01-07T15:56:22.778+0100    DEBUG   [tls]   tlscommon/tls.go:172    Successfully loaded CA certificate: <path-to>-ca.crt
2021-01-07T15:56:22.778+0100    INFO    eslegclient/connection.go:99    elasticsearch url: https://localhost:9200
2021-01-07T15:56:22.778+0100    DEBUG   [publisher]     pipeline/consumer.go:148        start pipeline event consumer
2021-01-07T15:56:22.778+0100    INFO    [publisher]     pipeline/module.go:113  Beat name: abby.yatta.de
2021-01-07T15:56:22.791+0100    DEBUG   [modules]       beater/metricbeat.go:151        Available modules and metricsets: Register [...]
2021-01-07T15:56:22.791+0100    DEBUG   [get_aws_credentials]   aws/credentials.go:40   Using access_key_id, secret_access_key and/or session_token for AWS credential
2021-01-07T15:56:22.792+0100    DEBUG   [aws.ec2]       aws/aws.go:97   Metricset level config for period: 5m0s
2021-01-07T15:56:22.792+0100    DEBUG   [aws.ec2]       aws/aws.go:98   Metricset level config for tags filter: []
2021-01-07T15:56:22.792+0100    WARN    [aws.ec2]       aws/aws.go:99   extra charges on AWS API requests will be generated by this metricset
2021-01-07T15:56:23.228+0100    DEBUG   [aws.ec2]       aws/aws.go:121  AWS Credentials belong to account ID: <id>
2021-01-07T15:56:23.604+0100    DEBUG   [aws.ec2]       aws/aws.go:176  AWS Credentials belong to account ID: <id>
2021-01-07T15:56:24.007+0100    DEBUG   [aws.ec2]       aws/aws.go:139  Metricset level config for regions: [...]
2021-01-07T15:56:24.008+0100    DEBUG   [get_aws_credentials]   aws/credentials.go:40   Using access_key_id, secret_access_key and/or session_token for AWS credential
2021-01-07T15:56:24.008+0100    DEBUG   [aws.cloudwatch]        aws/aws.go:97   Metricset level config for period: 5m0s
2021-01-07T15:56:24.008+0100    DEBUG   [aws.cloudwatch]        aws/aws.go:98   Metricset level config for tags filter: []
2021-01-07T15:56:24.008+0100    WARN    [aws.cloudwatch]        aws/aws.go:99   extra charges on AWS API requests will be generated by this metricset
2021-01-07T15:56:24.381+0100    DEBUG   [aws.cloudwatch]        aws/aws.go:121  AWS Credentials belong to account ID: <id>
2021-01-07T15:56:24.758+0100    DEBUG   [aws.cloudwatch]        aws/aws.go:176  AWS Credentials belong to account ID: <id>
2021-01-07T15:56:25.157+0100    DEBUG   [aws.cloudwatch]        aws/aws.go:139  Metricset level config for regions: [...]
2021-01-07T15:56:25.157+0100    DEBUG   [cloudwatch]    cloudwatch/cloudwatch.go:127    cloudwatch config = {[{AWS/EC2 [] []  ec2:instance [] []}]}
2021-01-07T15:56:25.157+0100    DEBUG   [get_aws_credentials]   aws/credentials.go:40   Using access_key_id, secret_access_key and/or session_token for AWS credential
2021-01-07T15:56:25.157+0100    DEBUG   [aws.billing]   aws/aws.go:97   Metricset level config for period: 24h0m0s
2021-01-07T15:56:25.157+0100    DEBUG   [aws.billing]   aws/aws.go:98   Metricset level config for tags filter: []
2021-01-07T15:56:25.157+0100    WARN    [aws.billing]   aws/aws.go:99   extra charges on AWS API requests will be generated by this metricset
2021-01-07T15:56:25.528+0100    DEBUG   [aws.billing]   aws/aws.go:121  AWS Credentials belong to account ID: <id>
2021-01-07T15:56:25.905+0100    DEBUG   [aws.billing]   aws/aws.go:176  AWS Credentials belong to account ID: <id>
2021-01-07T15:56:26.306+0100    DEBUG   [aws.billing]   aws/aws.go:139  Metricset level config for regions: [eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2 ap-northeast-1 sa-east-1 ca-central-1 ap-southeast-1 ap-southeast-2 eu-central-1 us-east-1 us-east-2 us-west-1 us-west-2]
2021-01-07T15:56:26.306+0100    DEBUG   [billing]       billing/billing.go:91   cost explorer config = {{[AZ INSTANCE_TYPE SERVICE] [aws:createdBy]}}
2021-01-07T15:56:26.306+0100    INFO    instance/beat.go:455    metricbeat start running.
2021-01-07T15:56:26.306+0100    DEBUG   [module]        module/wrapper.go:127   Starting Wrapper[name=aws, len(metricSetWrappers)=1]
2021-01-07T15:56:26.306+0100    INFO    [monitoring]    log/log.go:118  Starting metrics logging every 30s
2021-01-07T15:56:26.306+0100    DEBUG   [module]        module/wrapper.go:127   Starting Wrapper[name=aws, len(metricSetWrappers)=1]
2021-01-07T15:56:26.306+0100    DEBUG   [module]        module/wrapper.go:127   Starting Wrapper[name=aws, len(metricSetWrappers)=1]
2021-01-07T15:56:26.307+0100    DEBUG   [cfgfile]       cfgfile/reload.go:132   Checking module configs from: /etc/metricbeat/modules.d/*.yml
2021-01-07T15:56:26.306+0100    DEBUG   [module]        module/wrapper.go:181   aws/ec2 will start after 9.512790174s
2021-01-07T15:56:26.307+0100    DEBUG   [module]        module/wrapper.go:181   aws/cloudwatch will start after 7.01882229s
2021-01-07T15:56:26.307+0100    DEBUG   [module]        module/wrapper.go:181   aws/billing will start after 6.509615922s
2021-01-07T15:56:26.307+0100    DEBUG   [cfgfile]       cfgfile/cfgfile.go:193  Load config from file: /etc/metricbeat/modules.d/aws.yml
2021-01-07T15:56:26.307+0100    DEBUG   [cfgfile]       cfgfile/cfgfile.go:193  Load config from file: /etc/metricbeat/modules.d/system.yml
2021-01-07T15:56:26.307+0100    DEBUG   [cfgfile]       cfgfile/reload.go:146   Number of module configs found: 9
2021-01-07T15:56:26.308+0100    DEBUG   [get_aws_credentials]   aws/credentials.go:74   Using shared credential profile for AWS credential
2021-01-07T15:56:26.312+0100    INFO    [monitoring]    log/log.go:153  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000}},"id":"user.slice"},"cpuacct":{"id":"user.slice","total":{"ns":14732106278}},"memory":{"id":"user.slice","mem":{"limit":{"bytes":9223372036854771712},"usage":{"bytes":231477248}}}},"cpu":{"system":{"ticks":30,"time":{"ms":34}},"total":{"ticks":270,"time":{"ms":276},"value":270},"user":{"ticks":240,"time":{"ms":242}}},"handles":{"limit":{"hard":1048576,"soft":1024},"open":17},"info":{"ephemeral_id":"aaec70ef-d934-4054-851a-4ad733f2757f","uptime":{"ms":3612}},"memstats":{"gc_next":18309104,"memory_alloc":13727312,"memory_total":41730768,"rss":85676032},"runtime":{"goroutines":49}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":3,"events":{"active":0}}},"system":{"cpu":{"cores":4},"load":{"1":0.15,"15":0.12,"5":0.17,"norm":{"1":0.0375,"15":0.03,"5":0.0425}}}}}}
2021-01-07T15:56:26.312+0100    INFO    [monitoring]    log/log.go:154  Uptime: 3.613754169s
2021-01-07T15:56:26.312+0100    INFO    [monitoring]    log/log.go:131  Stopping metrics logging.
2021-01-07T15:56:26.312+0100    INFO    instance/beat.go:461    metricbeat stopped.
2021-01-07T15:56:26.312+0100    ERROR   instance/beat.go:956    Exiting: 6 errors: metricset 'aws/elb' not found; metricset 'aws/natgateway' not found; error creating aws metricset: failed to retrieve aws credentials, please check AWS credential in config: EC2RoleRequestError: no EC2 instance role found
caused by: EC2MetadataError: failed to make Client request
caused by: <?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
 <head>
  <title>404 - Not Found</title>
 </head>
 <body>
  <h1>404 - Not Found</h1>
 </body>
</html>
; metricset 'aws/transitgateway' not found; metricset 'aws/usage' not found; metricset 'aws/vpn' not found
Exiting: 6 errors: metricset 'aws/elb' not found; metricset 'aws/natgateway' not found; error creating aws metricset: failed to retrieve aws credentials, please check AWS credential in config: EC2RoleRequestError: no EC2 instance role found
caused by: EC2MetadataError: failed to make Client request
caused by: <?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
 <head>
  <title>404 - Not Found</title>
 </head>
 <body>
  <h1>404 - Not Found</h1>
 </body>
</html>
; metricset 'aws/transitgateway' not found; metricset 'aws/usage' not found; metricset 'aws/vpn' not found

Most important, I think, is:

error creating aws metricset: failed to retrieve aws credentials, please check AWS credential in config: EC2RoleRequestError: no EC2 instance role found

My AWS Admin told me that my account should now have the following AWS permissions:

ec2:DescribeInstances
ec2:DescribeRegions
cloudwatch:GetMetricData
cloudwatch:ListMetrics
sts:GetCallerIdentity
iam:ListAccountAliases
tag:getResources
ce:GetCostAndUsage

Which should be the necessary permissions for the metricsets ec2, cloudwatch, and billing. Or am I mistaken?
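One way to sanity-check the credentials and permissions independently of Metricbeat is the AWS CLI, assuming it is installed and configured with the same keys (the region below is taken from the logs; adjust as needed):

```shell
# Should print the account ID and caller ARN if the keys are valid
# (exercises sts:GetCallerIdentity)
aws sts get-caller-identity

# Should list instance IDs if ec2:DescribeInstances is granted
aws ec2 describe-instances --region eu-central-1 \
  --query 'Reservations[].Instances[].InstanceId'
```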

Hi @Jurilz :wave: Looking at your config, the ec2 metricset part and the cloudwatch metricset part are basically doing the same thing, so you are actually trying to collect ec2 metrics twice with this config. Also, your config has a metricbeat.config.modules section to specify a path. That means the config Metricbeat will look at is modules.d/aws.yml. Did you run ./metricbeat modules enable aws? If so, what does modules.d/aws.yml look like?

To better isolate the problem, could you try with only the ec2 metricset section in the config, please? TIA!!

metricbeat.modules:
- module: aws
  period: 300s
  metricsets:
    - ec2
  access_key_id: '<pw>'
  secret_access_key: '<pw>'

Hi @Kaiyan_Sheng, thank you for your help.

Thank you also for the tip that the metricsets ec2 and cloudwatch are doing the same thing here. I was unsure about the difference between them, so I just put in both.

The aws module should be enabled. ./metricbeat modules enable aws gives me:
Module aws is already enabled

The modules.d/aws.yml was probably part of the problem, because I never changed it, so it stayed in its default state.

So I changed modules.d/aws.yml to:

- module: aws
  period: 300s
  metricsets:
    - ec2
  access_key_id: '<key-id>'
  secret_access_key: '<secret-key>'

I also followed your suggestion and changed metricbeat.yml to:

path.home: /usr/share/metricbeat
path.config: /etc/metricbeat
path.data: /var/lib/metricbeat
path.logs: /var/log/

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

metricbeat.modules:
- module: aws
  period: 300s
  metricsets:
    - ec2
  access_key_id: '<key>'
  secret_access_key: '<secret-key>'

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: elastic
  password: <pw>
  ssl.certificate_authorities: ["<path-to>-ca.crt"]

setup.kibana:
    host: "http://localhost:5601"

I realize that the definition of the aws module is duplicated at this point, so I should probably remove it from metricbeat.yml.

I still don't get any data from Metricbeat, but ./metricbeat -e -d "*" gives me never-ending output like:

2021-01-07T18:11:07.906+0100    DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://localhost:9200/_license?human=false  <nil>
2021-01-07T18:11:07.993+0100    DEBUG   [license]       licenser/check.go:31    Checking that license covers %sBasic
2021-01-07T18:11:07.993+0100    INFO    [license]       licenser/es_callback.go:51      Elasticsearch license: Basic
2021-01-07T18:11:07.993+0100    DEBUG   [esclientleg]   eslegclient/connection.go:290   ES Ping(url=https://localhost:9200)
2021-01-07T18:11:08.002+0100    DEBUG   [esclientleg]   eslegclient/connection.go:313   Ping status code: 200
2021-01-07T18:11:08.002+0100    INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.10.0
2021-01-07T18:11:08.002+0100    DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://localhost:9200/_xpack  <nil>
2021-01-07T18:11:08.087+0100    INFO    [index-management]      idxmgmt/std.go:261      Auto ILM enable success.
2021-01-07T18:11:08.087+0100    DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://localhost:9200/_ilm/policy/metricbeat  <nil>
2021-01-07T18:11:08.096+0100    INFO    [index-management.ilm]  ilm/std.go:139  do not generate ilm policy: exists=true, overwrite=false
2021-01-07T18:11:08.096+0100    INFO    [index-management]      idxmgmt/std.go:274      ILM policy successfully loaded.
2021-01-07T18:11:08.096+0100    INFO    [index-management]      idxmgmt/std.go:407      Set setup.template.name to '{metricbeat-7.10.0 {now/d}-000001}' as ILM is enabled.
2021-01-07T18:11:08.096+0100    INFO    [index-management]      idxmgmt/std.go:412      Set setup.template.pattern to 'metricbeat-7.10.0-*' as ILM is enabled.
2021-01-07T18:11:08.096+0100    INFO    [index-management]      idxmgmt/std.go:446      Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.10.0 {now/d}-000001} as ILM is enabled.
2021-01-07T18:11:08.097+0100    INFO    [index-management]      idxmgmt/std.go:450      Set settings.index.lifecycle.name in template to {metricbeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2021-01-07T18:11:08.097+0100    DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://localhost:9200/_cat/templates/metricbeat-7.10.0  <nil>
2021-01-07T18:11:08.106+0100    INFO    template/load.go:97     Template metricbeat-7.10.0 already exists and will not be overwritten.
2021-01-07T18:11:08.106+0100    INFO    [index-management]      idxmgmt/std.go:298      Loaded index template.
2021-01-07T18:11:08.106+0100    DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://localhost:9200/_alias/metricbeat-7.10.0  <nil>
2021-01-07T18:11:08.115+0100    INFO    [index-management]      idxmgmt/std.go:309      Write alias successfully generated.
2021-01-07T18:11:08.115+0100    DEBUG   [esclientleg]   eslegclient/connection.go:364   GET https://localhost:9200/  <nil>
2021-01-07T18:11:08.124+0100    INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(elasticsearch(https://localhost:9200)) established
2021-01-07T18:11:08.170+0100    DEBUG   [elasticsearch] elasticsearch/client.go:230     PublishEvents: 21 events have been published to elasticsearch in 46.007978ms.
2021-01-07T18:11:08.171+0100    DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [0: 0, 21]
2021-01-07T18:11:08.171+0100    DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:21
2021-01-07T18:11:08.171+0100    DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack
2021-01-07T18:11:11.746+0100    DEBUG   [module]        module/wrapper.go:189   Starting metricSetWrapper[module=aws, name=ec2, host=]
2021-01-07T18:11:11.746+0100    DEBUG   [aws.ec2]       ec2/ec2.go:92   startTime = 2021-01-07 18:01:11.746275539 +0100 CET m=-591.274983020, endTime = 2021-01-07 18:11:11.746275539 +0100 CET m=+8.725016980

...

    "ppid": 1176,
    "pgid": 4635,
    "working_directory": "/var/lib/postgresql/10/main"
  },
  "event": {
    "duration": 47187242,
    "dataset": "system.process",
    "module": "system"
  },
  "metricset": {
    "name": "process",
    "period": 10000
  },
  "ecs": {
    "version": "1.6.0"
  },
  "agent": {
    "type": "metricbeat",
    "version": "7.10.0",
    "hostname": "<host>",
    "ephemeral_id": "<id>",
    "id": "<id>",
    "name": "<name>"
  }
}
2021-01-07T18:11:17.901+0100    DEBUG   [elasticsearch] elasticsearch/client.go:230     PublishEvents: 16 events have been published to elasticsearch in 46.068205ms.
2021-01-07T18:11:17.901+0100    DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [1: 0, 16]
2021-01-07T18:11:17.901+0100    DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:16
2021-01-07T18:11:17.901+0100    DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack

It just keeps sending (or trying to send) data, and the acknowledged event count keeps increasing in lines like 2021-01-07T18:11:08.171+0100 DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [0: 0, 21]

When I start metricbeat after that, the status is running, but elasticsearch and kibana still don't receive any data.

We've got the same issue going on right now. If we change the output to the local filesystem, we can see the data from AWS CloudWatch. Once we switch the output to Elasticsearch, it never receives anything. Elastic Cloud logs don't show any issue. Metricbeat logs don't show any issue either.
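For anyone who wants to reproduce the file-output test: a minimal sketch of the file output, assuming output.elasticsearch is commented out first (Metricbeat only allows one output at a time):

```yaml
# Debugging sketch: write events to local files instead of Elasticsearch.
# Disable output.elasticsearch before enabling this.
output.file:
  path: "/tmp/metricbeat"
  filename: metricbeat
```

If events show up in /tmp/metricbeat but not in Elasticsearch, the problem is on the output side rather than in the AWS module itself.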

version: 7.10.1

A quick update. There seems to have been a similarly described issue reported on GitHub for Metricbeat 7.9.2, but it was closed.

We disabled the AWS module and enabled the System module, and that was at least able to write into Elasticsearch, confirming that our output is configured correctly and working.

Another interesting note: with the AWS module enabled, CTRL+C doesn't kill Metricbeat and an OS-level process kill is needed. With the System module, however, CTRL+C works, and the Metricbeat logs routinely confirm publishing events to Elasticsearch.

Thank you! Since your metricbeat.yml already points to modules.d for its module configs, let's remove the metricbeat.modules section from metricbeat.yml:

- module: aws
  period: 300s
  metricsets:
    - ec2
  access_key_id: '<key>'
  secret_access_key: '<secret-key>'

(delete this part in metricbeat.yml)

Also, could you please add logging.level: debug to metricbeat.yml? That way we can see some debug messages; maybe it will help.
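As a side note, the modules.d workflow can also be managed from the CLI (a sketch, assuming a package install where metricbeat is on the PATH):

```shell
# Enable the AWS module config (renames modules.d/aws.yml.disabled to aws.yml)
sudo metricbeat modules enable aws
# Verify which modules are enabled
sudo metricbeat modules list
```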

Thanks @bplies! Are you using Elasticsearch on Elastic Cloud? I will try to reproduce this.

Thank you for the tip. I added logging.level: debug and removed the aws module from metricbeat.yml:

path.home: /usr/share/metricbeat
path.config: /etc/metricbeat
path.data: /var/lib/metricbeat
path.logs: /var/log/

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: elastic
  password: <pw>
  ssl.certificate_authorities: ["<path-to>-ca.crt"]

setup.kibana:
  host: "http://localhost:5601"

logging.level: debug

modules.d/aws.yml:

- module: aws
  period: 300s
  metricsets:
    - ec2
  access_key_id: '<key>'
  secret_access_key: '<secret-key>'

metricbeat -e -d "*" gives me the following output:

2021-01-08T13:02:40.082+0100    DEBUG   [cfgfile]       cfgfile/cfgfile.go:193  Load config from file: /etc/metricbeat/modules.d/aws.yml
2021-01-08T13:02:40.082+0100    DEBUG   [cfgfile]       cfgfile/cfgfile.go:193  Load config from file: /etc/metricbeat/modules.d/system.yml
2021-01-08T13:02:40.083+0100    DEBUG   [cfgfile]       cfgfile/reload.go:146   Number of module configs found: 4
2021-01-08T13:02:40.083+0100    DEBUG   [get_aws_credentials]   aws/credentials.go:40   Using access_key_id, secret_access_key and/or session_token for AWS credential
2021-01-08T13:02:40.083+0100    DEBUG   [aws.ec2]       aws/aws.go:97   Metricset level config for period: 5m0s
2021-01-08T13:02:40.083+0100    DEBUG   [aws.ec2]       aws/aws.go:98   Metricset level config for tags filter: []
2021-01-08T13:02:40.083+0100    WARN    [aws.ec2]       aws/aws.go:99   extra charges on AWS API requests will be generated by this metricset
2021-01-08T13:02:40.507+0100    DEBUG   [aws.ec2]       aws/aws.go:121  AWS Credentials belong to account ID: <id>
2021-01-08T13:02:40.890+0100    DEBUG   [aws.ec2]       aws/aws.go:176  AWS Credentials belong to account ID: <id>
2021-01-08T13:02:41.478+0100    DEBUG   [aws.ec2]       aws/aws.go:139  Metricset level config for regions: [...]
2021-01-08T13:02:41.479+0100    DEBUG   [system.process]        process/process.go:86   process cgroup data collection is enabled, using hostfs=''
2021-01-08T13:02:41.479+0100    INFO    filesystem/filesystem.go:57     Ignoring filesystem types: sysfs, tmpfs, bdev, proc, cgroup, cgroup2, cpuset, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, bpf, pipefs, ramfs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, mqueue, pstore, autofs
2021-01-08T13:02:41.479+0100    INFO    [system.fsstat] fsstat/fsstat.go:57     Ignoring filesystem types: %ssysfs, tmpfs, bdev, proc, cgroup, cgroup2, cpuset, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, bpf, pipefs, ramfs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, mqueue, pstore, autofs
2021-01-08T13:02:41.480+0100    INFO    cfgfile/reload.go:164   Config reloader started
2021-01-08T13:02:41.480+0100    DEBUG   [cfgfile]       cfgfile/reload.go:194   Scan for new config files
2021-01-08T13:02:41.480+0100    DEBUG   [cfgfile]       cfgfile/cfgfile.go:193  Load config from file: /etc/metricbeat/modules.d/aws.yml
2021-01-08T13:02:41.480+0100    DEBUG   [cfgfile]       cfgfile/cfgfile.go:193  Load config from file: /etc/metricbeat/modules.d/system.yml
2021-01-08T13:02:41.480+0100    DEBUG   [cfgfile]       cfgfile/reload.go:213   Number of module configs found: 4
2021-01-08T13:02:41.480+0100    DEBUG   [reload]        cfgfile/list.go:63      Starting reload procedure, current runners: 0
2021-01-08T13:02:41.481+0100    DEBUG   [reload]        cfgfile/list.go:81      Start list: 4, Stop list: 0
2021-01-08T13:02:41.481+0100    DEBUG   [get_aws_credentials]   aws/credentials.go:40   Using access_key_id, secret_access_key and/or session_token for AWS credential
2021-01-08T13:02:41.481+0100    DEBUG   [aws.ec2]       aws/aws.go:97   Metricset level config for period: 5m0s
2021-01-08T13:02:41.481+0100    DEBUG   [aws.ec2]       aws/aws.go:98   Metricset level config for tags filter: []
2021-01-08T13:02:41.481+0100    WARN    [aws.ec2]       aws/aws.go:99   extra charges on AWS API requests will be generated by this metricset
2021-01-08T13:02:41.853+0100    DEBUG   [aws.ec2]       aws/aws.go:121  AWS Credentials belong to account ID: <id>
2021-01-08T13:02:42.228+0100    DEBUG   [aws.ec2]       aws/aws.go:176  AWS Credentials belong to account ID: <id>
2021-01-08T13:02:42.635+0100    DEBUG   [aws.ec2]       aws/aws.go:139  Metricset level config for regions: [...]
2021-01-08T13:02:42.635+0100    DEBUG   [reload]        cfgfile/list.go:105     Starting runner: RunnerGroup{aws [metricsets=1]}
2021-01-08T13:02:42.636+0100    DEBUG   [module]        module/wrapper.go:127   Starting Wrapper[name=aws, len(metricSetWrappers)=1]
2021-01-08T13:02:42.636+0100    DEBUG   [module]        module/wrapper.go:189   Starting metricSetWrapper[module=aws, name=ec2, host=]
2021-01-08T13:02:42.637+0100    DEBUG   [system.process]        process/process.go:86   process cgroup data collection is enabled, using hostfs=''
2021-01-08T13:02:42.637+0100    DEBUG   [aws.ec2]       ec2/ec2.go:92   startTime = 2021-01-08 12:52:42.637116494 +0100 CET m=-597.345239644, endTime = 2021-01-08 13:02:42.637116494 +0100 CET m=+2.654760356
2021-01-08T13:02:42.638+0100    DEBUG   [reload]        cfgfile/list.go:105     Starting runner: RunnerGroup{system [metricsets=1], system [metricsets=1], system [metricsets=1], system [metricsets=1], system [metricsets=1], system [metricsets=1], system [metricsets=1]}
2021-01-08T13:02:42.639+0100    DEBUG   [module]        module/wrapper.go:127   Starting Wrapper[name=system, len(metricSetWrappers)=1]
...
2021-01-08T13:02:42.639+0100    DEBUG   [module]        module/wrapper.go:127   Starting Wrapper[name=system, len(metricSetWrappers)=1]
2021-01-08T13:02:42.639+0100    DEBUG   [module]        module/wrapper.go:189   Starting metricSetWrapper[module=system, name=load, host=]
...
2021-01-08T13:02:42.639+0100    DEBUG   [module]        module/wrapper.go:189   Starting metricSetWrapper[module=system, name=process_summary, host=]
2021-01-08T13:02:42.639+0100    INFO    filesystem/filesystem.go:57     Ignoring filesystem types: sysfs, tmpfs, bdev, proc, cgroup, cgroup2, cpuset, devtmpfs, configfs, debugfs, tracefs, securityfs, sockfs, bpf, pipefs, ramfs, hugetlbfs, devpts, ecryptfs, fuse, fusectl, mqueue, pstore, autofs
2021-01-08T13:02:42.640+0100    DEBUG   [processors]    processing/processors.go:203    Publish event: {...}
...
2021-01-08T13:02:42.650+0100    DEBUG   [module]        module/wrapper.go:189   Starting metricSetWrapper[module=system, name=fsstat, host=]
2021-01-08T13:02:42.659+0100    DEBUG   [system.fsstat] fsstat/fsstat.go:87     filesystem: /var/lib/lxcfs total=0, used=0, free=0
2021-01-08T13:02:42.659+0100    DEBUG   [system.fsstat] fsstat/fsstat.go:87     filesystem: / total=312201752576, used=60525809664, free=251675942912
2021-01-08T13:02:42.650+0100    DEBUG   [module]        module/wrapper.go:189   Starting metricSetWrapper[module=system, name=uptime, host=]
2021-01-08T13:02:42.660+0100    DEBUG   [publisher]     pipeline/client.go:231  Pipeline client receives callback 'onFilteredOut' for event: {Timestamp:2021-01-08 13:02:42.650010207 +0100 CET m=+2.667654210 Meta:null Fields:{"event":{"dataset":"system.filesystem","duration":10239381,"module":"system"},"metricset":{"name":"filesystem","period":60000},"service":{"type":"system"},"system":{"filesystem":{"available":0,"device_name":"/dev/loop2","files":10809,"free":0,"free_files":0,"mount_point":"/snap/core18/1944","total":58195968,"type":"squashfs","used":{"bytes":58195968,"pct":1.000000}}}} Private:<nil> TimeSeries:true}
2021-01-08T13:02:42.660+0100    DEBUG   [publisher]     pipeline/client.go:231  Pipeline client receives callback 'onFilteredOut' for event: {Timestamp:2021-01-08 13:02:42.650010207 +0100 CET m=+2.667654210 Meta:null Fields:{"event":{"dataset":"system.filesystem","duration":10281937,"module":"system"},"metricset":{"name":"filesystem","period":60000},"service":{"type":"system"},"system":{"filesystem":{"available":0,"device_name":"/dev/loop3","files":15,"free":0,"free_files":0,"mount_point":"/snap/amazon-ssm-agent/2333","total":29491200,"type":"squashfs","used":{"bytes":29491200,"pct":1.000000}}}} Private:<nil> TimeSeries:true}
...
2021-01-08T13:02:42.661+0100    DEBUG   [publisher]     pipeline/client.go:231  Pipeline client receives callback 'onFilteredOut' for event: {Timestamp:2021-01-08 13:02:42.650010207 +0100 CET m=+2.667654210 Meta:null Fields:{"event":{"dataset":"system.filesystem","duration":10964282,"module":"system"},"metricset":{"name":"filesystem","period":60000},"service":{"type":"system"},"system":{"filesystem":{"available":0,"device_name":"/dev/loop5","files":10779,"free":0,"free_files":0,"mount_point":"/snap/core18/1932","total":58064896,"type":"squashfs","used":{"bytes":58064896,"pct":1.000000}}}} Private:<nil> TimeSeries:true}
2021-01-08T13:02:42.661+0100    DEBUG   [processors]    processing/processors.go:203    Publish event: {...}
...
2021-01-08T13:05:33.500+0100    DEBUG   [elasticsearch] elasticsearch/client.go:230     PublishEvents: 16 events have been published to elasticsearch in 69.441716ms.
2021-01-08T13:05:33.500+0100    DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [1: 0, 16]
2021-01-08T13:05:33.501+0100    DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:16
2021-01-08T13:05:33.501+0100    DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack
...
2021-01-08T13:06:52.541+0100    DEBUG   [processors]    processing/processors.go:203    Publish event: {
  "@timestamp": "2021-01-08T12:06:52.495Z",
  "@metadata": {
    "beat": "metricbeat",
    "type": "_doc",
    "version": "7.10.0"
  },
  "agent": {
    "ephemeral_id": "<id>",
    "id": "<id>",
    "name": "<name>",
    "type": "metricbeat",
    "version": "7.10.0",
    "hostname": "<host>"
  },
  "ecs": {
    "version": "1.6.0"
  },
  "service": {
    "type": "system"
  },
  "system": {
    "process": {
      "cmdline": "<db>: 10/main: <process> idle                                                         ",
      "cgroup": {
        "path": "/...",
        "cpu": {
          "path": "/...",
          "cfs": {
            "quota": {
              "us": 0
            },
            "shares": 1024,
            "period": {
              "us": 100000
            }
          },
          "rt": {
            "period": {
              "us": 0
            },
            "runtime": {
              "us": 0
            }
          },
          "stats": {...},
          "id": "<id>"
        },
        "cpuacct": {
          "percpu": {...},
          "id": "<id>",
          "path": "/...",
          "total": {
            "ns": 54382023346
          },
          "stats": {...}
        },
        "memory": {
          "kmem_tcp": {...},
            "usage": {...}
          },
          "stats": {...},
          "id": "<id>",
          "path": "/...",
          "mem": {
            "failures": 0,
            "limit": {
              "bytes": 9223372036854771712
            },
            "usage": {
              "bytes": 271331328,
              "max": {
                "bytes": 275591168
              }
            }
          },
          "memsw": {...},
          "kmem": {
            "failures": 0,
            "limit": {
              "bytes": 9223372036854771712
            },
            "usage": {...}
          }
        },
        "blkio": {...},
        "id": "<id>"
      },
      "memory": {...},
      "cpu": {
        "total": {...},
        "start_time": "2021-01-07T12:54:40.000Z"
      },
      "fd": {...},
      "state": "sleeping"
    }
  },
  "event": {
    "dataset": "system.process",
    "module": "system",
    "duration": 44376794
  },
  "host": {
    "name": "<name>"
  },
  "cloud": {
    "region": "...",
    "provider": "aws",
    "availability_zone": "<zone>",
    "account": {
      "id": "<id>"
    },
    "image": {
      "id": "ami-<id>"
    },
    "instance": {
      "id": "i-<id>"
    },
    "machine": {
      "type": "<type>"
    }
  },
  "metricset": {
    "period": 10000,
    "name": "process"
  },
  "process": {...},
  "user": {
    "name": "<name>"
  }
}
...
2021-01-08T13:06:13.484+0100    DEBUG   [elasticsearch] elasticsearch/client.go:230     PublishEvents: 16 events have been published to elasticsearch in 52.976221ms.
2021-01-08T13:06:13.484+0100    DEBUG   [publisher]     memqueue/ackloop.go:160 ackloop: receive ack [5: 0, 16]
2021-01-08T13:06:13.484+0100    DEBUG   [publisher]     memqueue/ackloop.go:128 ackloop: return ack to broker loop:16
2021-01-08T13:06:13.484+0100    DEBUG   [publisher]     memqueue/ackloop.go:131 ackloop:  done send ack

and it keeps going like that.

Yes. On Elastic Cloud.

It seems there could be a connection problem from Metricbeat to Elasticsearch.
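One quick way to check that theory (a sketch, assuming the package install paths used in this thread) is Metricbeat's built-in connectivity test, which exercises the TLS handshake, authentication, and version check against the configured output:

```shell
# Validate the configuration file itself
sudo metricbeat test config -c /etc/metricbeat/metricbeat.yml
# Test the connection to the configured Elasticsearch output
sudo metricbeat test output -c /etc/metricbeat/metricbeat.yml
```

If test output reports "talk to server... OK", the output side is fine and the problem is more likely in the module.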

This is the output from sudo service metricbeat status after starting metricbeat with the new log level (debug):

Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.031+0100        DEBUG        [esclientleg]        eslegclient/connection.go:290        ES Ping(url=https://localhost:9200)
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.142+0100        DEBUG        [esclientleg]        eslegclient/connection.go:313        Ping status code: 200
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.142+0100        INFO        [esclientleg]        eslegclient/connection.go:314        Attempting to connect to Elasticsearch version 7.10.0
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.142+0100        DEBUG        [esclientleg]        eslegclient/connection.go:364        GET https://localhost:9200/_license?human=false  <nil>
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.228+0100        DEBUG        [license]        licenser/check.go:31        Checking that license covers %sBasic
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.228+0100        INFO        [license]        licenser/es_callback.go:51        Elasticsearch license: Basic
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.228+0100        DEBUG        [esclientleg]        eslegclient/connection.go:290        ES Ping(url=https://localhost:9200)
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.236+0100        DEBUG        [esclientleg]        eslegclient/connection.go:313        Ping status code: 200
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.236+0100        INFO        [esclientleg]        eslegclient/connection.go:314        Attempting to connect to Elasticsearch version 7.10.0
Jan 09 14:29:05 <host> metricbeat[24560]: 2021-01-09T14:29:05.236+0100        DEBUG        [esclientleg]        eslegclient/connection.go:364        GET https://localhost:9200/_xpack  <nil>

The GET https://localhost:9200/_xpack <nil> in particular worries me.
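For what it's worth, that trailing <nil> is most likely just the (empty) request body that the debug logger prints after the URL, not an error. The same endpoint can be checked by hand with curl (a sketch reusing this thread's placeholders):

```shell
# Should return license/feature info with HTTP 200 if TLS and auth are fine
curl --cacert <path-to>-ca.crt -u elastic "https://localhost:9200/_xpack?pretty"
```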

This is my elasticsearch.yml:

network.host: 0.0.0.0

# security settings
xpack.security.enabled: true

# transport layer
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.certificate_authorities: ["<path-to>-ca.crt"]
xpack.security.transport.ssl.certificate: "<path-to>-elasticsearch.crt"
xpack.security.transport.ssl.key: "<path-to>-elasticsearch.key"


# This turns on SSL for the HTTP (Rest) interface
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.certificate_authorities: ["<path-to>-ca.crt"]
xpack.security.http.ssl.keystore.path: "<path-to>-http.p12"
xpack.security.http.ssl.keystore.password: "<pw>"

@Jurilz Do you see any errors? Could you describe this possible connection problem in more detail? Do you see any unsuccessful requests? Any errors in Elasticsearch's logs? Also, please make sure your configuration is not missing any pieces about Secure communication with Elasticsearch.


Elasticsearch now successfully receives data from Metricbeat. :grinning:

The data is not the default data that the Kibana dashboard expects, but I think that is a topic for another thread.

I did two things that could have made the difference:

  1. There was an old metricbeat index in elasticsearch and I deleted it manually.

  2. I added the username and password to the kibana host configuration.
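In case it helps anyone else, step 1 can be done with curl against the secured cluster (a sketch with this thread's placeholders; double-check the index name before deleting anything):

```shell
# List the metricbeat indices first
curl --cacert <path-to>-ca.crt -u elastic "https://localhost:9200/_cat/indices/metricbeat-*?v"
# Then delete the stale index by its exact name
curl -X DELETE --cacert <path-to>-ca.crt -u elastic "https://localhost:9200/<old-metricbeat-index>"
```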

my metricbeat.yml:

path.home: /usr/share/metricbeat
path.config: /etc/metricbeat
path.data: /var/lib/metricbeat
path.logs: /var/log/

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: elastic
  password: <pw>
  ssl.certificate_authorities: ["<path-to>-ca.crt"]

setup.kibana:
  host: "http://localhost:5601"
  username: elastic
  password: <pw>

Thank you again @ChrsMark and @Kaiyan_Sheng for your help.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.