Metricbeat (6.6.0) Windows module breaks on a Windows 7 server

Hi,
I am trying to pull perfmon data from a Windows 7 Ultimate machine, but it fails abruptly. When I enable debug mode and test the modules, I see the log below:

C:\Users\Administrator\Desktop\metricbeat-6.6.0-windows-x86>metricbeat.exe test modules -d "*" -v -e
2019-02-08T08:44:09.230Z        INFO    instance/beat.go:616    Home path: [C:\Users\Administrator\Desktop\metricbeat-6.6.0-windows-x86] Config path: [C:\Users\Administrator\Desktop\metricbeat-6.6.0-windows-x86] Data path: [C:\Users\Administrator\Desktop\metricbeat-6.6.0-windows-x86\data] Logs path: [C:\Users\Administrator\Desktop\metricbeat-6.6.0-windows-x86\logs]
2019-02-08T08:44:09.256Z        DEBUG   [beat]  instance/beat.go:653    Beat metadata path: C:\Users\Administrator\Desktop\metricbeat-6.6.0-windows-x86\data\meta.json
2019-02-08T08:44:09.257Z        INFO    instance/beat.go:623    Beat UUID: d7c17b11-1f19-464d-966f-ec0c72085a7d
2019-02-08T08:44:09.257Z        DEBUG   [modules]       beater/metricbeat.go:103        Register [ModuleFactory:[docker, mongodb, mysql, postgresql, system, uwsgi, windows], MetricSetFactory:[aerospike/namespace, apache/status, ceph/cluster_disk, ceph/cluster_health, ceph/cluster_status, ceph/monitor_health, ceph/osd_df, ceph/osd_tree, ceph/pool_disk, couchbase/bucket, couchbase/cluster, couchbase/node, docker/container, docker/cpu, docker/diskio, docker/healthcheck, docker/image, docker/info, docker/memory, docker/network, dropwizard/collector, elasticsearch/ccr, elasticsearch/cluster_stats, elasticsearch/index, elasticsearch/index_recovery, elasticsearch/index_summary, elasticsearch/ml_job, elasticsearch/node, elasticsearch/node_stats, elasticsearch/pending_tasks, elasticsearch/shard, envoyproxy/server, etcd/leader, etcd/self, etcd/store, golang/expvar, golang/heap, graphite/server, haproxy/info, haproxy/stat, http/json, http/server, jolokia/jmx, kafka/consumergroup, kafka/partition, kibana/stats, kibana/status, kubernetes/apiserver, kubernetes/container, kubernetes/event, kubernetes/node, kubernetes/pod, kubernetes/state_container, kubernetes/state_deployment, kubernetes/state_node, kubernetes/state_pod, kubernetes/state_replicaset, kubernetes/state_statefulset, kubernetes/system, kubernetes/volume, kvm/dommemstat, logstash/node, logstash/node_stats, memcached/stats, mongodb/collstats, mongodb/dbstats, mongodb/metrics, mongodb/replstatus, mongodb/status, munin/node, mysql/galera_status, mysql/status, nginx/stubstatus, php_fpm/pool, php_fpm/process, postgresql/activity, postgresql/bgwriter, postgresql/database, postgresql/statement, prometheus/collector, prometheus/stats, rabbitmq/connection, rabbitmq/exchange, rabbitmq/node, rabbitmq/queue, redis/info, redis/keyspace, system/core, system/cpu, system/diskio, system/filesystem, system/fsstat, system/memory, system/network, system/process, system/process_summary, system/raid, system/socket_summary, system/uptime, traefik/health, uwsgi/status, vsphere/datastore, vsphere/host, vsphere/virtualmachine, windows/perfmon, windows/service, zookeeper/mntr]]
2019-02-08T08:44:09.261Z        DEBUG   [cfgfile]       cfgfile/cfgfile.go:177  Load config from file: C:\Users\Administrator\Desktop\metricbeat-6.6.0-windows-x86\modules.d\windows.yml
2019-02-08T08:44:09.267Z        INFO    helper/privileges_windows.go:79 Metricbeat process and system info: {"OSVersion":{"Major":6,"Minor":1,"Build":7601},"Arch":"386","NumCPU":2,"User":{"SID":"S-1-5-21-2109222410-2544541688-253984821-500","Account":"Administrator","Domain":"TestServer","Type":1},"ProcessPrivs":{"SeBackupPrivilege":{"enabled":false},"SeChangeNotifyPrivilege":{"enabled_by_default":true,"enabled":true},"SeCreateGlobalPrivilege":{"enabled_by_default":true,"enabled":true},"SeCreatePagefilePrivilege":{"enabled":false},"SeCreateSymbolicLinkPrivilege":{"enabled":false},"SeDebugPrivilege":{"enabled":false},"SeImpersonatePrivilege":{"enabled_by_default":true,"enabled":true},"SeIncreaseBasePriorityPrivilege":{"enabled":false},"SeIncreaseQuotaPrivilege":{"enabled":false},"SeIncreaseWorkingSetPrivilege":{"enabled":false},"SeLoadDriverPrivilege":{"enabled":false},"SeManageVolumePrivilege":{"enabled":false},"SeProfileSingleProcessPrivilege":{"enabled":false},"SeRemoteShutdownPrivilege":{"enabled":false},"SeRestorePrivilege":{"enabled":false},"SeSecurityPrivilege":{"enabled":false},"SeShutdownPrivilege":{"enabled":false},"SeSystemEnvironmentPrivilege":{"enabled":false},"SeSystemProfilePrivilege":{"enabled":false},"SeSystemtimePrivilege":{"enabled":false},"SeTakeOwnershipPrivilege":{"enabled":false},"SeTimeZonePrivilege":{"enabled":false},"SeUndockPrivilege":{"enabled":false}}}
2019-02-08T08:44:09.272Z        INFO    helper/privileges_windows.go:111        SeDebugPrivilege is now enabled. SeDebugPrivilege=(Enabled)
2019-02-08T08:44:09.272Z        WARN    [cfgwarn]       perfmon/perfmon.go:59   BETA: The perfmon metricset is beta
windows...
  perfmon...2019-02-08T08:44:10.419Z    DEBUG   [module]        module/wrapper.go:179   Starting metricSetWrapper[module=windows, name=perfmon, host=]
2019-02-08T08:44:10.420Z        DEBUG   [perfmon]       perfmon/pdh_windows.go:397      Ignoring the first measurement because the data isn't ready     {"error": "The returned data is not valid.", "perfmon": {"query": "\\PhysicalDisk(*)\\% Disk Write Time"}}
2019-02-08T08:44:10.421Z        DEBUG   [perfmon]       perfmon/pdh_windows.go:397      Ignoring the first measurement because the data isn't ready     {"error": "The data is not valid.", "perfmon": {"query": "\\Processor Information(_Total)\\% Processor Time"}}
2019-02-08T08:44:10.421Z        DEBUG   [perfmon]       perfmon/pdh_windows.go:397      Ignoring the first measurement because the data isn't ready     {"error": "The returned data is not valid.", "perfmon": {"query": "\\PhysicalDisk(*)\\Disk Writes/sec"}}

    error... ERROR timeout waiting for an event
2019-02-08T08:44:15.427Z        DEBUG   [module]        module/wrapper.go:202   Stopped metricSetWrapper[module=windows, name=perfmon, host=]

When I execute metricbeat.exe -c metricbeat.yml, I get the following error:

2019-02-08T08:47:41.093Z	ERROR	runtime/panic.go:35	recovered from panic while fetching 'windows/perfmon' for host ''. Recovering, but please report this.	{"panic": "runtime error: slice bounds out of range", "stack": "github.com/elastic/beats/libbeat/logp.Recover\n\t/go/src/github.com/elastic/beats/libbeat/logp/global.go:105\nruntime.call16\n\t/usr/local/go/src/runtime/asm_386.s:629\nruntime.gopanic\n\t/usr/local/go/src/runtime/panic.go:502\nruntime.panicslice\n\t/usr/local/go/src/runtime/panic.go:35\ngithub.com/elastic/beats/metricbeat/module/windows/perfmon.PdhGetFormattedCounterArray\n\t/go/src/github.com/elastic/beats/metricbeat/module/windows/perfmon/pdh_windows.go:151\ngithub.com/elastic/beats/metricbeat/module/windows/perfmon.(*Query).Values\n\t/go/src/github.com/elastic/beats/metricbeat/module/windows/perfmon/pdh_windows.go:286\ngithub.com/elastic/beats/metricbeat/module/windows/perfmon.(*PerfmonReader).Read\n\t/go/src/github.com/elastic/beats/metricbeat/module/windows/perfmon/pdh_windows.go:386\ngithub.com/elastic/beats/metricbeat/module/windows/perfmon.(*MetricSet).Fetch\n\t/go/src/github.com/elastic/beats/metricbeat/module/windows/perfmon/perfmon.go:93\ngithub.com/elastic/beats/metricbeat/mb/module.(*metricSetWrapper).fetch\n\t/go/src/github.com/elastic/beats/metricbeat/mb/module/wrapper.go:238\ngithub.com/elastic/beats/metricbeat/mb/module.(*metricSetWrapper).startPeriodicFetching\n\t/go/src/github.com/elastic/beats/metricbeat/mb/module/wrapper.go:219\ngithub.com/elastic/beats/metricbeat/mb/module.(*metricSetWrapper).run\n\t/go/src/github.com/elastic/beats/metricbeat/mb/module/wrapper.go:196\ngithub.com/elastic/beats/metricbeat/mb/module.(*Wrapper).Start.func1\n\t/go/src/github.com/elastic/beats/metricbeat/mb/module/wrapper.go:137"}
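
For context, the stack trace points at the buffer handling in PdhGetFormattedCounterArray during a wildcard query. Below is a minimal, self-contained Go sketch (not the Beats code; readItems, itemHeaderSize, and the buffer layout are made up purely for illustration) of how slicing a PDH-style result buffer based on a reported item count, without validating it against the buffer actually returned, can raise exactly this "slice bounds out of range" panic, and the kind of guard that turns it into a clean error instead:

package main

import (
	"encoding/binary"
	"fmt"
)

// Hypothetical per-item header size for the illustration.
const itemHeaderSize = 16

// readItems mimics parsing a formatted-counter-array buffer: the API reports
// how many items it wrote, and we slice the raw bytes accordingly.
func readItems(buf []byte, itemCount int) ([]uint64, error) {
	need := itemCount * itemHeaderSize
	// The guard: validate the reported count against the bytes we actually
	// received before slicing into them.
	if need > len(buf) {
		return nil, fmt.Errorf("pdh reported %d item(s) but returned only %d bytes", itemCount, len(buf))
	}
	values := make([]uint64, 0, itemCount)
	for i := 0; i < itemCount; i++ {
		start := i * itemHeaderSize
		values = append(values, binary.LittleEndian.Uint64(buf[start:start+8]))
	}
	return values, nil
}

func main() {
	// A mismatch between the reported count and the returned buffer: with the
	// guard this is a clean error; without it, buf[0:8] on an empty buffer
	// would panic with "slice bounds out of range".
	if _, err := readItems(nil, 1); err != nil {
		fmt.Println("error:", err)
	}
}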

Hi @adrisr, @maddin2016 - any help on this? Thanks!

Just letting you know: this happens when we use the wildcard counter instance (*); if we query an individual instance, it works fine.

It fails with the query below:

- module: windows
  metricsets: [perfmon]
  period: 5s
  perfmon.ignore_non_existent_counters: true
  perfmon.counters:
    - instance_label: paging_file_usage
      measurement_label: paging.file.usage
      query: '\Paging File(*)\% Usage'

It works fine with the query below:

- module: windows
  metricsets: [perfmon]
  period: 5s
  perfmon.ignore_non_existent_counters: true
  perfmon.counters:
    - instance_label: paging_file_usage
      measurement_label: paging.file.usage
      query: '\Paging File(_Total)\% Usage'
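
Until the wildcard handling is fixed, one possible stopgap (just a sketch, untested on this host) is to list the instances explicitly, since single-instance queries work. The instance name \??\C:\pagefile.sys and the _total / _c labels below are only examples; check the actual Paging File instance names on the machine with typeperf -qx "Paging File" or in Performance Monitor:

- module: windows
  metricsets: [perfmon]
  period: 5s
  perfmon.ignore_non_existent_counters: true
  perfmon.counters:
    # _Total works, per the report above.
    - instance_label: paging_file_usage_total
      measurement_label: paging.file.usage.total
      query: '\Paging File(_Total)\% Usage'
    # Replace the instance name with one that actually exists on this host.
    - instance_label: paging_file_usage_c
      measurement_label: paging.file.usage.c
      query: '\Paging File(\??\C:\pagefile.sys)\% Usage'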

This looks like a bug. Is it the same issue as the one filed here: https://github.com/elastic/beats/issues/10660? (Is it the same person? :slight_smile: )
