Could not get Windows Performance Counters since Metricbeat 7.3.0

I use Metricbeat to send Windows Performance Counters, like processor time and private bytes, to Elasticsearch. Up to Metricbeat version 7.2.0 this runs well, but the same configuration does not work in versions 7.3.0 and 7.3.1. If I run "metricbeat.exe test modules", I get an error that the performance counter does not exist. But it does exist, and Metricbeat 7.2.0 can access this performance counter.
The error message that I get from "metricbeat.exe test modules" is:
metricbeat.exe test modules
Error getting metricbeat modules: module initialization error: 1 error: initialization of reader failed: failed to expand counter (query="\Process(bbGateService)% Processor Time"): Das angegebene Objekt wurde nicht auf dem Computer gefunden. ("The specified object was not found on the computer.")

The content of the windows.yml file:

- module: windows
  metricsets: ["perfmon"]
  period: 1s
  perfmon.counters:
# CPU
    - instance_label: "ECM_Gate2"
      instance_name: "gate2"
      measurement_label: "ecm.gate2.gateservice.cpu"
      query: '\Process(bbGateService)\% Processor Time'
      format: "float"
# Memory
    - instance_label: "ECM_Gate2"
      instance_name: "gate2"
      measurement_label: "ecm.gate2.gateservice.memory"
      query: '\Process(bbGateService)\Private Bytes'
      format: "float"

Hi @jbeyer,

The configuration in your windows.yml file seems to be correct. Can you provide more details on the operating system version and processor type so that we can test this scenario?
Also, it is strange that the query value in the error is missing a backslash \ before the percent sign. Might not be related.
: initialization of reader failed: failed to expand counter (query="\Process(bbGateService)% Processor Time")

The missing backslash was stripped by the forum; in the original, the error message includes the backslash before the % sign.
Here are the details of the machine: it is a bare-metal machine with Windows Server 2016 (version 1607, build 14393.3181), 2x Intel Xeon CPU E5-2620 2.00 GHz and 32 GB RAM.
On another machine with the same OS version and 2x Intel Xeon CPU E5-2620 v2 2.10 GHz + 64 GB RAM, I get the same result.

The problem also exists in Metricbeat version 7.3.2.

Hi @jbeyer, I am not able to reproduce this issue with the OS version provided and a random process running on the machine.
The error message (in German above) seems to result from the AddCounter function, which is not able to find the object on the computer ("Unable to find the specified object on the computer or in the log file.").
Can you check the following (see the sketch after this list):

  1. Whether running Get-Counter -Counter "\Process(bbGateService)\% Processor Time" from PowerShell returns any values.
  2. Whether using another process currently running on the machine in the config file returns values, or whether the same error message is returned.
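
A rough sketch of both checks in PowerShell (a suggestion only; 'notepad' is just a placeholder for any other process currently running on the machine):

# 1. Try the exact counter path from the windows.yml config
Get-Counter -Counter '\Process(bbGateService)\% Processor Time' -MaxSamples 1

# 2. Try the same counter for another process that is known to be running
#    ('notepad' is only an example instance name)
Get-Counter -Counter '\Process(notepad)\% Processor Time' -MaxSamples 1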

You mention that with an earlier version of Metricbeat you are able to get results; are there any differences in how the two versions are set up? (Are they running under the same user, and are they both installed as a service?)
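
If it helps with the comparison, the account and state of the installed service can be checked like this (a sketch; 'metricbeat' is the default service name created by the install script and may differ on your machine):

# Show which account the Metricbeat service runs under and whether it is running
Get-CimInstance Win32_Service -Filter "Name='metricbeat'" |
    Select-Object Name, StartName, State, PathName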

Hi MarianaD, sorry, I forgot to tell you that I use a German version of Windows Server 2016. Maybe it is a problem with the localised names of the default performance counters.
If I run the PowerShell command Get-Counter -Counter "\Process(bbGateService)\% Processor Time", it tells me "object not found" (although Metricbeat 7.2 finds this performance counter).
If I run PowerShell with the localised object, Get-Counter -Counter "\Prozess(bbGateService)\Prozessorzeit (%)", then the object is found (for PowerShell).

Timestamp                 CounterSamples
---------                 --------------
26.09.2019 11:23:56       \\dev2\prozess(bbgateservice)\prozessorzeit (%) :
                          425,364969999959
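
For anyone else on a localised Windows, the exact localised counter path can also be discovered from PowerShell, for example (a sketch; 'Prozess' is the German name of the 'Process' object):

# List all instance paths of the localised 'Prozess' object and pick out the service
(Get-Counter -ListSet 'Prozess').PathsWithInstances |
    Where-Object { $_ -like '*bbGateService*Prozessorzeit*' }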

Neither the English nor the localised (German) version works in Metricbeat 7.3.0 or 7.3.2.

query: '\Prozess(bbGateService)\Prozessorzeit (%)'

Custom performance counters from our own app (which are not localised), such as

query: '\ElsbethCommunicationManager:Gate(gate1)\ChangeListeningDurationAvg'

work without problems.
The problem occurs with all of the "% Processor Time" and "Private Bytes" counters that I use.
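
As background, Windows stores the English counter names under the Perflib "009" registry key and the localised names under "CurrentLanguage", both as index/name pairs. A diagnostic sketch for mapping one English name to its localised counterpart (the registry paths are the standard Perflib locations; '% Processor Time' is just the example name):

# Map an English counter name to its numeric index, then to the localised name
$en  = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\009').Counter
$loc = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\CurrentLanguage').Counter
$idx = $en[[array]::IndexOf($en, '% Processor Time') - 1]   # the index precedes the name in the list
$loc[[array]::IndexOf($loc, $idx) + 1]                      # the localised name follows the same index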

Only version 7.2 is installed as a service; version 7.3 does not run :unamused:.
I made a copy of the folder of Metricbeat 7.2.0 (which actually runs) and copied the config files from 7.3.2 into this folder, so I have the exact same configuration for both versions, and then I ran "metricbeat.exe test modules" on the command line for both versions.
For version 7.2.0 I get "OK", and for version 7.3.2 I get the error described above.

The problem still occurs in version 7.4.0!

I face the same issue with Windows 10 in German. I just wanted to test Metricbeat on it, but for all valid performance counters I get the following error:

Exiting: 1 error: initialization of reader failed: failed to add counter (query="\Prozessor(_total)\Prozessorzeit (%)"): Das angegebene Objekt wurde nicht auf dem Computer gefunden.

But PowerShell shows that it's a valid counter:

(Get-Counter -ListSet "Prozessor").Counter

\Prozessor(*)\Prozessorzeit (%)
\Prozessor(*)\Benutzerzeit (%)
\Prozessor(*)\Privilegierte Zeit (%)
\Prozessor(*)\Interrupts/s
\Prozessor(*)\DPC-Zeit (%)
\Prozessor(*)\Interruptzeit (%)
\Prozessor(*)\DPCs in Warteschlange/s
\Prozessor(*)\DPC-Rate
\Prozessor(*)\Leerlaufzeit (%)
\Prozessor(*)\% C1-Zeit
\Prozessor(*)\% C2-Zeit
\Prozessor(*)\% C3-Zeit
\Prozessor(*)\C1-Übergänge/s
\Prozessor(*)\C2-Übergänge/s
\Prozessor(*)\C3-Übergänge/s

Get-Counter -Counter "\Prozessor(*)\Prozessorzeit (%)"

Timestamp                 CounterSamples
---------                 --------------
12.10.2019 20:35:53       \\mycomputer\prozessor(_total)\prozessorzeit (%) :
                          0,449514067477685

The problem still occurs in version 7.4.1!

I am running into this problem, I think. I am using Metricbeat on Windows Server 2019 and Server 2008 R2. I have used Metricbeat v7.4.2 and get the following message:

Exiting: 1 error: initialization of reader failed: failed to expand counter (query="\Processor Information(_Total)\% Processor Time")

This is the bog-standard windows.yml that ships with Metricbeat.

Unlike the other posters, on some servers I have been able to get Metricbeat working if I manually run it first (run as administrator). Then I start the service and it works. On other servers this does not help. I cannot easily replicate the situation, and I do not know why. Once I managed to delete the logs and data folder(s) on a running instance, at which point the service stopped working. I then re-ran metricbeat manually for a minute, stopped that, restarted the service and everything worked.
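
For what it's worth, the manual-run workaround above amounts to roughly the following from an elevated PowerShell (a sketch; the install path and the service name 'metricbeat' are the defaults of the Windows install script and may differ on your servers):

# Stop the service, run Metricbeat once in the foreground, then restart the service
Stop-Service metricbeat
& 'C:\Program Files\Metricbeat\metricbeat.exe' -e -c 'C:\Program Files\Metricbeat\metricbeat.yml'
# ...let it run for a minute, stop it with Ctrl+C, then...
Start-Service metricbeat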

I will reply to this post with some (hopefully anonymous-enough) log excerpts to demonstrate the different situations.

The difference appears to be that in the working case the service successfully connects to elasticsearch, and in the other it does not. In both cases the service is aware of the right IP and port.

Based on this thread I installed v7.2.1 of Metricbeat on a server and it seems to be working flawlessly. This is a workaround but not a solution.

Here are excerpts showing the differences in the runs. Here is a run that failed:

2019-11-13T01:50:13.190-0500	INFO	instance/beat.go:422	metricbeat start running.
2019-11-13T01:50:13.190-0500	INFO	helper/privileges_windows.go:79	Metricbeat process and system info: {"OSVersion":{"Major":6,"Minor":1,"Build":7601},"Arch":"amd64","NumCPU":1,"User":{"SID":"S-1-5-18","Account":"SYSTEM","Domain":"NT AUTHORITY","Type":1},"ProcessPrivs":{"SeAssignPrimaryTokenPrivilege":{"enabled":false},"SeAuditPrivilege":{"enabled_by_default":true,"enabled":true},"SeBackupPrivilege":{"enabled":false},"SeChangeNotifyPrivilege":{"enabled_by_default":true,"enabled":true},"SeCreateGlobalPrivilege":{"enabled_by_default":true,"enabled":true},"SeCreatePagefilePrivilege":{"enabled_by_default":true,"enabled":true},"SeCreatePermanentPrivilege":{"enabled_by_default":true,"enabled":true},"SeCreateSymbolicLinkPrivilege":{"enabled_by_default":true,"enabled":true},"SeDebugPrivilege":{"enabled_by_default":true,"enabled":true},"SeImpersonatePrivilege":{"enabled_by_default":true,"enabled":true},"SeIncreaseBasePriorityPrivilege":{"enabled_by_default":true,"enabled":true},"SeIncreaseQuotaPrivilege":{"enabled":false},"SeIncreaseWorkingSetPrivilege":{"enabled_by_default":true,"enabled":true},"SeLoadDriverPrivilege":{"enabled":false},"SeLockMemoryPrivilege":{"enabled_by_default":true,"enabled":true},"SeManageVolumePrivilege":{"enabled":false},"SeProfileSingleProcessPrivilege":{"enabled_by_default":true,"enabled":true},"SeRestorePrivilege":{"enabled":false},"SeSecurityPrivilege":{"enabled":false},"SeShutdownPrivilege":{"enabled":false},"SeSystemEnvironmentPrivilege":{"enabled":false},"SeSystemProfilePrivilege":{"enabled_by_default":true,"enabled":true},"SeSystemtimePrivilege":{"enabled":false},"SeTakeOwnershipPrivilege":{"enabled":false},"SeTcbPrivilege":{"enabled_by_default":true,"enabled":true},"SeTimeZonePrivilege":{"enabled_by_default":true,"enabled":true},"SeUndockPrivilege":{"enabled":false}}}
2019-11-13T01:50:13.190-0500	INFO	helper/privileges_windows.go:87	SeDebugPrivilege is enabled. SeDebugPrivilege=(Default, Enabled)
2019-11-13T01:50:13.190-0500	WARN	[cfgwarn]	perfmon/perfmon.go:60	BETA: The perfmon metricset is beta
2019-11-13T01:50:13.439-0500	INFO	[monitoring]	log/log.go:118	Starting metrics logging every 30s
2019-11-13T01:50:15.716-0500	INFO	[monitoring]	log/log.go:153	Total non-zero metrics	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":312,"time":{"ms":312}},"total":{"ticks":624,"time":{"ms":624},"value":624},"user":{"ticks":312,"time":{"ms":312}}},"handles":{"open":357},"info":{"ephemeral_id":"f5c531e9-ffe3-46d1-b3e8-c30c4907fc32","uptime":{"ms":6061}},"memstats":{"gc_next":10999664,"memory_alloc":6158352,"memory_total":14547648,"rss":48840704},"runtime":{"goroutines":39}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":0}}},"system":{"cpu":{"cores":1}}}}}
2019-11-13T01:50:15.716-0500	INFO	[monitoring]	log/log.go:154	Uptime: 6.0771485s
2019-11-13T01:50:15.716-0500	INFO	[monitoring]	log/log.go:131	Stopping metrics logging.
2019-11-13T01:50:15.716-0500	INFO	instance/beat.go:432	metricbeat stopped.
2019-11-13T01:50:15.716-0500	ERROR	instance/beat.go:878	Exiting: 1 error: initialization of reader failed: failed to expand counter (query="\PhysicalDisk(*)\Disk Writes/sec")

Here is a run on the same machine that succeeded after some fiddling (I snipped some of the privileges_windows stuff due to character limits)

2019-11-13T02:38:52.295-0500	INFO	instance/beat.go:422	metricbeat start running.
2019-11-13T02:38:52.306-0500	INFO	helper/privileges_windows.go:79	Metricbeat process and system info: {"OSVersion":{"Major":6,"Minor":1,"Build":7601},"Arch":"amd64","NumCPU":1,"User":{"SID":"S-1-5-18","Account":"SYSTEM","Domain":"NT AUTHORITY","Type":1}, [SNIPPED]
2019-11-13T02:38:52.307-0500	INFO	helper/privileges_windows.go:87	SeDebugPrivilege is enabled. SeDebugPrivilege=(Default, Enabled)
2019-11-13T02:38:52.311-0500	WARN	[cfgwarn]	perfmon/perfmon.go:60	BETA: The perfmon metricset is beta
2019-11-13T02:38:52.322-0500	INFO	[monitoring]	log/log.go:118	Starting metrics logging every 30s
2019-11-13T02:38:53.634-0500	INFO	cfgfile/reload.go:171	Config reloader started
2019-11-13T02:38:53.643-0500	WARN	[cfgwarn]	perfmon/perfmon.go:60	BETA: The perfmon metricset is beta
2019-11-13T02:38:53.654-0500	INFO	cfgfile/reload.go:226	Loading of config files completed.
2019-11-13T02:38:55.316-0500	INFO	add_cloud_metadata/add_cloud_metadata.go:87	add_cloud_metadata: hosting provider type not detected.
2019-11-13T02:38:56.338-0500	INFO	pipeline/output.go:95	Connecting to backoff(elasticsearch(http://172.26.0.15:9200))
2019-11-13T02:38:56.355-0500	INFO	elasticsearch/client.go:743	Attempting to connect to Elasticsearch version 7.4.0
2019-11-13T02:38:56.769-0500	INFO	[index-management]	idxmgmt/std.go:252	Auto ILM enable success.
2019-11-13T02:38:56.770-0500	INFO	[index-management.ilm]	ilm/std.go:134	do not generate ilm policy: exists=true, overwrite=false
2019-11-13T02:38:56.770-0500	INFO	[index-management]	idxmgmt/std.go:265	ILM policy successfully loaded.
2019-11-13T02:38:56.770-0500	INFO	[index-management]	idxmgmt/std.go:394	Set setup.template.name to '{metricbeat-7.4.2 {now/d}-000001}' as ILM is enabled.
2019-11-13T02:38:56.770-0500	INFO	[index-management]	idxmgmt/std.go:399	Set setup.template.pattern to 'metricbeat-7.4.2-*' as ILM is enabled.
2019-11-13T02:38:56.770-0500	INFO	[index-management]	idxmgmt/std.go:433	Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.4.2 {now/d}-000001} as ILM is enabled.
2019-11-13T02:38:56.770-0500	INFO	[index-management]	idxmgmt/std.go:437	Set settings.index.lifecycle.name in template to {metricbeat-7.4.2 {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2019-11-13T02:38:56.790-0500	INFO	template/load.go:88	Template metricbeat-7.4.2 already exists and will not be overwritten.
2019-11-13T02:38:56.790-0500	INFO	[index-management]	idxmgmt/std.go:289	Loaded index template.
2019-11-13T02:38:56.807-0500	INFO	[index-management]	idxmgmt/std.go:300	Write alias successfully generated.
2019-11-13T02:38:56.807-0500	INFO	pipeline/output.go:105	Connection to backoff(elasticsearch(http://172.26.0.15:9200)) established
2019-11-13T02:39:22.543-0500	INFO	[monitoring]	log/log.go:145	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":484,"time":{"ms":484}},"total":{"ticks":859,"time":{"ms":859},"value":859},"user":{"ticks":375,"time":{"ms":375}}},"handles":{"open":270},"info":{"ephemeral_id":"4804a444-7484-4ee5-be10-094c56831ffd","uptime":{"ms":33545}},"memstats":{"gc_next":13834336,"memory_alloc":9862160,"memory_total":20386064,"rss":47484928},"runtime":{"goroutines":48}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"acked":191,"batches":8,"total":191},"read":{"bytes":6397},"type":"elasticsearch","write":{"bytes":199572}},"pipeline":{"clients":5,"events":{"active":0,"published":191,"retry":50,"total":191},"queue":{"acked":191}}},"metricbeat":{"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":1,"success":1},"fsstat":{"events":1,"success":1},"memory":{"events":3,"success":3},"network":{"events":9,"success":9},"process":{"events":27,"success":27},"process_summary":{"events":3,"success":3},"socket_summary":{"events":3,"success":3},"uptime":{"events":1,"success":1}},"windows":{"service":{"events":140,"success":140}}},"system":{"cpu":{"cores":1}}}}}
2019-11-13T02:39:51.180-0500	INFO	cfgfile/reload.go:229	Dynamic config reloader stopped
2019-11-13T02:39:51.180-0500	INFO	[reload]	cfgfile/list.go:118	Stopping 5 runners ...
2019-11-13T02:39:51.180-0500	INFO	[monitoring]	log/log.go:153	Total non-zero metrics	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":562,"time":{"ms":562}},"total":{"ticks":1046,"time":{"ms":1046},"value":1046},"user":{"ticks":484,"time":{"ms":484}}},"handles":{"open":264},"info":{"ephemeral_id":"4804a444-7484-4ee5-be10-094c56831ffd","uptime":{"ms":62000}},"memstats":{"gc_next":12361760,"memory_alloc":7810392,"memory_total":23366752,"rss":48037888},"runtime":{"goroutines":16}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"acked":241,"batches":14,"total":241},"read":{"bytes":8801},"type":"elasticsearch","write":{"bytes":254401}},"pipeline":{"clients":0,"events":{"active":0,"published":241,"retry":50,"total":241},"queue":{"acked":241}}},"metricbeat":{"system":{"cpu":{"events":6,"success":6},"filesystem":{"events":1,"success":1},"fsstat":{"events":1,"success":1},"memory":{"events":6,"success":6},"network":{"events":18,"success":18},"process":{"events":51,"success":51},"process_summary":{"events":6,"success":6},"socket_summary":{"events":6,"success":6},"uptime":{"events":1,"success":1}},"windows":{"perfmon":{"events":5,"success":5},"service":{"events":140,"success":140}}},"system":{"cpu":{"cores":1}}}}}
2019-11-13T02:39:51.180-0500	INFO	[monitoring]	log/log.go:154	Uptime: 1m2.0009766s
2019-11-13T02:39:51.180-0500	INFO	[monitoring]	log/log.go:131	Stopping metrics logging.
2019-11-13T02:39:51.180-0500	INFO	instance/beat.go:432	metricbeat stopped.

The issue still occurs in version 7.4.2!

Hi all,
I have opened a ticket for this problem:
https://github.com/elastic/beats/issues/14684

:+1:

Sorry to jump into this thread semi off-topic, but is Metricbeat supported on Windows Server 2008 R2? Looking at the compatibility matrix online, 2008 isn't listed. From a search in the forums it seems like people are running it on 2008 without any major issues. Maybe the newer versions don't support it? Even so, the compatibility matrix still doesn't say anything about the older versions.

The issue still occurs in version 7.5.0.

Hi @jbeyer, @Stefan_Sabolowitsch, a PR with a possible fix has been merged today and we are working on backporting it to upcoming releases; the PRs are linked in the main issue https://github.com/elastic/beats/issues/14684. You can follow the updates in the GitHub issue.

We are hitting a similar issue on Windows 2012, 2016 & 2019 servers with Metricbeat v7.4.2 when using perfmon counters.

initialization of reader failed: failed to expand counter (query="\APP_POOL_WAS(*)\Current Application Pool State")

As soon as we remove the wildcard star character and put a concrete value in, e.g. the number 1 or 2 or 3, it works. But this means we would have to hard-code the number of app pools per server, and we don't have a consistent number across our fleet, so that'd end up doing lots of pointless queries on some servers even when we ignore errors.
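
For reference, the app-pool instances that the wildcard would cover can at least be enumerated from PowerShell (a diagnostic sketch only; it does not change how Metricbeat expands the wildcard):

# List the per-app-pool instances behind the APP_POOL_WAS(*) wildcard
(Get-Counter -ListSet 'APP_POOL_WAS').PathsWithInstances |
    Where-Object { $_ -like '*Current Application Pool State*' }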

Is there a better workaround for this wildcard issue?

In version 7.5.1, Metricbeat now runs as expected with the localised performance counters. :+1: